
KEY ARTICLES

Refuting Evolution 2 - Argument: Common design points to common ancestry ... 4
Genetics and creation demographic events ... 5
Evolutionists abandon the idea of 99% DNA similarity between humans and chimps ... 6
DNA: marvellous messages or mostly mess? ... 6

DO GENE DUPLICATION AND POLYPLOIDY PROVIDE A MECHANISM FOR EVOLUTION?

Do new functions arise by gene duplication? ... 9
Does gene duplication provide the engine for evolution? ... 14
Dawkins and the origin of genetic information ... 17

WHAT ABOUT JUNK DNA?

Junk DNA: evolutionary discards or God's tools? ... 18
The slow, painful death of junk DNA ... 26
No joy for junkies ... 27
Large scale function for endogenous retroviruses ... 28
Hox (homeobox) Genes: Evolution's Saviour? ... 29
Hox Hype ... 29

HOW DOES GENETICS POINT TO DESIGN?

Cell systems: what's really under the hood continues to drop jaws ... 30
Meta-information ... 32
Splicing and dicing the human genome ... 33
Genetics: no friend of evolution ... 34
Astonishing DNA complexity uncovered ... 36
Astonishing DNA complexity update ... 37
Evidence for the design of life: part 1 - Genetic redundancy ... 37
The design of life: part 3 - an introduction to variation-inducing genetic elements ... 40
The design of life: part 4 - variation-inducing genetic elements and their function ... 45

INFORMATION THEORY

Refuting Evolution - Chapter 9: Is the design explanation legitimate? ... 50
Scientific laws of information and their implications, part 1 ... 52
Implications of the scientific laws of information, part 2 ... 57
Variation, information and the created kind ... 60

ARE THERE EVOLUTIONARY PROCESSES THAT LEAD TO INFORMATION INCREASE?

Bears across the world ... 63
Was Dawkins Stumped? ... 65
The adaptation of bacteria to feeding on nylon waste ... 67
New plant colours: is this new information? ... 68
Is antibiotic resistance really due to increase in information? ... 69


WHAT IS THE DIFFERENCE BETWEEN ORDER AND COMPLEXITY?

The treasures of the snow ... 71

HOW DOES INFORMATION THEORY SUPPORT CREATION?

Information, science and biology ... 73
The marvellous message molecule ... 78
More or less information? / Has a recent experiment proved creation? ... 79
Life's irreducible structure - Part 1: autopoiesis ... 81


DNA INFORMATION

Information Theory, part 1: overview of key ideas ... 85
Information Theory, part 2: weaknesses in current conceptual frameworks ... 89
Information Theory, part 3: introduction to Coded Information Systems ... 93
Information Theory, part 4: fundamental theorems of Coded Information Systems Theory ... 96
Genetic code optimisation: Part 1 ... 101
Evidence for the design of life: part 1 - Genetic redundancy ... 108
Evidence for the design of life: part 2 - Baranomes ... 111
The design of life: part 3 - an introduction to variation-inducing genetic elements ... 116
The design of life: part 4 - variation-inducing genetic elements and their function ... 121
And then there was life ... 126
Cell systems: what's really under the hood continues to drop jaws ... 126
Transposon amplification in rapid intrabaraminic diversification ... 128
Myriad mechanisms of gene regulation ... 133
More marvellous machinery: DNA scrunching ... 134
The genetic puppeteer ... 135

MUTATIONS

Can mutations create new information? ... 137
Refuting Evolution 2 - Argument: Some mutations are beneficial ... 141
Beetle bloopers ... 142
The evolution train's a-comin' ... 143
Ancon sheep: just another loss mutation ... 144
Bacteria evolving in the lab? ... 145
A cat with four ears (not nine lives) ... 146
Sickle-cell anemia does not prove evolution! ... 147
Evolution of a new master race? ... 148
The 'werewolf' gene ... 148
Evolution in a Petri dish? ... 149
Breathtaking new frog surprise ... 149

CAN MUTATION BE THE MECHANISM FOR EVOLUTION?

Hox (homeobox) Genes: Evolution's Saviour? ... 150
Gain-of-function mutations: at a loss to explain molecules-to-man evolution ... 150
Are gain of function mutations really downhill and so not supporting of evolution? ... 152

ARE MUTATIONS EVER BENEFICIAL?


CCR5-delta32: a very beneficial mutation ... 153

The mutant feather-duster budgie ... 154
Lost World of Mutants discovered ... 154
New eyes for blind cave fish? ... 155
Christopher Hitchens: blind to salamander reality ... 156
Can't drink milk? You're normal! ... 158
At last, a good mutation? ... 159
A-I Milano mutation: evidence for evolution? ... 159
Special tools of life ... 160
Mutations: evolution's engine becomes evolution's end! ... 162
Meiotic recombination: designed for inducing genomic change ... 166
Teenage mutant ninja people ... 168
Critic ignores reality of Genetic Entropy ... 169
Genetic entropy and simple organisms ... 172
The diminishing returns of beneficial mutations ... 174
Pesticide resistance is not evidence of evolution ... 175

KEY ARTICLES
Refuting Evolution 2
A sequel to Refuting Evolution that refutes the latest arguments to support evolution (as presented by PBS and Scientific
American).
by Jonathan Sarfati, Ph.D. with Michael Matthews
Argument: Common design points to common ancestry
Evolutionists say, "Studies have found amazing similarities in DNA and biological systems - solid evidence that life on earth has a common ancestor."
Common structures = common ancestry?
In most arguments for evolution, the debater assumes that common physical features, such as five fingers on apes and humans, point to a common ancestor in the distant past. Darwin mocked the idea (proposed by Richard Owen in the PBS dramatization of his encounter with Darwin) that common structures (homologies) were due to a common designer rather than a common ancestor. But the common Designer explanation makes much more sense of the findings of modern geneticists, who have discovered just how different the genetic blueprint can be behind many apparent similarities in the anatomical structures that Darwin saw. Genes are inherited, not structures per se. So one would expect the similarities, if they were the result of evolutionary common ancestry, to be produced by a common genetic program (this may or may not be the case for common design). But in many cases, this is clearly not so. Consider the example of the five digits of both frogs and humans: the human embryo develops a ridge at the limb tip, then material between the digits dissolves; in frogs, the digits grow outward from buds (see diagram below). This argues strongly against the common-ancestry evolutionary explanation for the similarity.
Development of human and frog digits
Stylized diagram showing the difference in developmental patterns of frog and human digits.
Left: In humans, programmed cell death (apoptosis) divides the ridge into five regions that then develop into digits (fingers and toes). [From T.W. Sadler, editor, Langman's Medical Embryology, 7th ed. (Baltimore, MD: Williams and Wilkins, 1995), pp. 154-157.]
Right: In frogs, the digits grow outward from buds as cells divide. [From M.J. Tyler, Australian Frogs: A Natural History (Sydney, Australia: Reed New Holland, 1999), p. 80.]

The PBS program and other evolutionary propagandists claim that the DNA code is universal, and proof of a common ancestor. But this is false: there are exceptions, some known since the 1970s, not only in mitochondrial but also nuclear DNA sequencing. An example is Paramecium, where a few of the 64 codons code for different amino acids. More examples are being found constantly.1 The Discovery Institute has pointed out this clear factual error in the PBS program.2 Also, some organisms code for one or two extra amino acids beyond the main 20 types.3 The reaction by the PBS spokeswoman, Eugenie Scott, showed how the evolutionary establishment is more concerned with promoting evolution than scientific accuracy. Instead of conceding that the PBS show was wrong, she attacked the messengers, citing statements calling their (correct!) claim "so bizarre as to be almost beyond belief." Then she even implicitly conceded the truth of the claim by citing this explanation: "Those exceptions, however, are known to have derived from organisms that had the standard code." To paraphrase: it was wrong to point out that there really are exceptions, even though it's true; and it was right for PBS to imply something that wasn't true, because we can explain why it's not always true. But assuming the truth of Darwinism as evidence for their explanation is begging the question. There is no experimental evidence, since we lack the DNA code of these alleged ancestors. There is also the theoretical problem that if we change the code, then the wrong proteins would be made, and the organism would die; so once a code is settled on, we're stuck with it. The Discovery Institute also demonstrated the illogic of Scott's claim.4 Certainly most of the code is universal, but this is best explained by common design. Of all the millions of genetic codes possible, ours, or something almost like it, is optimal for protecting against errors.5 But the exceptions thwart evolutionary explanations.
DNA comparisons: subject to interpretation
Scientific American repeats the common argument that DNA comparisons help scientists to reconstruct the evolutionary development of organisms: "Macroevolution studies how taxonomic groups above the level of species change. Its evidence draws frequently from the fossil record and DNA comparisons to reconstruct how various organisms may be related." [SA 80]
DNA comparisons are just a subset of the homology argument, which makes just as much sense in a young-age framework. A common Designer is another interpretation that makes sense of the same data. An architect commonly uses the same building material for different buildings, and a car maker commonly uses the same parts in different cars. So we shouldn't be surprised if a Designer for life used the same biochemistry and structures in many different creatures. Conversely, if all living organisms were totally different, this might look like there were many designers instead of one. Since DNA codes for structures and biochemical molecules, we should expect the most similar creatures to have the most similar DNA. Apes and humans are both mammals, with similar shapes, so both have similar DNA. We should expect humans to have more DNA similarities with another mammal like a pig than with a reptile like a rattlesnake. And this is so. Humans are very different from yeast, but they have some biochemistry in common, so we should expect human DNA to differ more from yeast DNA than from ape DNA. So the general pattern of similarities need not be explained by common ancestry (evolution). Furthermore, there are some puzzling anomalies for an evolutionary explanation: similarities between organisms that evolutionists don't believe are closely related. For example, hemoglobin, the complex molecule that carries oxygen in blood and results in its red color, is found in vertebrates. But it is also found in some earthworms, starfish, crustaceans, mollusks, and even in some bacteria. An antigen receptor protein has the same unusual single-chain structure in camels and nurse sharks, but this cannot be explained by a common ancestor of sharks and camels.6 And there are many other examples of similarities that cannot be due to evolution.
Debunking the molecular clock
Scientific American repeats the common canard that DNA gives us a "molecular clock" that tells us the history of DNA's evolution from the simplest life form to mankind: "Nevertheless, evolutionists can cite further supportive evidence from molecular biology. All organisms share most of the same genes, but as evolution predicts, the structures of these genes and their products diverge among species, in keeping with their evolutionary relationships. Geneticists speak of the molecular clock that records the passage of time. These molecular data also show how various organisms are transitional within evolution." [SA 83]
Actually, the molecular clock has many problems for the evolutionist. Not only are there the anomalies and common Designer arguments I mentioned above, but the data actually support a creation of distinct types within ordered groups, not continuous evolution, as non-creationist microbiologist Dr Michael Denton pointed out in Evolution: A Theory in Crisis. For example, when comparing the amino acid sequence of cytochrome C of a bacterium (a prokaryote) with such widely diverse eukaryotes as yeast, wheat, silkmoth, pigeon, and horse, all of these have practically the same percentage difference from the bacterium (64-69%). There is no intermediate cytochrome between prokaryotes and eukaryotes, and no hint that a 'higher' organism such as a horse has diverged more than a 'lower' organism such as the yeast. The same sort of pattern is observed when comparing cytochrome C of the invertebrate silkmoth with the vertebrates lamprey, carp, turtle, pigeon, and horse. All the vertebrates are equally divergent from the silkmoth (27-30%). Yet again, comparing globins of a lamprey (a primitive cyclostome or jawless fish) with a carp, frog, chicken, kangaroo, and human, they are all about equidistant (73-81%). Cytochrome Cs compared between a carp and a bullfrog, turtle, chicken, rabbit, and horse yield a constant difference of 13-14%. There is no trace of any transitional series of cyclostome → fish → amphibian → reptile → mammal or bird. Another problem for evolutionists is how the molecular clock could have ticked so evenly in any given protein in so many different organisms (despite some anomalies discussed earlier which present even more problems). For this to work, there must be a constant mutation rate per unit time over most types of organism. But observations show that there is a constant mutation rate per generation, so it should be much faster for organisms with a fast generation time, such as bacteria, and much slower for elephants. In insects, generation times range from weeks in flies to many years in cicadas, and yet there is no evidence that flies are more diverged than cicadas. So the evidence is against the theory that the observed patterns are due to mutations accumulating over time as life evolved.
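To make the generation-time point concrete, here is a minimal sketch of how expected divergence scales when mutations accumulate per generation rather than per year. All parameters are illustrative assumptions of this sketch, not figures from the article.

```python
# Toy molecular-clock comparison: divergence accumulated over the same
# number of years by organisms with very different generation times.
# All numbers below are illustrative assumptions, not measured values.

MUTATION_RATE_PER_SITE_PER_GENERATION = 1e-8  # assumed, same for every species
YEARS = 100_000                               # elapsed time being compared

generation_time_years = {
    "bacterium-like (fast)": 0.01,   # a few days per generation (assumed)
    "fly-like": 0.05,
    "elephant-like (slow)": 25.0,
}

for name, gen_time in generation_time_years.items():
    generations = YEARS / gen_time
    # Expected substitutions per site if mutations accumulate per generation
    divergence = generations * MUTATION_RATE_PER_SITE_PER_GENERATION
    print(f"{name:25s} generations={generations:12.0f}  divergence={divergence:.5f}")

# A strict per-generation clock predicts the fast-breeding lineage should be
# orders of magnitude more diverged over the same period, which is the
# tension with the 'equal divergence' patterns described in the text.
```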
Genetics and creation demographic events
by C.W. Nelson
With the relatively recent mapping of the human genome,1 new questions can be raised concerning potential genetic evidence for creation events (specifically demographic events; that is, events affecting population) such as Creation and the global Flood. Evidence for a 'Mitochondrial Eve'2,3 suggests that the historical record of one man and one woman at the beginning might be accurate, and this idea has already been discussed in the context of creation.4 When actual measured mutation rates are used with the mitochondrial DNA data, the time frame for Mitochondrial Eve reduces to fit with the biblical Eve.5,6 Single nucleotide polymorphisms and linkage disequilibrium also provide relevant data concerning past populations, and could serve as quite objective evidence for such demographic events as a global flood, for instance. I outline a number of research findings and ideas here.
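The 'measured mutation rate' argument is, at bottom, a simple division; the sketch below shows the form of that calculation. Every number in it (region length, observed diversity, both rates) is an openly assumed placeholder, not a value taken from the article's references.

```python
# Rough coalescence-style estimate: time back to a single maternal ancestor
# is roughly (observed pairwise mtDNA differences) / (2 x mutation rate).
# Every number here is a placeholder assumption for illustration only.

CONTROL_REGION_SITES = 610            # assumed length of the region compared
AVG_PAIRWISE_DIFFERENCES = 8.0        # assumed average differences between two people

def years_to_common_ancestor(rate_per_site_per_year: float) -> float:
    """Time estimate under a simple star-shaped genealogy."""
    per_lineage_changes = AVG_PAIRWISE_DIFFERENCES / 2          # each lineage accrues half
    rate_per_region_per_year = rate_per_site_per_year * CONTROL_REGION_SITES
    return per_lineage_changes / rate_per_region_per_year

# A slow 'phylogenetically calibrated' rate vs. a faster directly observed rate:
slow_rate = 2e-7    # assumed substitutions/site/year
fast_rate = 4e-6    # assumed substitutions/site/year

print(f"slow assumed rate -> {years_to_common_ancestor(slow_rate):,.0f} years")
print(f"fast assumed rate -> {years_to_common_ancestor(fast_rate):,.0f} years")
# The same diversity divided by a ~20x faster rate gives a ~20x shorter
# timescale, which is the arithmetic behind the claim in the text.
```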
Genetic variation and the population bottleneck
By comparing DNA from different humans around the world, it has been found that all humans share roughly 99.9% of their genetic material; they are almost completely identical, genetically.7 This means that there is very little polymorphism, or variation. Much evidence of this genetic continuity has been found. For example, Dorit et al.8 examined a 729-base pair intron (the DNA in the genome that is not read to make proteins) from a worldwide sample of 38 human males and reported no sequence variation. "This sort of invariance likely results from either a recent selective sweep, a recent origin for modern Homo sapiens, recurrent male population bottlenecks, or historically small effective male population sizes ... any value of Q [lowest actual human sequence diversity] > 0.0011 predicts polymorphism in our sample [and yet none was found] ... The critical value for this study thus falls below most, but not all, available estimates, thus suggesting that the lack of polymorphism at ZFY [a locus, or location] is not due to chance." After citing additional evidence of low variation on the Y chromosome, they note in their last paragraph that their results are not compatible with most multiregional models for the origin of modern humans. Knight et al.9 have had similar research results: "We obtained over 55 kilobases of sequence from three autosomal loci encompassing Alu repeats for representatives of diverse human populations as well as orthologous sequences for other hominoid species at one of these loci. Nucleotide diversity was exceedingly low. Most individuals and populations were identical. Only a single nucleotide difference distinguished presumed ancestral alleles from descendants. These results differ from those expected if alleles from divergent archaic populations were maintained through multiregional continuity. The observed virtual lack of sequence polymorphism is the signature of a recent single origin for modern humans, with general replacement of archaic populations." These results are quite consistent with a recent
human origin and a global flood. Evolutionary models of origins did not predict such low
human genetic diversity. Mutations should have produced much more diversity than 0.1%
over millions of years. And yet this is exactly what we would expect to find if all humans were
closely related and experienced a relatively recent event in which only a few survived.
Research is needed to determine what variation should actually be present in the human
genomewhat would we expect within an evolutionary framework, and how does that
compare with what we find? These results could have a great impact on biological evolution,
population genetics, and could provide telling results about the age of humankind. It
could also affect the so-called molecular clock. Another piece of evidence involves single
nucleotide polymorphisms (hereafter SNPs), which are mutations common to the human
genome (meaning that many humans share them), being present in the human population at
a frequency of roughly 1%.7 These provide great insight into both medical research and
population genetics. Many humans share large blocks of SNPs (called haplotypes),
suggesting that all humans could have descended from a relatively recent demographic
event.
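As a reasoning aid for the ZFY invariance discussed above, here is a minimal sketch using Watterson's standard relationship between diversity and the expected number of variable sites. The sample size and locus length come from the passage; treating the quoted 0.0011 as a per-site diversity value is this sketch's assumption, and no claim is made that this matches the original authors' exact calculation.

```python
# Expected number of variable (segregating) sites in a sample, using
# Watterson's relationship E[S] = theta_per_site * L * sum_{i=1}^{n-1} 1/i.
# Sample size (38 males) and locus length (729 bp) come from the passage;
# interpreting 0.0011 as a per-site diversity value is an assumption here.

THETA_PER_SITE = 0.0011
LOCUS_LENGTH = 729
SAMPLE_SIZE = 38

harmonic = sum(1.0 / i for i in range(1, SAMPLE_SIZE))
expected_segregating_sites = THETA_PER_SITE * LOCUS_LENGTH * harmonic

print(f"Expected variable sites in the sample: {expected_segregating_sites:.1f}")
# Roughly 3-4 variable sites would be expected, yet the study reported none,
# which is why the authors argue the invariance is unlikely to be chance.
```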
Linkage disequilibrium (or LD) supports the same conclusion. Genes are located on
chromosomes in cells, and these genes may either be far away or close to each other. All of
the genes that are located on one chromosome are said to be linked. When cells divide
through meiosis, crossing over (or genetic recombination) often occurs. This involves two
chromosomes aligning and swapping segments of DNA, resulting in genes getting shuffled
around. The closer two genes are together, the more likely that they will be inherited together
because they are close, it is unlikely that they will be separated during crossing
over. When this holds true, genes are said to be in linkage disequilibriuma state where
they are not thoroughly mixed, but tend to be inherited together.10 Likewise, if
genes are thoroughly mixed, they are in equilibrium. LD has provided much evidence for a
population bottleneck, because humans contain long-range LD, or LD that extends quite far
in the genome, meaning that many genes tend to be inherited together. This type of
evidence has been found in Northern Europe, for example. In fact, data gathered by
Reich et al.,11 suggests that in general, blocks of LD are large in humans, because many
genes are closely associated. The explanation of this can have significant implications:
Crossing over (or genetic recombination) during meiosis results in shuffled genes.
"Why does LD extend so far? LD around an allele [or variant form of a gene] arises because of selection or population history - a small population size, genetic drift or population mixture - and decays owing to recombination [crossing over], which breaks down ancestral haplotypes [blocks of SNPs]. The extent of LD decreases in proportion to the number of generations since the LD-generating event. The simplest explanation for the observed long-range LD [such as what we find in humans] is that the population under study experienced an extreme founder effect or bottleneck: a period when the population was so small that a few ancestral haplotypes gave rise to most of the haplotypes that exist today."11 This study concluded with the possibility that 50 individuals may have founded the entire population of
Europe. This evidence is also quite consistent with a historical global flood. Research is needed on the implications of this
data for the flood. Certainly, humankind has undergone a relatively recent (tens of thousands of years at most, within an
evolutionary time frame) population bottleneck. However, it must be further investigated as to the proportionality of
evolutionary dates to the creation model, and as to how the molecular clock can be adequately explained in such a context.
Data aiding this understanding has already been published.5 We should also seek to understand genetic evidence in the context of the tower of Babel event. Evidence exists that, after the bottleneck, "the [human] population rebounded in a series of separate, rapid expansions on different continents."12
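As a companion to the linkage-disequilibrium argument above, here is a minimal sketch of how LD between two loci decays by a factor of (1 - r) each generation, so that long-range LD implies relatively few generations since the founding event. The parameters are assumed for illustration, not taken from the cited studies.

```python
# Toy linkage-disequilibrium decay: D_t = D_0 * (1 - r)^t,
# where r is the recombination fraction between two loci per generation.
# Parameters are illustrative assumptions, not values from the cited studies.

def ld_after_generations(d0: float, recomb_fraction: float, generations: int) -> float:
    """Remaining disequilibrium after a number of generations."""
    return d0 * (1.0 - recomb_fraction) ** generations

D0 = 0.25            # assumed initial disequilibrium right after a bottleneck
r_close = 0.001      # tightly linked loci (assumed)
r_far = 0.05         # loci far apart on the chromosome (assumed)

for generations in (50, 500, 5000):
    close = ld_after_generations(D0, r_close, generations)
    far = ld_after_generations(D0, r_far, generations)
    print(f"t={generations:5d}  D(close)={close:.4f}  D(far)={far:.6f}")

# Long-range LD (larger r) erodes quickly, so finding it genome-wide suggests
# relatively few generations since a founder event - the inference the text
# attributes to Reich et al.
```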

Evolutionists abandon the idea of 99% DNA similarity between humans and chimps
by Daniel Anderson
In a recent Science article, several evolutionary scientists openly admitted that the claim of 99% DNA similarity between humans and chimpanzees is a myth.1 Since 1975, this misleading statistic has been touted (e.g., see box) as clear-cut evidence that humans and chimps are closely related on the evolutionary tree of life.2 However, more and more genetic research has revealed that the percentage of DNA similarity has been vastly overstated.
Revealing quotes
Author Jon Cohen wrote, "But truth be told, the 1% difference wasn't the whole story." Cohen also wrote about recent studies raising the question of whether the 1% truism should be retired. UCSD zoologist Pascal Gagneux said, "For many, many years, the 1% difference served us well because it was underappreciated how similar we were. Now it's totally clear that it's more a hindrance for understanding than a help." Svante Pääbo, after admitting he didn't think there was any way to actually calculate a precise percentage difference, said, "In the end, it's a political and social and cultural thing about how we see our differences." In other words, as creationists have long stated, scientific interpretations are often driven by philosophical presuppositions.
More recent studies highlight greater genetic differences
Last year, a study of gene copy numbers revealed a 6.4% difference.3 In 2005, scientists discovered that the chimpanzee genome was 12% larger than the human genome. In 2003, scientists calculated a 13.3% difference in sections of our immune systems.4 One study has even revealed a 17.4% difference in gene expression in the cerebral cortex.5 Creation geneticist Dr Rob Carter recently stated on the USA-nationally syndicated Janet Parshall Show that our genomes are at least 8-12% different.
Another icon falls by the wayside
Just this year, we have witnessed the downfall of two icons of evolution. Not only has the idea of 99% genetic similarity between humans and chimps been abandoned, but also the myth of so-called 'junk DNA' has been debunked; see Astonishing DNA complexity uncovered and Astonishing DNA complexity update. As has so often been observed since Darwin published his Origin of Species, evolutionary icons eventually collapse under the weight of the empirical data.
DNA: marvellous messages or mostly mess?
by Jonathan Sarfati
2003 is the 50th anniversary of the discovery of the double helix structure of DNA. Its discoverers, James Watson, Francis Crick and Maurice Wilkins, won the Nobel Prize in Physiology or Medicine in 1962 for their discovery. [2011 update: this online version has been updated with animations and links to further amazing discoveries about the multiple codes in DNA.] The amazing design and complexity of living things provides strong evidence for a Designer.
Information technology
One aspect of this sustenance is that the recipe for all these structures was programmed on the famous double-helix molecule DNA.1 This recipe has an enormous information content, which is transmitted from one generation to the next, so that living things reproduce after their kinds. Leading atheistic evolutionist Richard Dawkins admits: "[T]here is enough information capacity in a single human cell to store the Encyclopaedia Britannica, all 30 volumes of it, three or four times over."2 Just as the Britannica had intelligent writers to produce its information, so it is reasonable and even scientific to believe that the information in the living world likewise had an original compositor/sender.3 There is no known non-intelligent cause that has
ever been observed to generate even a small portion of the literally encyclopedic information required for life.4 The genetic code (see 'The programs of life' below) is not an outcome of raw chemistry, but of elaborate decoding machinery in the ribosome. Remarkably, this decoding machinery is itself encoded in the DNA, and the noted philosopher of science Sir Karl Popper pointed out: "Thus the code can not be translated except by using certain products of its translation. This constitutes a baffling circle; a really vicious circle, it seems, for any attempt to form a model or theory of the genesis of the genetic code."5,6
So, such a system must be fully in place before it could work at all, a property called irreducible complexity. This means that it is impossible to be built by natural selection working on small changes. DNA is by far the most compact information storage system in the universe. Even the simplest known living organism has 482 protein-coding genes. This is a total of 580,000 letters;7 humans have three billion in every nucleus. (See 'The programs of life' for an explanation of the DNA letters.) The amount of information that could be stored in a pinhead's volume of DNA is equivalent to a pile of paperback books 500 times as high as the distance from Earth to the moon, each with a different, yet specific content.8 Putting it another way, while we think that our new 40 gigabyte hard drives are advanced technology, a pinhead of DNA could hold 100 million times more information. The letters of DNA have another vital property due to their structure, which allows information to be transmitted: A pairs only with T, and C only with G, due to the chemical structures of the bases; the pair is like a rung or step on a spiral staircase. This means that the two strands of the double helix can be separated, and new strands can be formed that copy the information exactly. The new strand carries the same information as the old one, but instead of being like a photocopy, it is in a sense like a photographic negative. The copying is far more precise than pure chemistry could manage: only about 1 mistake in 10 billion copyings, because there is editing (proofreading and error-checking) machinery, again encoded in the DNA. But how would the information for editing machinery be transmitted accurately before the machinery was in place? Lest it be argued that the accuracy could be achieved stepwise through selection, note that a high degree of accuracy is needed to prevent 'error catastrophe': the accumulation of noise in the form of junk proteins. Again there is a vicious circle (more irreducible complexity). Also, even the choice of the letters A, T, G and C now seems to be based on minimizing error. Evolutionists usually suppose that these letters happened to be the ones in the alleged primordial soup, but research shows that C (cytosine) is extremely unlikely to have been present in any such soup.9 Rather, Dónall Mac Dónaill of Trinity College Dublin suggests that the letter choice is like the advanced error-checking systems that are incorporated into ISBNs on books, credit card numbers, bank accounts and airline tickets. Any alternatives would suffer 'error catastrophe'.10

The unity of life
Many evolutionists claim that the DNA code is universal, and that this is proof of a common ancestor. But this is false: there are exceptions, some known since the 1970s. An example is Paramecium, where a few of the 64 (4³ or 4x4x4) possible codons code for different amino acids. More examples are being found constantly.1 Also, some organisms code for one or two extra amino acids beyond the main 20 types.2 But if one organism evolved into another with a different code, all the messages already encoded would be scrambled, just as written messages would be jumbled if typewriter keys were switched. This is a huge problem for the evolution of one code into another. Also, in our cells we have power plants called mitochondria, with their own genes. It turns out that they have a slightly different genetic code, too. Certainly most of the code is universal, but this is best explained by common design: one designer. Of all the millions of genetic codes possible, ours, or something almost like it, is optimal for protecting against errors.3 But the created exceptions thwart attempts to explain the organisms by common-ancestry evolution.
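The pairing rules (A with T, C with G) and the proofreading idea described in this section lend themselves to a small illustration. Here is a minimal sketch that builds the complementary strand and flags letters that break the pairing rules; the sequence and helper names are invented for this sketch, not taken from the article.

```python
# Minimal sketch of template copying by base pairing: A<->T, C<->G.
# The sequence below is invented for illustration.

PAIR = {"A": "T", "T": "A", "C": "G", "G": "C"}

def complement(strand: str) -> str:
    """Return the complementary strand (the 'photographic negative')."""
    return "".join(PAIR[base] for base in strand)

def check_pairing(strand: str, copy: str) -> list[int]:
    """Proofreading-style check: positions where the copy violates pairing."""
    return [i for i, (a, b) in enumerate(zip(strand, copy)) if PAIR[a] != b]

template = "GATTACAGGC"
copy = complement(template)
print(copy)                          # CTAATGTCCG
print(check_pairing(template, copy)) # [] - no mismatches

# Introduce a copying error and the check finds it:
bad_copy = copy[:4] + "A" + copy[5:]
print(check_pairing(template, bad_copy))  # [4]
```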
Introns

DNA is not read directly, but first the cell makes a negative copy in a very similar molecule called RNA,11 a process called transcription. But in all organisms other than most bacteria, there is more to transcription. This RNA, reflecting the DNA, contains regions called exons that code for proteins, and non-coding regions called introns. So the introns are removed and the exons are spliced together to form the mRNA (messenger RNA) that is finally decoded to form the protein. This also requires elaborate machinery called a spliceosome. This is assembled on the intron, chops it out at the right place and joins the exons together (see also this animation of the spliceosome machinery). This must be in the right direction and place, because, as shown above, it makes a huge difference if the exon is joined even one letter off. Thus, partly formed splicing machinery would be harmful, so natural selection would work against it. Richard Roberts and Phillip Sharp won the 1993 Nobel Prize in Physiology or Medicine for discovering introns in 1977. It turns out that 97-98% of the genome may be introns and other non-coding sequences, but this raises the question of why introns exist at all. [Update, 2011: now we know there is a splicing code; see related articles below.]
Junk DNA?
Dawkins and others have claimed that this non-coding DNA is 'junk', or 'selfish' DNA. Supposedly, no intelligent designer would use such an inefficient system, therefore it must have evolved, they argue. This parallels the 19th century claim that about a hundred vestigial organs exist in the human body,12 i.e. allegedly useless remnants of our evolutionary history.13 But more enlightened evolutionists such as Scadding pointed out that the argument is logically invalid, because it is impossible in principle to prove that an organ has no function; rather, it could have a function we don't know about. Scadding also reminds us that as our knowledge has increased, the list of vestigial structures has decreased.14,15,16 While Dawkins has often claimed that belief in a designer is a 'cop-out', it's claims of 'vestigial' or 'junk' status that are actually cop-outs. Such claims hindered research into the vital function of allegedly vestigial organs, and they do the same with non-coding DNA. Actually, even if evolution were true, the notion that the introns are useless is absurd. Why would more complex organisms evolve such elaborate machinery to splice them? Rather, natural selection would favour organisms that did not have to waste resources processing a genome filled with 98% junk. And there have been many uses discovered for so-called 'junk' DNA, such as the overall genome structure and regulation of genes. Some creationists believe that this DNA has a role in rapid post-Flood diversification of the kinds.17 Some non-coding RNAs called microRNAs (miRNAs) seem to regulate the production of proteins coded in other genes, and seem to be almost identical in humans, mice and zebrafish. The recent sequencing of the mouse genome18 surprised researchers and led to headlines such as 'Junk DNA Contains Essential Information'.19 They found that 5% of the genome was basically identical but only 2% of that was actual genes. So they reasoned that the other 3% must also be identical for a reason. The researchers believe the 3% probably has a crucial role in determining the behaviour of the actual genes, e.g. the order in which they are switched on.20 Also, damage to introns can be disastrous: in one example, deleting four letters in the centre of an intron prevented the spliceosome from binding to it, resulting in the intron being included.21 Mutations in introns also interfere with imprinting, the process by which only certain genes from the mother or father are expressed, not both. Expression of both genes results in a variety of diseases and cancers.22 Another intriguing discovery is that DNA can conduct electrical signals as far as 60 letters, enough to code for 20 amino acids. This is a typical length for molecular switches that turn on adjoining genes. Theoretically, the electrical signals could travel indefinitely. However, single or multiple pairings between A and T stop the signals; that is, they are insulators or electronic hinges in a circuit. So, although these particular regions don't code for proteins, they may protect essential genes from electrical damage from free radicals attacking a distant part of the DNA.23 So times have changed. Alexander Hüttenhofer of the University of Münster, Germany, says:
"Five or six years ago, people said we were wasting our time. Today, no one regards people studying non-coding RNA as time-wasters."24
Advanced operating system?
Dr John Mattick of the University of Queensland in Brisbane, Australia, has published a number of papers arguing that the non-coding DNA regions, or rather their non-coding RNA negatives, are important components of a complicated genetic network.25,26 These interact with each other, the DNA, mRNA and the proteins. Mattick proposes that the introns function as nodes, linking points in the network. The introns provide many extra connections, enabling what in computer terminology would be called multi-tasking and parallel processing. In organisms, this network could control the order in which genes are switched on and off. This means that a tremendous variety of multicellular life could be produced by rewiring the network. In contrast, early computers were like simple organisms, very cleverly designed [sic], but programmed for one task at a time.27 The older computers were very inflexible, requiring a complete redesign of the network to change anything. Likewise, single-celled organisms such as bacteria can also afford to be inflexible, because they don't have to develop as many-celled creatures do.

More than just a super hard drive
Actually, DNA is far more complicated than simply coding for proteins, as we are discovering all the time.1 For example, because the DNA letters are read in groups of three, it makes a huge difference which letter we start from. E.g. the sequence GTTCAACGCTGAA can be read from the first letter, GTT CAA CGC TGA A, but a totally different protein will result from starting from the second letter, TTC AAC GCT GAA. This means that DNA can be an even more compact information storage system. This partly explains the surprising finding of the Human Genome Project that there are only about 35,000 genes, when humans can manufacture over 100,000 proteins.
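As a companion to the reading-frame example in the box above, here is a minimal sketch (illustrative only; the helper name is invented) showing how the same letter string yields different codon series depending on the starting offset.

```python
# Splitting one DNA string into codons from different starting offsets.
# The sequence is the one quoted in the box; the code itself is illustrative.

def codons(seq: str, offset: int) -> list[str]:
    """Group the sequence into 3-letter codons starting at the given offset."""
    trimmed = seq[offset:]
    return [trimmed[i:i + 3] for i in range(0, len(trimmed) - len(trimmed) % 3, 3)]

seq = "GTTCAACGCTGAA"
print(codons(seq, 0))  # ['GTT', 'CAA', 'CGC', 'TGA']  (plus a leftover 'A')
print(codons(seq, 1))  # ['TTC', 'AAC', 'GCT', 'GAA']
print(codons(seq, 2))  # ['TCA', 'ACG', 'CTG']

# Each frame spells out a different series of codons, so it would be read
# as a different protein - the point the box makes about compact storage.
```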
Evolutionary interpretation
Mattick suggests that this new system somehow evolved (despite the irreducible complexity) and in turn enabled the evolution of many complex living things from simple organisms. However, the same evidence is better interpreted from a young-age framework. This system can indeed enable multicellular organisms to develop from a simple cell, but this is the fertilized egg. This makes more sense; the fertilized egg has all the programming in place for all the information for a complex life-form to develop from an embryo. It is also an example of good design economy pointing to a single designer as opposed to many. In contrast, the first simple cell to allegedly evolve the complex splicing machinery would have no introns needing splicing. But Mattick may be partly right about diversification of life. Creationists also believe that life diversified after the Flood. However, this diversification involved no new information. Some creationists have proposed that certain parts of currently non-coding DNA could have enabled faster diversification,28 and Mattick's theory could provide still another mechanism.
Hindering science
A severe critic of Mattick's theory, Jean-Michel Claverie of CNRS, the national research institute in Marseilles, France, said something very revealing:
"I don't think much of this work. In general, all these global ideas don't travel very far because they fail to take into account the most basic principle of biology: things arose by the additive evolution of tiny subsystems, not by global design. It is perfectly possible that one intron in one given gene might have evolved, by chance, some regulatory property. It is utterly improbable that all genes might have acquired introns for the future property of regulating expression."
Two points to note:
This agrees that if the intron system really is an advanced operating system, it really would be irreducibly complex, because evolution could not build it stepwise.
It illustrates the role of materialistic assumptions behind evolution. Usually, atheists such as Dawkins use evolution as 'proof' for their faith; in reality, evolution is deduced from their assumption of materialism! E.g. Richard Lewontin wrote, "we have a prior commitment, a commitment to materialism. Moreover, that materialism is absolute, for we cannot allow a Divine Foot in the door."29 Scott Todd said, "Even if all the data point to an intelligent designer, such an hypothesis is excluded from science because it is not naturalistic."30
Similarly, while many use 'junk' DNA as proof of evolution, Claverie is using the assumption of evolution as proof of its junkiness! This is again a parallel with vestigial organs. In reality, evolution was used as a proof of their vestigiality, and hindered research into their function. Claverie's attitude could likewise hinder research into the networking capacity of non-coding DNA.

The circle of life
All living things have encyclopedic information content, a recipe for all their complex machinery and structures. This is stored and transmitted to the next generation as a message on DNA 'letters', but the message is in the arrangement, not the letters themselves. The message requires decoding and transmission machinery, which itself is part of the stored message. The choices of the code and even the letters are optimal. Therefore, the genetic coding system is an example of irreducible complexity.
Summary
'Junk' DNA (or, rather, DNA that doesn't directly code for proteins) is not evidence for evolution. Rather, its alleged junkiness is a deduction from the false assumption of evolution.
Just because no function is known, it doesnt mean there is no function.
Many uses have been found for this non-coding DNA.
There is good evidence that it has an essential role as part of an elaborate genetic network. This could have a crucial role in
the development of many-celled creatures from a single fertilized egg, and also in the post-Flood diversification (e.g. a
canine kind giving rise to dingoes, wolves, coyotes etc.).

The programs of life


Information is a measure of the complexity of the arrangement of parts of a storage medium, and doesn't depend on what parts are arranged. For instance, the printed page stores information via the 26 letters of the alphabet, which are arrangements of ink molecules on paper. But the information is not contained in the letters themselves. Even a translation into another language, even one with a different alphabet, need not change the information, but simply the way it is presented. However, a computer hard drive stores information in a totally different way: an array of magnetic 'on or off' patterns in a ferrimagnetic disk, and again the information is in the patterns, the arrangement, not the magnetic substance. Totally different media can carry exactly the same information. An example is this article you're reading: the information is exactly the same as that on my computer's hard drive, but my hard drive looks vastly different from this page. In DNA, the information is stored as sequences of four types of DNA bases, A, C, G and T. In one sense, these could be called chemical 'letters' because they store information in an analogous way to printed letters.1 There are huge problems for evolutionists explaining how the letters alone could come from a primordial soup.2 But even if this was solved, it would be as meaningless as getting a bowl of alphabet soup. The letters must then link together, in the face of chemistry trying to break them apart.3 Most importantly, the letters must be arranged correctly to have any meaning for life. A group (codon) of 3 DNA letters codes for one protein letter called an amino acid, and the conversion is called translation. Since even one mistake in a protein can be catastrophic, it's important to decode correctly. Think again about a written language: it is only useful if the reader is familiar with the language. For example, a reader must know that the letter sequence c-a-t codes for a furry pet with retractable claws. But consider the sequence g-i-f-t: in English, it means a present; but in German, it means poison. Understandably, during the post-September-11 anthrax scare, some German postal workers were very reluctant to handle packages marked 'Gift'.
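To illustrate the codon-to-amino-acid 'translation' step described in the box, here is a minimal sketch. The tiny lookup table covers only the few standard-code assignments used in the example (real cells use all 64 codons), and the sequence is invented for illustration.

```python
# Toy translation: 3-letter DNA codons -> amino acids.
# Only a handful of standard-code assignments are included for illustration;
# the example sequence is invented.

CODON_TABLE = {
    "ATG": "Met",  # also the usual 'start' signal
    "GCT": "Ala",
    "TGT": "Cys",
    "AAA": "Lys",
    "TAA": "STOP",
}

def translate(dna: str) -> list[str]:
    """Read the sequence in frame 0, codon by codon, until a stop codon."""
    protein = []
    for i in range(0, len(dna) - 2, 3):
        amino_acid = CODON_TABLE[dna[i:i + 3]]
        if amino_acid == "STOP":
            break
        protein.append(amino_acid)
    return protein

print(translate("ATGGCTTGTAAATAA"))  # ['Met', 'Ala', 'Cys', 'Lys']
```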

DO GENE DUPLICATION AND POLYPLOIDY PROVIDE A MECHANISM FOR EVOLUTION


Do new functions arise by gene duplication?
by Yingguang Liu and Dan Moran
Evolution requires a simple form of life to have morphed into increasingly complex organisms. Since the basis for biological complexity is genetic complexity, some biologists propose that the complicated genomes in modern organisms arose from one or a few genes in a common ancestor through duplication, with subsequent neofunctionalization through mutation and natural selection. Here we examine the known mechanisms of gene duplication in the light of genomic complexity and post-duplication events, and argue that: (1) gene duplications are aberrations of cell division processes and are more likely to cause malformation or disease than to confer selective advantage; (2) duplicated genes are usually silenced and subjected to degenerative mutations; (3) regulation of supposedly duplicated gene clusters and gene families is irreducibly complex, and demands simultaneous development of fully functional multiple genes and switching networks, contrary to Darwinian gradualism.
Figure 1. Equal (a) and unequal (b) crossing-over. Black and white colours represent homologous chromosomes. Only one sister chromatid of each chromosome is shown. After unequal crossing-over, one chromosome gains an extra repetition of ABC genes while the other chromosome loses DNA and becomes shorter.
"Natural selection merely modified, while redundancy created."1 "It might be said that all of the new genes arose from redundant copies of the pre-existed [sic] genes."2 Regardless of how the first gene came into being, it is taught in textbooks that gene duplication is the major force driving evolution.3,4 Gene duplications do indeed add extra material to the genome, for example, by aberrations in the division of chromosomes during mitosis or meiosis, or by erroneous DNA replication. Evolutionists argue that with subsequent mutation and natural selection, one or all copies of a duplicated gene eventually encode new proteins (a process called neofunctionalization). Over millions of years, small simple genomes thus are believed to have evolved into large, complex ones, giving rise to the multiplicity of life forms both living and extinct. One frequently cited evidence for gene duplication comes from gene sequence analyses. Sequence comparisons have revealed that some genes in modern organisms are more similar to each other than to other genes, and so they are classified into families. Gene families are especially abundant in large genomes. Family members within a genome, the paralogs, are believed to be products of gene duplications that have occurred in the past. Furthermore, functional domains of many proteins encoded by apparently unrelated genes also bear structural and functional similarities. All of these are used as evidence that the thousands of genes discovered so far (and those yet to be discovered) have evolved from a few (maybe one) ancestral gene(s).5 In this article we examine the major mechanisms proposed for gene duplication and evaluate their likely contribution to the history of life in the light of recent evidence on post-duplication events and gene regulation mechanisms.
Mechanisms of gene duplication
Polyploidy
Polyploidy refers to an increase in the number of sets of chromosomes per cell. Normally, most eukaryotic cells are diploid
(with two sets of chromosomes, 2n, one from the male parent and one from the female parent) while the sex cells are
haploid (with one set of chromosomes, 1n). A cell with 3n or more is polyploid. Polyploidy may arise naturally when a cell
fails to divide after DNA replication. If the cell with doubled genome is involved in the generation of sex cells (meiosis),
polyploid organisms may be subsequently produced upon fertilization. Alternatively, polyploidy can be artificially induced by
treating cells with chemicals such as colchicine.Since all genes are duplicated simultaneously in a polyploid cell, the
stoichiometric relationships between genetic products are preserved. For this reason, polyploidy is the least detrimental and
therefore the best surviving duplication mutation.6 Polyploidy is seen in ferns, flowering plants and some lower animals.7,8 It
is usually associated with hermaphroditism, parthenogenesis (mother producing young asexually), or species without
disparate sex chromosomes.8 In most dioecious
(possessing either male or female organs) animals and
humans, however, polyploid embryos typically suffer
generalized malformation and die during development.8 It
is not only sex determination per se (as was proposed by
Muller9), but more importantly, the delicate balancing
between homologous genes, that is disrupted in polyploid
individuals of higher animals. For instance, parental
imprinting (differences in the expression of maternal and
paternal genes) by DNA methylation may be disrupted as
the cell endeavours to silence extra chromosomes by
extensive methylation (see below under 'After duplication'). Autopolyploidy (all chromosome sets are from
the same species) can result in useful variation of
quantitative traits such as biomass, organ size, flowering time, drought tolerance, etc. But crucially, polyploid organisms
have an intrinsic mechanism to maintain genetic stability by silencing extra copies of genes (inhibiting their
expression).10 Silencing of homeologs (genes duplicated by polyploidy) is nonrandom, genetically programmed, and organ-specific. It is a universal phenomenon seen in both plants and animals.7,11 Silencing of inferior alleles may account for
the advantageous phenotypes of some polyploid species. Alternatively, superior alleles may take dominance even though
inferior ones are expressed simultaneously. In other words, there are no new genetic products, but old genes with altered
expression levels under the control of pre-existing programs.
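Because the stoichiometry argument above (and the contrast with trisomy later in the article) is essentially about gene-dosage ratios, here is a minimal sketch with invented gene names showing why whole-genome doubling preserves relative dosage while a single extra chromosome does not.

```python
# Relative gene dosage under different chromosome-set scenarios.
# Gene names and chromosome assignments are invented for illustration.

from fractions import Fraction

def dosage_ratio(copies: dict[str, int]) -> Fraction:
    """Ratio of copies of geneA (chromosome 1) to geneB (chromosome 2)."""
    return Fraction(copies["geneA"], copies["geneB"])

diploid    = {"geneA": 2, "geneB": 2}  # normal 2n
tetraploid = {"geneA": 4, "geneB": 4}  # whole-genome doubling (polyploidy)
trisomic   = {"geneA": 3, "geneB": 2}  # one extra copy of chromosome 1 only

print("diploid    A:B =", dosage_ratio(diploid))     # 1
print("tetraploid A:B =", dosage_ratio(tetraploid))  # 1   (balance preserved)
print("trisomic   A:B =", dosage_ratio(trisomic))    # 3/2 (balance disturbed)
```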
Figure 2. (a) Xenopus globin gene clusters.50,51 Grey: tadpole; Dark: adult.53 (b) Human globin gene clusters. Light grey: embryonic; Dark grey: fetal; Dark: fetal/adult (α) or adult only (β and δ); White: pseudogenes. Intergenic spacer sequences are omitted.
Allopolyploidy results when the sets of chromosomes are derived
from two or more distinct, though related species. Unlike allodiploid hybrids such as the mule, allopolyploid organisms may
be fertile and give rise to new species. However, the hybrid species display merely a new combination of pre-existing
parental traits encoded by pre-existing genes. For example, some strains of the Triticale, synthetic allopolyploids from wheat
and rye, combine the high yield of wheat and the adaptability of rye. Another artificial hybrid species between the tall fescue
grass (Festuca arundinacea) and the short Italian ryegrass (Lolium multiflorum) shows quantitative traits (e.g. height) that
are intermediate between the parental species.12 The historical Raphanobrassica, a hybrid between cabbage and radish, has the roots of cabbages and leaves resembling those of a radish. In allopolyploids there may be interactions between genes from different parents.13 Disharmonious interactions between homeologous genes are thought to be the reason for most cases of hybrid sterility in allodiploid animals.14 In plants, neoallopolyploid genomes are often unstable, displaying sterility,
lethality, and phenotypic instability.15
Trisomy
In contrast to polyploidy, aneuploid cells (having a chromosome number that is not a multiple of the haploid) with one extra
chromosome (trisomy) have a severely imbalanced genome. Consequently, the organism will manifest defective
phenotypes. Aneuploidy is the result of failure to segregate a pair of homologous chromosomes during meiosis I or failure to
segregate sister chromatids during meiosis II (meiotic nondisjunction). When a sex cell with one extra chromosome unites
with a normal haploid sex cell, the zygote will be trisomic for that particular chromosome. Much knowledge about trisomy
has been accumulated clinically in humans. Autosomal trisomies have more dramatic effects than sex chromosome
trisomies. From the familiar Down syndrome (21 trisomy) to the less common Edward syndrome (18 trisomy) and Patau
syndrome (13 trisomy), autosomal trisomies always hinder the development of the central nervous system and manifest
mental retardation in live births. Developmental defects of other organs are also common. Trisomies involving other
autosomes are rare, and are seen only in spontaneous abortions and in vitro fertilizations.16 Triplo-X females (karyotype
XXX) have only mild symptoms (tallness and menstrual irregularities). While men with Klinefelter syndrome (karyotype XXY)
show symptoms varying from infertility to severe structural deformation, XYY males are generally normal except for tallness
and acne.17 The reason that sex chromosome trisomies show less severe symptoms than autosomal trisomies may lie in the
fact that the X chromosome has a well established intrinsic inactivation mechanism to silence one homolog in the normal
woman; while the Y chromosome is small with few genes.
Unequal crossing-over
Crossing-over refers to the exchange of fragments between homologous chromosomes during the initial stages of meiosis.
Normally the exchange is equal as the genes line up based on sequence homology (synapsis). However, because of the
numerous sequence repetitions in eukaryotic chromosomes, the lining up may be inaccurate, causing deletion in one
chromosome and duplication in the other (figure 1). The mechanism is believed to be the major cause of deletions of red or
green pigment genes in the X chromosome resulting in colour blindness and deletions of globin genes causing various
forms of thalassemias.18,19 Repeated duplications have been associated with cancer.20 Duplication of a large segment of
chromosome 15 in human beings can cause mental retardation and other symptoms while smaller duplications are
asymptomatic or cause minor disorders such as panic attacks. Presumably, small segmental duplications are successfully
managed by the cells silencing programs. However,
segmental duplications within protein-coding sequences
may interrupt gene structure, causing frame-shift
mutations.21
Figure 3. Viral genes are expressed sequentially in a highly regulated hierarchy. Each set of viral genes encodes transcription factors that turn on the next set of genes by interacting with their corresponding promoter/enhancer sequences.
Unequal crossing-over may have been the major
mechanism in altering the number of genes in repetitive
clusters. Gene clusters such as the human green
pigment genes and the human immunoglobulin heavy
chain genes that vary in numbers within the population
certainly manifest recent duplications.22,23 Clusters of
identical rRNA and histone genes also vary in number
within the species, presumably via unequal crossing-over.24-28 Recently, it has been found that copy-number polymorphisms of this kind are more abundant than previously realized.29,30 However, it is unlikely that gene clusters originated through
unequal crossing-over, because: (1) unequal crossing-over depends on pre-existing clustering. Although it may change the
number of repetitions within clusters, unequal crossing-over is not the ultimate cause of their being; (2) multiplicity of
identical genes in the clusters is often required for the cell to function properly. For instance, to meet the need of the cell to
produce large numbers of ribosomes in a short time, all cells contain multiple copies of rRNA genes in tandem arrays. In the
large oocyte (egg) of amphibians, the rRNA genes have to be further amplified approximately 2000-fold, resulting in about a
million copies per cell, to maintain the number of ribosomes at about 10¹².31 Likewise, multiple histone genes are required for
the cell to synthesize histones rapidly during S phase of the cell cycle. But diversification and neofunctionalisation of these
identical copies is actually prevented, not promoted, by as yet unknown mechanisms.32
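To visualise the duplication/deletion outcome of unequal crossing-over described above, here is a minimal toy sketch; chromosomes are modelled as simple lists of gene labels, and the misalignment offset is an invented parameter.

```python
# Toy model of unequal crossing-over between two homologous chromosomes.
# Each chromosome is a list of gene labels; the breakpoint misalignment
# (offset) is an illustrative assumption.

def unequal_crossover(chrom1, chrom2, breakpoint, offset):
    """Exchange ends at misaligned breakpoints, giving one longer and one
    shorter recombinant chromosome."""
    recombinant_a = chrom1[:breakpoint + offset] + chrom2[breakpoint:]
    recombinant_b = chrom2[:breakpoint] + chrom1[breakpoint + offset:]
    return recombinant_a, recombinant_b

homolog1 = ["A", "B", "C", "D", "E"]
homolog2 = ["A", "B", "C", "D", "E"]

longer, shorter = unequal_crossover(homolog1, homolog2, breakpoint=2, offset=2)
print(longer)   # ['A', 'B', 'C', 'D', 'C', 'D', 'E']  - gains a C-D repeat
print(shorter)  # ['A', 'B', 'E']                      - loses C and D
```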
Transposition
Transposons are mobile genetic elements that can change their positions within the genome (the process is known as
transposition). While some transpositions occur by a cut and paste mechanism, others go by a copy and paste
mechanism, resulting in duplications. Unlike unequal crossing-over that produces tandem gene arrays, transpositions cause
duplications dispersed randomly throughout the genome. Transposons that duplicate via an RNA intermediate, known as
retrotransposons, are abundant in eukaryotic cells.Despite the abundance of transposons and retrotransposons in complex
genomes (e.g. 45% of the human genome), their function remains elusive. Traditionally, they have been considered as
selfish DNA because random insertion of transposons disrupts other genes, causing deleterious mutations. A classical
example is the Drosophila retrotransposon, the P element, which induces chromosomal breaks and causes
sterility.33 Consequently, it seems to be beneficial to the organism for transposition events to be suppressed. Indeed,
transposition is rare in the human cell. (Therefore, the vast majority of the human transposable elements must have been
present in the genome since ancient times.) However, in mice, Drosophila (fruit-fly), and Arabidopsis (plant), transposition is
still responsible for many mutations.34 Recently, Peaston and associates discovered that retrotransposons are actively
transcribed in mouse oocytes and early embryos, providing alternative promoters and first exons to a subset of host
genes.35 This report suggests that transposons function as regulatory elements during early development. From this point of
view, transposition-induced mutation may be a side effect, instead of the intended function, of these repetitive genetic
elements.
After duplication

Figure 4. The major immediate early gene (mIE) of the human cytomegalovirus is regulated by a network of viral and cellular factors. IE1 and IE2 are products of the gene through alternative splicing. IE1 acts as a positive feedback signal to accelerate initial transcription, while IE2 provides a negative feedback mechanism by binding to a cis-repression signal (crs) later in infection. Viral proteins pp71 and ppUL35 interact with each other. pp71 also binds to a host cell protein, hDaxx. IE1, IE2, the enhancer, pp71 and ppUL35 are all critical for effective viral replication.
In order for
evolution to harness gene duplications to produce complex
genomes, it was originally proposed that one or more copies
of the duplicated gene will acquire advantageous mutations
(neofunctionalization).5,36,37 This was thought to be the only
mechanism to generate new genes from existing
ones.38 However, biologists are now becoming more and more convinced theoretically and empirically that most duplicated
gene copies undergo degenerative, rather than constructive, mutations, ending up in nonfunctionalization. As stated above,
the first event awaiting a duplicated gene is silencing. The best studied mechanism of silencing is through methylation of
cytosine bases in CG islands around promoters. 39 Subsequently, methylated cytosines tend to be spontaneously
deaminated and are substituted with thymine bases. 39,40 The phenomenon is known as CG depletion. Duplicated genes are
especially prone to CG depletion. 3941 Without selective constraint, silenced duplicates may also undergo other mutations.
Indeed, extensive genomic change can be detected within a few generations after synthetic polyploidy.42 Using silent
mutations (mutations that do not affect translated protein structures) to reflect time, Lynch and Conery calculated that
duplicated genes are lost exponentially with time and are nonfunctionalized by the time silent sites have diverged by only a
few percent.6
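A toy sketch of the CG-depletion process just described (purely illustrative: the deamination probability is invented, and real depletion accumulates stochastically over many generations):

import random
random.seed(1)

def deaminate_cpg(seq, prob=0.3):
    """Toy model: the C of a CpG dinucleotide deaminates to T with probability prob."""
    out = list(seq)
    for i in range(len(seq) - 1):
        if seq[i] == "C" and seq[i + 1] == "G" and random.random() < prob:
            out[i] = "T"
    return "".join(out)

promoter = "ACGTCGCGATCGGGCGTACG"
print(promoter, "->", deaminate_cpg(promoter))  # some CpG sites become TpG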
On the other hand, mutations in functioning gene family members are limited by purifying selection. In paralogous genes
that evolutionists believe were created by ancient duplication events, only about 5% of amino acid-changing mutations are
able to rise to fixation.6 There is a recent report that mutation rates in
gene family members are actually lower than in singletons (genes
without paralogs).43 In contrast, differences in amino acid sequences
between modern paralogous genes are generally large, e.g. 58% between human α and β globins, 28% between human γ and β globins, and 75% between human β globin and myoglobin.

Faced with this
dilemma, some evolutionists theorized that mutations leading to
neofunctionalization must have happened within a brief period of time
immediately after duplication (in spite of the fact that the frequent mutations observed in recent duplicates are mostly degenerative).43 Realizing the impossibility of neofunctionalization,
Lynch and Conery argued that gene duplication only passively
contributes to the generation of biodiversity by building up
reproductive barriers as duplicates are silenced stochastically.6 In
other words, gene duplication does not produce new genes because
silencing and subsequent degradation of duplicated genes cannot
provide new information.

Meanwhile, several other models have been proposed concerning the fate of duplicated genes. One theory states that both the original and duplicated gene copies each lose only part of their function through degenerative mutations (subfunctionalization). If each gene copy retains a different fraction of its original function, the duplicates may complement each other and function together as one gene. If the regulatory elements of duplicated genes subfunctionalize (while the protein-coding regions are somehow spared from degeneration), they may be expressed at different stages/tissues. The theory is known as the duplication-degeneration-complementation (DDC) model.44–46 The DDC model
may allow partial preservation of duplicated genes, but it fails to explain the evolution of new genes or new regulatory elements (let alone the complicated mechanisms of tissue/organ-specific regulation; see below under Gene Regulation).
Figure 5. Proposed initial coagulation network (a) and proposed intermediate coagulation network after gene duplication (b).75 Line arrows: activation; block arrows: conversion.
Recently, another model, called epigenetic complementation (EC), has been proposed by Rodin and colleagues. 47,48 The
theory states that if a gene is copied into a different position within the genome, it may be put under the control of a different
regulatory environment and therefore expressed in a different tissue or stage of life. Epigenetic silencing mechanisms (such
as cytosine methylation) work in such a way that one copy is silenced whenever or wherever the other copy is expressed.
According to this model, there is no need for mutation to alter the regulatory elements of the duplicates in order to achieve
complementation.

The EC model does not explain the existence of clustered gene families with diverged functions for each member. For example, the linked α and β globin genes in Xenopus laevis are expressed at different (tadpole and adult) stages of life (figure 2).49–51 But their temporal regulation is difficult to explain with differing epigenetic environments, since the adult genes are sandwiched between tadpole genes. Rather, it can be better accounted for by differences in their regulatory sequences that respond to stage-specific transcription factors.52,53 Similarly, members of the clustered human α globin gene family are expressed in two stages (embryonic and adult) and the clustered β globin gene family is expressed in three stages (embryonic, fetal, and adult) (figure 2). Again, temporal regulation (especially silencing) is accomplished genetically, rather than epigenetically, via distinct regulatory elements associated with the genes.54–56 Furthermore, there is no change in regulation of the globin genes after the supposed separation of the α and β genes onto different chromosomes in mammals and birds. Both the ζ gene of the α family and the ε gene of the β family are expressed during the embryonic stage of human development, to form the ζ2ε2 tetramer, even though they are on different chromosomes; while the α and β genes are expressed simultaneously in adults.
Like the DDC model, the EC model still depends on mutation and natural selection for neofunctionalization.
Genome complexity
If the evolution-by-gene-duplication theory is correct then DNA content and gene number should increase proportionately
with organism complexity. However, this is not the case (Table 1). For example, the unicellular alga Euglena has a bigger
genome than some vertebrate animals such as zebrafish and chicken. Amphibians may have genomes larger than some
birds and mammals. The plant, Zea mays (corn), has more genomic DNA than does the human species. This phenomenon,
known as the C-value paradox, demonstrates that the amount of genomic DNA is certainly not a good index for biological
complexity.
Table 1. Genome characteristics of selected species.57–59
Table 1 also shows that the number of genes within a
genome does not increase in proportion to the amount
of genomic DNA. As a general rule, larger genomes
have sparser genes. Prokaryotic genomes are much
more compact than eukaryotic genomes, e.g. 89% of the Haemophilus genome consists of protein-coding genes as compared to 1–1.5% in the human genome. Consequently, the number of genes is an even poorer indicator of genome complexity than haploid DNA content. For example, human beings, with 10¹⁴ cells,
have a total gene number comparable to that
of Caenorhabditis elegans, which has only 959 somatic
cells. Likewise, Drosophila, with its 50,000 cells, has
only twice as many genes as the single-celled baker's yeast. In other words, simpler organisms already have
DNA content and gene numbers comparable to that of
advanced species. Further gene duplication (and
mutation) will not help them climb up Darwin's tree of
life.
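To make the point concrete, a rough gene-density sketch using approximate figures quoted in this compilation (Table 1 itself is not reproduced here, and the Haemophilus numbers are assumed round values):

# Rough gene-density comparison (genes per million base pairs).
genomes = {
    "Haemophilus influenzae": (1.8, 1700),    # ~1.8 Mbp, ~1,700 genes (assumed round values)
    "rice":                   (466, 37000),   # ~466 Mbp, ~37,000 genes (figures quoted in this compilation)
    "human":                  (3000, 25000),  # ~3,000 Mbp, ~25,000 genes (figures quoted in this compilation)
}
for organism, (mbp, genes) in genomes.items():
    print(organism, round(genes / mbp), "genes per Mbp")
# Gene density falls sharply as genome size rises: larger genomes have sparser genes.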
Gene regulation
Of course, it is not only the number of cells, but also the
types of cells in an organism, that indicates complexity.
On the genetic level, differentiation into various cell
types is a result of the spatial and temporal regulation
of genes. Therefore, the genes for transcription factors,
which act as molecular switches in the genome, have
much to do with genetic complexity. Prokaryotic genes
are generally regulated as a group (polycistronic, i.e.
several genes are controlled by one transcription factor)
while eukaryotic genes are regulated individually
(monocistronic).

Szathmary and associates proposed a
mathematical formula to calculate genome complexity
in terms of the interactions between genes (usually
through their encoded protein products including
transcription factors).60 He borrowed a parameter,
connectivity (C), from ecology which uses the term to describe trophic interactions in food webs:
C = 2 L/[N(N-1)]
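For concreteness, a minimal sketch of this calculation in code (the numbers are hypothetical; L and N are defined in the text that follows):

# Hypothetical illustration of the connectivity measure C = 2L / [N(N-1)].
def connectivity(L, N):
    """Fraction of all possible pairwise gene-gene interactions that are realized."""
    return 2 * L / (N * (N - 1))

# A toy genome of 100 genes with 300 regulatory interactions:
print(connectivity(L=300, N=100))  # ~0.061, i.e. about 6% of possible pairs interact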
L refers to the number of interactions among genes (it originally meant trophic links in ecology), while N refers to the
number of genes in a genome (originally the number of species in an ecosystem). C is equal to the number of actual
interactions out of all possible interactions.

The most important aspect of genetic interaction that determines the value of C in Szathmary's equation is the number of levels constituting a regulation hierarchy. In ecosystems, adding trophic levels
generates more connectivity than increasing the number of species. Like a food chain, a gene regulation pathway can have
multiple levels of interactions, whereby upstream transcription factors regulate downstream transcription factors.

The concept
of irreducible complexity61 applies to gene regulation systems. An irreducibly complex system is one in which all the
essential parts must be present at the same time, and thus could not have been built up slowly over millions of years in a
step-wise Darwinian fashion. In order for a gene regulation unit to function, many genetic elements, including trans-acting
elements that encode the transcription factors, cis-acting elements that respond to the transcription factors, and the
structural genes, have to be present simultaneously. Although there are examples of functional overlaps between pathways,
multiple unique elements are usually required for each pathway. Knocking out any of the elements will frequently result in
dysfunction, even loss of life.

In the simplest case, many viruses have three sets of genes regulated as a cascade (figure 3). The immediate-early (α) genes have promoter elements (binding sites for RNA polymerase or some transcription factors)
similar to those of the host cell and are transcribed by a host cell RNA polymerase. The products of immediate-early genes
are mostly transcription factors that interact with the cis-acting regulatory elements (promoter/enhancer) of early (β) genes. The early gene products, in their turn, activate the late (γ) genes by interacting with their cis-acting elements. The early
genes also encode enzymes to replicate the viral DNA, so that the late genes are multiplied before their expression, allowing
for rapid accumulation of late gene products toward the end of infection. This scenario enables the virus to divert the
resources of the host cell to the production of new viruses effectively.

A specific example of a regulation network is the major immediate-early gene (mIE) of the human cytomegalovirus (HCMV), which encodes two major products, IE1 and IE2, by alternative splicing (figure 4). The two proteins act synergistically to activate the early genes. Adjacent to the mIE gene is a 1.1-kb
cis-regulatory sequence called the major immediate-early enhancer-promoter (MIEP), which contains concentrated binding
sites for multiple cellular transcription factors. One of the products of the mIE gene, IE1, functions as an autoregulatory
trans-activator that recruits a cellular protein, NF-kB, which binds to the enhancer and activates transcription. The IE2
product of the gene, on the other hand, represses the gene by binding to a cis-repression sequence (crs, see figure
4).62 The virus also carries several other viral proteins into the host cell for effective transcription of mIE. Among these are
ppUL35 and pp71, which interact with each other in the infected cell.63,64 Meanwhile, pp71 interacts with a cellular protein, hDaxx, which is required for mIE transcription.65

Because the viral genome is relatively small and easy to manipulate, HCMV
provides a good model in which to study the effects of knocking out a gene from the genome. Deletion of the sequences that
encode IE2, or the proximal portion of the enhancer, from the HCMV genome completely inactivates the virus. 66,67 Deletion
of any of the genes that encode IE1, pp71, or ppUL35 renders the virus incapable of replication in vitro at low multiplicity of infection (MOI), which resembles natural human infection.68–70 All these regulatory factors have to be present and functional at the same time for HCMV to survive (if it cannot replicate it becomes extinct).

Virus genomes are far simpler in their regulation than those of prokaryotes and eukaryotes; if even viral regulatory systems are irreducibly complex, it follows that prokaryotic and eukaryotic regulatory systems are all the more irreducibly
complex. For evolution to have occurred via gene duplication, both the gene and its cis-regulatory elements have to be
duplicated simultaneously. Furthermore, since gene family members often have distinctly different expression patterns, both
the gene and the cis-regulatory elements have to mutate concertedly in order to confer a selective advantage to the
organism. For example, the ζ and ε globins have to acquire higher oxygen affinity than the α and β globins in order for the embryonic hemoglobin tetramer ζ2ε2 to extract oxygen from the maternal α2β2 tetramers. Meanwhile, the regulatory elements of the embryonic and adult globins have to develop binding affinity for the transcription factors expressed during their respective developmental stages. Most importantly, a delicate globin switching mechanism, known to involve numerous trans-acting factors and multiple levels of regulation, has to be developed. In the case of the human β-like globin switching, which is the best understood, some of these factors are universal, while others are erythroid-specific.54–56,71 Deletion of the regulatory elements or a member of the gene family will result in thalassemia.

Another example of clustered gene families
whose expression follows a temporal pattern is the immunoglobulin heavy chain family produced by B lymphocytes. There
are five classes and each has properties that cannot be replaced by others. All B lymphocytes start by secreting IgM and
switch to IgG, IgE, or IgA within a few days via a complex switching mechanism.72–74 The most important aspect of class
switch is targeting of DNA recombination enzymes to specific sites. Gene duplication theory would require coordinated
mutations in the structural genes and the cis-regulatory elements, and a unique recombination mechanism different from the
known mechanisms.

Michael Behe used the blood clotting factors to illustrate irreducible complexity.61 Dozens of proteins
activate or inhibit each other in the blood coagulation and subsequent clot-dissolving pathways. Accidental deletion of
factors leads to diseases such as hemophilia. Since many factors share similar functional domains, they are thought to have
evolved by ancient gene duplication events, including polyploidy during the Cambrian explosion.75–77 However, these
duplications have to be followed by coordinated mutations that work just right. A proposed functional intermediate blood
clotting pathway75 in figure 5 shows how much coordinated change is required.
Conclusion
The majority of gene duplications are meiotic or mitotic aberrations, resulting in malformations or diseases. Plants can
tolerate duplications, especially polyploidy, better than animals due to differences in their styles of reproduction. To maintain
genomic stability, all cells have built-in mechanisms to silence duplicated genes, after which they become subject to
degenerative mutations.

Clusters of identical genes need complicated mechanisms to prevent diversification in order for
them to work in unison. Likewise, gene families whose members perform distinct functions are maintained by purifying
selection. While duplication may alter the number of members in gene families, it is not their ultimate origin. Current models
explaining the preservation and neofunctionalization of duplicated genes encounter obstacles one way or the other.

Evolution
by gene duplication predicts a proportional increase in genome size with organism complexity but this is contradicted by the
evidence. It is not genome size but intergenic regulatory sequences and gene regulation hierarchies that determine
complexity. Gene regulation networks are irreducibly complex and constitute an insurmountable barrier for the theory.
Does gene duplication provide the engine for evolution?
by Jerry Bergman
Proponents of the gene-duplication hypothesis of evolution argue that a mutation can cause the duplication of a gene that
allows one copy of the gene to mutate and evolve to perform a novel function, while allowing the other copy of the gene to
continue to perform the original genes function. Gene duplication is now widely believed by Darwinists to be the main
source of all new genes. A review of the evidence shows that there are numerous problems and contradictions in this theory
and the empirical evidence indicates that gene duplication has a role in variation within kinds but not in evolution. Darwinists
therefore have nothing more to go on than to depend heavily upon extrapolations from gene similarities, a circular argument founded upon the assumption of evolution, and yet another example of evolutionary story-telling.
The adverse effects of gene duplication, such as Down's syndrome, are well known. Although
the methodology is available, evidence of functionally useful genes as a result of duplication is
yet to be documented.
One of biology's greatest mysteries is how an organism as simple as a one-celled bacterium could give rise to something as complicated as a human.1 How life evolved from a few
primordial genes to the tens of thousands of genes in higher organisms is still a major issue in
Darwinism. The current primary hypothesis is that it occurred via gene duplication.2–6 Shanks concluded that duplication is 'the way in which organisms acquire new genes. They do not appear by magic; they appear as the result of duplication.'7 Ernst Mayr, one of the most respected Darwinists of the 20th century, agrees, saying: 'Such a new gene is called a paralogous gene. At first, it will have the same function as its sister gene. However, it will usually evolve by having its own mutations and in due time it may acquire functions that differ from those of its sister gene. The original gene, however, will also evolve, and such direct descendants of the original gene are called orthologous genes.'8

Ohno goes further, concluding that gene duplication is the only means by which a new gene can arise (emphasis mine), a view that Li concludes is 'largely valid'.9 Furthermore, Ohno argues that not just genes but whole genomes have been duplicated in the past, causing great leaps in evolution, 'such as the transition from invertebrates to vertebrates [which] could occur only if whole genomes were duplicated.' Kellis et al. agree that whole-genome duplication followed by massive gene loss and specialization 'has long been postulated as a powerful mechanism of evolutionary innovation.'10,11

Evolution by gene duplication is a form of
exaptation.12-14 Exaptation is the putative evolutionary process by which a structure that evolved for some other purpose is
reassigned to its current role.
Evidence for gene duplication
Gene duplication does occur. For example, chromosomal recombination can result in the loss of a gene on one
chromosome and the gain of an extra copy on the sister chromosome. Gene duplication can involve not only whole genes,
but also parts of genes, several genes, parts of a chromosome, or even entire chromosomes.

All of these conditions are well known because they are important causes of disease (including cancer) and can even cause death. Eakin and Behringer conclude:

'Spontaneous duplication of the mammalian genome occurs in approximately 1% of fertilizations. Although one or
more whole genome duplications are believed to have influenced vertebrate evolution, polyploidy of contemporary mammals
is generally incompatible with normal development and function of all but a few tissues. Most often, divergence of ploidy from the diploid (2n) norm results in a disease state.'15

Li has noted that polyploidy (having more chromosomes than the usual diploid number) is likely to cause 'a severe imbalance in gene product', and the chance of polyploids being incorporated into the population is small.16 He concludes that, for both vertebrates and invertebrates, only when single genes, or a few genes, are duplicated is there any possibility of new genes evolving.

The gene-duplication idea has been researched for more than 30 years. Although first discussed by Haldane in 1932 and Muller in 1935, it was not treated in detail until 1970 in Susumu Ohno's book, Evolution by Gene Duplication.17 When Ohno proposed the idea, many of his colleagues considered his
proposal outrageous.10 Gene duplication could not be evaluated experimentally, though, until the development of molecular
biology techniques. Even now the primary evidence for gene duplication having a role in evolution must be inferred from gene similarity (i.e. an argument from homology). In the words of Hurles:

'The primary evidence that duplication has played a
vital role in the evolution of new gene functions is the widespread existence of gene families. Members of a gene family that
share a common ancestor as a result of a duplication event are denoted as being paralogous, distinguishing them from
orthologous genes in different genomes, which share a common ancestor as a result of a speciation event. Paralogous
genes can often be found clustered within a genome, although dispersed paralogues, often with more diverse functions, are
also common.'18

The fact that two genes are similar, though, does not prove that one was produced as a result of duplication. The
ideal method to prove the origin of functionally useful genes as a result of gene duplication would be to use the same
techniques that have been used to prove the adverse effects of gene duplication. A child with an abnormality such as Down's syndrome (trisomy 21) is studied for genetic differences compared to the population as a whole and, especially, compared to his or her parents. If neither parent has trisomy 21, and the cause, an extra chromosome 21, is determined to be a result of non-disjunction, it can be concluded that gene duplication has caused the abnormality. In the opposite case, if a child with an exceptional ability is found to have a gene not present in either parent, and genetic studies of the family history point to gene duplication and mutation in the child's genetic inheritance, this would be powerful evidence
for gene duplication having produced the advantageous trait. This method can be used to trace the process for several
generations so as to determine cases that involve more than one mutation. So far, however, no one seems to have done this research; or, if they have, the results have not supported the gene duplication theory and were not published.
Chromosome doubling in plants
Chromosome abnormalities, such as triploidy, are usually harmful in most animals, especially higher animals. Conversely,
polyploidy in plants is very common and can, in many circumstances, benefit the plant, although few researchers argue that
it plays a significant role in large scale evolution.19 Some evidence exists that polyploidy is a mechanism that produces
variety within created kinds, similar to the effects of crossing over that occurs during meiosis. The specific effects of
polyploidy depend on the environment and the plant. Polyploidy increases cell size, causing a reduction of the surface-to-volume ratio that can reduce the rate of some cell functions, including metabolism and growth. Conversely, some polyploids
are more tolerant to drought and nutrient-deficient soils. In addition, some polyploids have greater resistance to pests and
pathogens.20 However, in all of these cases, a fitness cost exists, meaning that in many environments polyploidy is a
disadvantage.

Much more research is needed for a proper understanding of plant polyploidy in order to determine under
what specific conditions it is harmful and, conversely, under what specific conditions it is beneficial. As its biological function
seems to be primarily to produce variety, it is not normally lethal (or even regularly lethal), as are most examples of animal
polyploidy.

Some invertebrates can tolerate polyploidy. Male bees, for example, have a haploid number of chromosomes and
females a diploid number. This does not cause the females to evolve faster, however, as the gene duplication theory might
predict. In the rare cases of polyploidy in vertebrates, most examples involve unusual species that demonstrate a
parthenogenetic mode of reproduction, lack heteromorphic sex chromosomes, or have an environmentally induced sex-determining system.21

Artificial genome duplication (tetraploidy) for experimental purposes has been developed in mice, but it has not provided any evidence for evolution because it is lethal:

'The production of tetraploid (4n) embryos has become a common
experimental manipulation in the mouse. Although development of tetraploid mice has generally not been observed beyond
mid-gestation [i.e. it is fatal], tetraploid:diploid (4n:2n) chimeras are widely used as a method for rescuing extra-embryonic
defects [i.e. a genetic defect that is normally fatal can be artificially made to survive in the chimera].'22
Problems with the gene-duplication theory
The statistical challenge
Statistical evaluation of the predictions of the gene duplication theory does not appear to be favourable to it. For example,
the theory predicts a positive correlation between organismal complexity and gene number, genome size and/or
chromosome number. All of these predictions are contradicted by the evidence.

In regard to gene number, humans have
about 25,000 genes,23 while rice has 50,000.24 In terms of genome size, the largest known genome does not occur in man,
but rather in a bacterium! Epulopiscium fishelsoni carries 25 times as much DNA as a human cell, and one of its genes has
been duplicated 85,000 times, yet it is still a bacterium.25 In terms of chromosome number, the descending rank order of
diploid numbers for a selection of animals is as follows: Cambarus clarkii (a crayfish) 200, dog 78, chicken 78, human
46, Xenopus laevis (South African clawed frog) 36, Drosophila melanogaster (fruit fly) 8, Myrmecia pilosula (an ant) 2. These
results do not fit the predictions of the gene duplication theoryperhaps they imply that flying on your own wings or in
airplanes (fruit fly and human, respectively) needs less chromosomal input than lying around in swamps (frog and crayfish,
respectively).

Another statistical challenge has been noted by evolutionist genetics professor Steve Jones, who concluded that an inverse relationship exists between the amount of DNA an organism carries and the speed at which it can evolve (organisms with more DNA also tend to have more lethargic lifestyles): the more DNA, the slower it is able to evolve. It takes a great deal of energy and resources to duplicate DNA, and the less of it an organism has, the faster it can reproduce (and the more efficient it is). Jones notes that all weeds have small genomes, while more established plants are 'packed with DNA' and can take a month to make a single egg cell.26 Another example Jones cites is lungfish, which are stuffed with DNA (most of it with no apparent function) and whose evolution 'has stalled altogether'; bacteria are speedy and have no excess genetic material, while salamanders, torpid as they are, are filled with DNA.26 In his view, natural selection selects against gene duplication.
The evo-devo challenge
Male bees have a haploid number of chromosomes whereas female bees are diploid. This, however, does not cause females to evolve faster, as predicted by gene duplication theory.

An important alternative to the Darwinists' exclusive focus on genes is emerging in evo-devo
(evolutionary development theory). They claim (with a great deal of
experimental evidence behind them) that the content of the genome is not
the primary determinant of identity; it is the epigenetic control system that
decides how the genes are used. A surprisingly small number of genes, the so-called 'tool kit' genes, are the primary components for building all animals, and these genes 'emerged before the Cambrian explosion' [emphasis added].27 That means the essential genes have not changed significantly
over time, contradicting the central claim of neo-Darwinism. The function
of these genes can be compared to keys on a piano keyboard. The kind of music that is played (i.e. whether an embryo
turns into a man or a mouse) is determined, not so much by the keys themselves, but by the player who strikes the keys and
by the musical score that the player follows. If this is true, then arguments about gene duplication are irrelevant because
evolution occurs somewhere else (i.e. in the playing and in the musical score).
The functional challenge
Because whole genome duplication in animals is usually lethal, Ohno originally concluded that only two whole genome
duplications had occurred throughout history; later he argued that a total of three had occurred.28 But Darwinists have
admitted that even the process of single gene duplication is poorly understood. Lynch and Conery note that, although gene
duplication has generally been viewed as a necessary source of material for the origin of evolutionary novelties, the rates of
origin, loss, and preservation of gene duplicates are not well understood.29

Behe and Snoke have pointed out that evolutionists must assume that multiple mutation events are required to produce a new functional gene, and each of the mutations must not be deleted until the gene has evolved to the degree that positive selection occurs.30 Meanwhile, however, a duplicated gene may produce defective proteins that can be toxic or fatal, or, at the least, will tax the cell's resources and waste amino acids and energy. Because of this, natural selection acts on gene duplications, most often by deleting them
from the gene pool or by degrading them into non-functional pseudogenes. This is because fully functional duplicated
genes, in combination with the corresponding parent gene, produce abnormally abundant quantities of transcripts. This
over-expression often alters the fragile molecular balance of gene products on a cellular level, ultimately resulting in
deleterious phenotypic consequences.31

Zhang, in a study of gene duplication, concluded that many duplicated genes
become degenerate, nonfunctional pseudogenes and, in only rare cases, a new function may evolve, as is believed to
have occurred in the douc langur monkey.32 These langurs have two copies of an RNA-degrading enzyme gene, while other
monkeys have only one copy. The extra copy aids the langur in digesting its specialized diet of leaves. Pseudogenes are
considered by some to be damaged genes, and by others a source of new genes,33 and recent work suggests that they may
be functional.10 Yet another functional problem, noted by geneticist Manfred Schartl, is that 'it would be very difficult for the first tetraploid fish', those with four rather than the usual two copies of each chromosome, 'to engage in sexual reproduction'.28
Although the globin gene family is the most commonly cited example of evolution by gene duplication, there is no evidence
to support this. Moreover, it is known that the various globin variants
of hemoglobin are designed to meet the differing demands for oxygen
metabolism during the various stages of embryological, fetal and
neonatal (and later) development.

Another putative mechanism
is partial duplication, which results in a gene mosaic. This condition,
called a patchwork gene, often consists of several different regions
that are similar to other genes. Because of this similarity it is assumed that the gene segments haphazardly combined until a rare combination occurred that was beneficial, so that this gene was selected. The most common hypothetical example is the LDL (Low-Density Lipoprotein) receptor. This relationship is hypothesized
because part of the LDL receptor is similar to the epidermal growth
factor hormone.

Some theorize that this part of the gene evolved from
a partial duplication of the epidermal growth factor gene. But how was
the function of the LDL receptor maintained until this gene evolved?
Without functional LDL receptors, a cell cannot effectively take in
lipids, causing not only a supply deficiency in the cell, but also excess
LDL in the blood, resulting in vascular problems from stroke, to
embolisms, to heart disease. An example is hypercholesterolemia, a disease caused by defective lipid receptors. The
victims often have strokes and heart attacks before their teens, even if on a low-fat diet.
Gene Families?
A group of genes that is closely related and theorized to have evolved by successive duplication is called a gene family, and
an even larger group of genes that has structural similarities is titled a gene superfamily. No evidence of ancient genes
exists to empirically document the theorized evolution of any gene family or superfamily. Instead, a gene family is
determined merely by making comparisons among existing genes, noting those that are similar.

But any arbitrary collection of items (words, ideas, or physical objects) can be grouped together to form 'families' and 'superfamilies', and no
exception exists for genes. An automobile and a lawnmower, for example, both belong to the four-wheeled machine family
but this does not necessarily imply common ancestry. We are therefore not compelled to believe that, because some genes have similar components, they evolved from a common ancestor.

The first genes speculated to have evolved as a result of gene duplication were the alpha and beta hemoglobin chains used to carry oxygen in erythrocytes.9 The globin
gene family is now the most commonly cited example of evolution by gene duplication. Myoglobin, a monomeric protein
found mainly in muscle tissue where it serves as an intracellular storage site for oxygen, is hypothesized to have evolved
into the tetrameric hemoglobin. Hemoglobin consists of two dimers, each one containing an alpha globin and a non-alpha
globin. The ancestral non-alpha globin, called beta globin, supposedly gave rise to the modern gamma, delta, and epsilon globin genes, and duplication of the alpha globin produced the zeta globin gene. These globin variants are all used
during different stages of embryological, fetal and neonatal (and later) development. The alpha, zeta and epsilon globin
chains are produced in the early embryo and, during about the third month, the latter chains are replaced by the gamma
chain and then later by the adult beta or delta chains at birth.

But all of this supposed evolution is based on nothing more than speculation. In real life, the multiple uses of globin molecules in oxygen metabolism are no more an indicator of blind
replication than is the multiple use of cogwheels in a clockwork mechanism. Just as each cogwheel is specifically structured
and located to do a particular job, is functionally integrated with its fellows to optimally do that job, and is precisely regulated
to do it at the right time, so are the globin molecules designed to meet the differing demands for oxygen metabolism during
the development of the organism. The site of hemoglobin synthesis also changes from yolk sac to liver to bone marrow
during development, so differing environments and transport systems are also involved. Disruption to hemoglobin synthesis
leads to a wide range of diseases, and neo-Darwinists have been unable to explain how development could have
proceeded successfully before the complex system was all in place.

Another example of duplication is believed to be the evolution of the human Major Histocompatibility Complex (MHC). But further study has likewise disputed some of these claims:

'Regions that are paralogous to the MHC on chromosomes 1, 9, and 19 have been proposed to result from ancient
chromosomal duplications, although this has been disputed based on phylogenetic analysis.'34
The gene duplication rate problem
Is gene duplication common enough to provide an adequate source for evolution? The proportion of duplicated genes can be as high as 17% in some bacteria and 65% in the plant Arabidopsis, but these are extreme examples.32 One empirical study by Lynch and Conery used
steady-state demographic techniques to accurately determine the number of duplicate genes. This study evaluated seven
completely sequenced genomes. From their research, they estimated that the average rate of duplication of a eukaryotic
gene to be on the order of 0.01/gene/million years, which is of the same order of magnitude as the mutation rate per
nucleotide site. The researchers concluded from their study that 'the origin of a new function appears to be a very rare fate for a duplicate gene' (emphasis mine).35

Another study, by Behe and Snoke,30 evaluated gene duplication by using
mathematical modeling and published gene-duplication data. Their model assumes the simplest route to produce a new
gene function: a duplicated gene that is free from purifying selection and subject to point mutation, and the minimum number
of biologically relevant modifications required to create a novel function. Because the minimum number of changes
necessary for most new gene functions is greater than one altered amino acid, and the number of changes needed in DNA
for each altered amino acid varies between one and three, definitive estimates are difficult to obtain. Nonetheless, a
reasonable estimate can be obtained in attempting to evaluate the validity of the duplication-mutation model. Behe and Snoke concluded that, even given liberal estimates, fixation of features requiring changes in multiple residues requires both population sizes and numbers of generations so large that they seem prohibitive. They concluded that gene duplication, coupled with point mutations, does not appear to be a promising mechanism for producing new proteins that require more than a single point mutation.

Standish concludes that the Behe-Snoke paper does not exclude the possibility that 'more
complex mechanisms involving larger mutations and/or selection of intermediate states acting on duplicated genes may
serve as engines of new gene production. The problem is that these other mechanisms appear to be even more complex
and thus less probable than the conceptually simple duplication-point mutation model Behe and Snoke examined. While
their paper suggests that other potential mechanisms should be rigorously examined before discarding gene duplication and
modification as a potential mechanism of evolution, it clearly demonstrates that even the most superficially reasonable
sounding Darwinian mechanisms should be carefully evaluated before they are accepted as truly reasonable' [emphases added].36

This study (and others) indicates that gene duplication does not appear to provide Darwinists with a significant source of new genes. Although many, if not most, genes are assumed to have arisen by gene duplication, a clear lack of evidence exists for gene duplication as the source of specific genes.12 Another major problem is distinguishing adaptations from exaptations. In other words, how do we know a gene resulted from duplication, and not by some other means such as
independent evolution?37
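To put the figures in this section in perspective, a back-of-the-envelope sketch (the gene count is an assumed round figure close to the human gene number quoted above, and the multi-mutation probabilities are purely illustrative, not Behe and Snoke's actual model):

# Expected duplicates from the Lynch & Conery rate quoted above.
genes = 20000                # assumed round gene count for a vertebrate genome
dup_rate = 0.01              # duplications per gene per million years (Lynch & Conery)
print(genes * dup_rate, "expected new duplicates per million years")   # 200.0

# Toy illustration (not Behe & Snoke's model): if a new function required k
# specific point mutations, each arising with probability p per gene copy per
# generation, the chance of all k co-occurring falls off geometrically with k.
p = 1e-8                     # assumed per-site mutation rate per generation
for k in (1, 2, 3):
    print("k =", k, "joint probability ~", p ** k)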
The indefinite regress problem
Gene duplication is a supposed method of exaptationthe takeover of an existing function to serve another purpose. Gould
believed exaptation was so important that the defining notion of 'quirky functional shift' [i.e. exaptation] 'might almost be equated with evolutionary change itself', in textbook parlance, the origin of evolutionary novelties.38 But this kind of
argument is fundamentally flawed. If all evolutionary novelties arise from something else that was itself exapted from
something else, then an indefinite regress results. The problem with an indefinite regress is that explanation A depends on
an earlier explanation B that you have not given, and explanation B itself depends upon an earlier explanation C that you
likewise have not given. While you may appear to be explaining something, there is no actual explanatory contentit is no
explanation at all.
The conservation problem
Multiple information conservation mechanisms are at work in all living
organisms, ranging from natural selection eliminating the unfit, through
various reproductive and chromosomal controls, to error correction
routines and DNA repair mechanisms, including (it appears) restoration
from non-DNA sources. As a result, many, if not most, genes are
evolutionarily conserved, meaning that they are very similar in many
unrelated organisms, both simple and complex, modern and ancient.
Many genes in the assumed earliest forms of life are very similar to
those in the most advanced forms. These facts argue strongly against
gene duplication as a mechanism of evolution, because they indicate
that most genes were optimally functional from the beginning.
Conclusions
The proposition that large scale evolution has occurred via gene
duplication is contradicted by numerous lines of evidence. Little
evidence currently exists to support the belief that gene duplication is a significant source of new genes, supporting one University of South Carolina molecular evolutionist's conclusion that scientists cannot prove that [genome duplication] 'didn't happen, but [if it did], it didn't have a major impact. For me, it's a dead issue.'10 It also is clear that the evidence for
gene duplication at present is totally inferential, and not empirical or experimental. Chromosome duplication can produce
useable variety (but only within what are most likely created kinds) in plants and invertebrates, and single gene duplication
appears to do likewise in rare cases in vertebrates, but otherwise gene duplication generally causes disease and deformity.
The existing experimental evidence does not support gene duplication as a source of new genes for at least populations of
fewer than one billion.30 According to Hughes, 'Everything we've looked at [fails to] support the hypothesis.'39 Darwinists
promote gene duplication as an important means of evolution, not because of the evidence, but because they see no other
viable mechanism to produce the required large number of new functional genes to turn a microbe into a microbiologist. In
other words, evolution by gene-duplication is yet another example of just-so story-telling.
Dawkins and the origin of genetic information
Is it legitimate to demand of evolutionists an explanation for the origin of genetic information?
Some amoebae have a huge amount of DNA in each cell, much more than humans. Does that mean they are more biologically complex? Hardly. It just shows we have a lot to learn yet.

The leading anti-Christian and eugenicist Clinton R.
Dawkins is not without his defenders. One questions us on the issue of genetic information, a question Dawkins had
immense difficulties with. Don Batten responds with instructive points about the latest discoveries about information and
meta-information (information about information), as well as pointing out the confusion between amount of DNA and amount
of information it holds.
I would like to respond to the 'Skeptics choke on frog' article regarding, among others, Richard Dawkins.
The idea that biological complexity equates to genetic complexity is completely wrong. Charging evolutionists to describe a mutation which would add information to an organism's genome is an irrelevant question. In fact, there ARE actually such
mutations, which will increase the volume of a genome and even add genes (they are due to the activity of some viruses
and of translocons, and to chromosomal recombination).
However, the evolution of organisms from simple to complex has nothing to do with how many genes an organism has or
how large its genome is. In science, we even have a name for the fact that the number of a species' genes has no relation to the relative complexity of that organism: it is called the C-value paradox. As an example, humans have approximately
20,000 to 25,000 genes. Rice has somewhere around 37,000 genes. If an organism's evolutionary complexity actually had anything to do with how large its genome is, or even with how many protein-coding genes it contains, then rice would be considered the paragon of evolutionary AND creationary mechanisms.
Nicole
USA
Dear Nicole,
Thanks for your query, which does afford us the opportunity to correct a misconception. You are correct: the complexity of an
organism is not to be measured simply by counting the number of protein-coding genes. Life is far more complex than that. I don't think we have ever suggested that the status of an organism is to be measured by the number of such genes and that therefore humans would necessarily have the most.

However, the evolution of a microbe into a complex organism such as
rice or a human does require the addition of new genes. For example, the simplest single-celled organism has about 500
protein-coding genes and humans have over 20,000. So, if we began as microbes in some primordial soup, as evolutionary
theory posits, then a lot of new genes had to be added by mutations, the only game in town for the evolutionist. There have
to be a lot of mutations that add such new genes, not just twiddle with the existing ones. For example, the genes that make
nerves and all the enzymes that enable nerves to operate are absent from microbes. They have to be created de novo if we
evolved from them. There are many gene families in humans that are completely missing from microbes, so there has to be
a viable mechanism for adding this genetic information if evolution is to be feasible. And mutations (accidental changes) of
one form or other are the only mechanism for Darwinism.

So the question to Richard Dawkins was a legitimate one. Indeed,
Dawkins himself says that it is the information in living things that evolution has to explain. He candidly admits this in the rest
of the interview that is included on the documentary. In The Blind Watchmaker Dawkins clearly outlines the problem of
information in living things. Of course, being a true believer in evolution by necessity of his atheism, he has to believe that
mutations and natural selection can do the job and he spends the rest of the book with various story-telling ploys to make a
case for the adequacy of evolution to create the required information.

Is rice more complex than a human because it has
more protein-coding genes? It might be, because it is an autotroph, meaning that it is capable of creating all its own energy-rich biochemical building blocks using the energy from sunlight (in photosynthesis). In contrast, humans are heterotrophs,
ultimately depending on plants to live. We are incapable of making many of the complex biochemicals needed for life; we
get them from plants. There are many genes involved in photosynthesis and the biosynthesis of the essential amino acids,
for example, that we do not have (the origin of photosynthesis is another conundrum for evolutionists; see Shining light on the evolution of photosynthesis and Green power: God's solar power plants amaze chemists). But we also have many genes that rice does not have: ones for making muscle fibres, nerves, hemoglobin, etc. So rice and humans are not really
comparable; it's like comparing apples with jellyfish.

Comparison of rice with humans is a red herring. No one has proposed
that humans evolved from rice, or vice versa. Evolutionists (such as Dawkins) readily admit that the evolution of humans
(and rice) involved the addition of a lot of new genetic information to simpler organisms that supposedly made themselves in
the beginning (another unanswerable problem for evolutionists; see Origin of Life Q&A). If someone wants to argue that all
the information needed to make a human was there in the beginning, it just makes the origin of life even more immensely
difficult to explain!
Life is more than genes
But life is more than genes. The very concept of a gene as the basic unit of heredity that controls everything is being
seriously questioned. The ENCODE project in particular has blown away the idea that life is just about protein-coding genes,
although there were prior indications that this was incorrect (see No joy for junkies). In short, the rest of the DNA of humans
and other complex organisms is not junk, but incredibly important. Basically, it controls how the genes workfor example,
why it is that hemoglobin is only produced in red blood cells when all cells have the genes for hemoglobin protein. And it
controls the incredible sequencing of genes so that orderly embryo development occurs. See also Meta-information: An
impossible conundrum for evolution.

So life is much more than genes. When we take into account the total DNA, rice has
466 million base pairs and humans have 3 billion (six times as much), which might be better for our egos than the
comparison of the number of genes. However, even the number of base pairs, or the picograms of DNA per nucleus, is no
adequate measure of genetic complexity. The smallest flowering plant genome is only about 0.1 picograms (flowering plant range 0.101–127.0 pg), whereas the largest algal genome is 19.6 picograms (algal range 0.01–19.6 pg).1 Clearly, flowering plants are much more complex than algae, so there is more to complexity than a simple comparison of genome sizes.

Protozoa (for
example amoebae) range from tiny to huge in their nuclear DNA amounts, some of them greatly exceeding the human
number of base pairs.2 It is not really understood why this is so. It could have something to do with cell size: organisms with large cells may have a form of endoreduplication, in which the DNA multiplies up to be able to provide enough mRNA transcripts to supply the large cell's protein requirements. Specialized, enlarged plant cells do this (I have measured
the relative amounts of DNA in the nuclei of such cells using microfluorimetry). Actual genome decoding does not suggest
that protozoan genomes are large in terms of numbers of different genes, although at present the largest amoeba genomes
have not been sequenced. Typical sequenced genomes of protozoans seem to be of the order of about 25 million base
pairs.3 I expect that the large protozoan genomes, when they are sequenced, will reveal large-scale duplication of genes, such that the total number of different genes will be of the same order as in other protozoans. In support of this, Amoeba
dubia, the one with the largest reported amount of DNA, is the largest sized amoeba cell known, being visible to the naked
eye, up to a millimetre in length. This compares with 0.009 mm diameter for a human red blood cell. Considering that the volume of a cell scales with more than the square of its linear dimensions, the volume of Amoeba dubia cells is huge (~10,000×) compared to human cells. This almost certainly has something to do with the huge amount of DNA it contains.
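As a rough consistency check on that figure, using only the two measurements given above:

# Linear size ratio between Amoeba dubia (~1 mm) and a human red blood cell (~0.009 mm).
linear_ratio = 1.0 / 0.009          # ~111
print(round(linear_ratio ** 2))     # ~12,000, of the same order as the ~10,000x volume figure above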
Your statement, 'The idea that biological complexity equates to genetic complexity is completely wrong', begs the question: what do you mean by 'biological complexity' and what do you mean by 'genetic complexity'? As we have seen, genetic complexity is far more than just counting the number of protein-coding genes. Much of it is only just
beginning to be discovered.
Adding information?
In fact, there ARE actually such mutations, which will increase the volume of a genome and even add genes (they are due
to the activity of some viruses and of translocons, and to chromosomal recombination).
I think you meant to say transposons, not translocons, which are quite different (and a huge problem in themselves for
evolution to explain, but that's another story). Actually, movement of DNA with transposons or viruses does not create any new information; it only transfers it around, as we have explained before; this does not explain the origin of the genetic
information. But there is now strong evidence that transposons are not parasitic DNA or endogenous retroviruses at all,
willy-nilly shifting chunks of DNA around at random, but are involved in the regulation of gene activity during embryo
development, for example (see the No joy for junkies article).

Recombination during meiosis also does not
create new information; it just selects from the existing alleles, giving different combinations in the offspring. Darwin made
the mistake of thinking that variety in offspring meant new features arising spontaneously, whereas we now know, following
the pioneering work of the famous creationist scientist, Gregor Mendel, that the variety is due to the recombination of
existing genes, not the creation of new ones. See Genetics: no friend of evolution. If Darwin had known what we know about
genes and mutations, he might not have become a Darwinist.

Evolutionists also claim that genes can be duplicated and this
is an increase in information. But if you write an essay of 5,000 words and it needs to be 10,000 words, you won't get any credit for photocopying (duplicating) the 5,000 to get the 10,000. That's what evolutionists are claiming when they say that
virus transfer or duplication increases information. See also Does gene duplication provide the engine for evolution?
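The point can be illustrated quantitatively: duplicating a text doubles its length but adds almost nothing that a compressor treats as new information. A minimal sketch (the example string is arbitrary):

import zlib

essay = ("No one has proposed that humans evolved from rice. " * 100).encode()
doubled = essay * 2   # 'photocopying' the essay

print(len(essay), len(zlib.compress(essay)))      # original: length vs compressed size
print(len(doubled), len(zlib.compress(doubled)))  # doubled: twice the length, compressed size barely grows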
The question to Professor Dawkins was quite legitimate, as he himself readily admits in his voluminous works. Evolution has
to explain the origin of enormous quantities of information in living things (but it can't).
I hope this helps answer your questions.
Sincerely,
WHAT ABOUT JUNK DNA
Junk DNA: evolutionary discards or Gods tools?
by Linda K. Walkup
Summary
Junk DNA is thought by evolutionists to be useless DNA left over from past evolutionary permutations. According to the
selfish or parasitic DNA theory, this DNA persists only because of its ability to replicate itself, or perhaps because it has
randomly mutated into a form advantageous to the cell. The types of junk DNA include introns, pseudogenes, and mobile
and repetitive DNAs. But now many of the DNA sequences formerly relegated to the junk pile have begun to obtain new
respect for their role in genome structure and function, gene regulation and rapid speciation. On the other hand, there are
examples of what seem to be true junk DNAs, sequences that have lost their functions, either through mutational inactivation that could have occurred post-Fall, or through time limits set on their functions.

Criteria are presented by which to identify legitimate
junk DNA, and to try to decipher the genetic clues of how genomes function now and in the past, when rates of change of
genomes may have been very different. The rapid, catastrophic changes in the earth caused by the Flood may also have
been mirrored in genomes, as each species had to adapt to post-Flood conditions. A new creationist theory may explain how
this rapid diversification came about by the changes caused by repetitive and mobile DNA sequences. The so-called junk
DNAs that have perplexed creationists and evolutionary scientists alike may be the very elements that can explain these mechanisms.

The last decade of the 20th century has seen an explosion in research into the structure and function of the
DNA in genomes of a wide range of organisms. As of April 2000, the whole genomes, or full DNA complements, of over 600 organisms have been sequenced or mapped.1 The sequence of the fruit fly genome, just completed, has over 130 million
base pairs (bp) and is the largest genome sequenced so far.2 The first complete human chromosome has been
sequenced,3 and the Human Genome Project expects to complete its work sometime in 2003, as does the Mouse Genome
Project. Researchers in the new field of genomics (the comparison of the structures, functions and hypothetical evolutionary relationships of the world's life-forms) are working furiously to deal with the huge inflow of data. Now more
than ever, scientists can see at the most basic level the similarities and differences of organisms, and are seeking to
understand how the blueprints of cells are decoded and regulated.

A major goal of genomic studies is to understand the role,
if any, of the various classes of so-called junk DNA. Junk or selfish DNA is believed to be largely parasitic in nature,
persisting in the genomes of higher organisms as evolutionary remnants by their ability to reproduce and spread
themselves, or perhaps because they have supposedly mutated into a function the cell can use.
Origin of the junk DNA hypothesis
The idea that a large portion of the genomes of eukaryotes*4 is made up of useless evolutionary remnants comes from the
problem known as the c-value paradox, c meaning the haploid* chromosomal DNA content. There is an extraordinary
degree of variation in genome size between different eukaryotes, which does not correlate with organismal complexity or the
numbers of genes that code for proteins. For instance, the newt Triturus cristatus has around six times as much DNA as
humans, who have about 7.5 times as much as the pufferfish Fugu rubripes.5 The c-value between different frog species
can differ by as much as 100-fold.6 Early DNA-RNA hybridisation* studies and recent genome sequencing results have
confirmed that >90% of the DNA of vertebrates does not code for a product. Much of this variation is due to non-coding (i.e.
not producing an RNA or protein product), often very simple,
repeated sequences. With the discovery that many of these
sequences seemed to have arisen from mobile DNAs which are
able to reproduce themselves, the selfish or parasitic DNA
hypothesis was born.7,8 This said that these sequences served no
function in the host organism, but were simply carried on the
genome by their ability to replicate or spread copies of themselves
within and even between genomes. Plasterk stated it this way when he wrote about transposons*, one of the junk DNA types: 'This ability to replicate is a sufficient raison d'être for transposons; they have the same reason for living as, say, the readership of Cell: none. They exist not because they are good, pretty, or intelligent, but because they survive.'9
Just as Plasterk was wrong about our reason for living, he is wrong
about the purposes of these DNA sequences. Recent research has
begun to show that many of these useless-looking sequences do
have a function, and that they may have played a role in
intrabaraminic10 (within-kind) diversification.
Types of junk DNA
There are four major kinds of junk DNA:
introns, internal segments in genes that are removed at the RNA level;
pseudogenes, genes inactivated by an insertion or deletion;
satellite sequences, tandem arrays of short repeats; and
interspersed repeats, which are longer repetitive sequences mostly derived from mobile DNA elements.

Figure 1. Only portions of a eukaryotic gene code for a protein product.
Introns
After most eukaryotic genes and a very few prokaryotic* (bacterial) genes are transcribed, or copied into RNA, there are
segments that are cut out of the messenger RNA (mRNA) before it is used as a template to make a protein (Figure 1).
Introns in fact form the majority of the sequence of most genes, as was seen when human chromosome 22 was sequenced
(Table 1). Why are these RNA pieces present if they are only to be discarded? Evolutionary theory tries to explain these as
vestigial sequences, or that they are useful only as sites at which recombination can safely take place to reshuffle exons
(coding or protein making segments) into new proteins or new forms of these proteins. Their ubiquity in eukaryotes argues
that they are not post-Fall aberrations, but designed features. What, then, could these throwaway segments be doing? There
are several possibilities emerging from recent research. One general regulatory role may be to slow down the rate of
translation*, as the splicing* process does take time. Alternative splicing allows greater diversity, as certain exons can be
skipped and spliced out to allow a different protein to be made from the same mRNA, as is seen in some viruses and in the
generation of diversity in antibodies. Another example is the CD6 gene, which is involved in T cell stimulation. Variable
splicing of exons gives rise to at least five different forms of the protein, which allows regulation of its activity.11 Another
observed mechanism by which introns can regulate gene activity is through the binding of the snipped-out intron RNA to
DNA or RNA. There are now a few examples of the role of introns in regulating the genes they are in, as well as other
genes. One interesting example is the lin-4 gene intron from the nematode Caenorhabditis elegans. A developmental control
gene was found to reside in the intron of another gene (Figure 2). 12,13 The small RNA encoded by lin-4 binds to the mRNA of
another developmental gene, lin-14, blocking its ability to make protein. The binding site in lin-14 was in another supposedly
useless stretch of RNA, the 3′ untranslated region (3′ UTR*) found after the last coding region. It was later found that lin-4 RNA also binds to the 3′ UTR in another gene in the developmental pathway, lin-28.14 In fact, more and more cases of
3′ UTRs performing gene regulatory activities have been observed.15,16 There are examples of protein-encoding genes within
introns of other genes that have been recently discovered. For example, on human chromosome 22, the 61-kilobase (kb)
TIMP3 gene, which is involved in macular degeneration, lies within a 268-kb intron of the large SYN3 gene, and the 8.5-kb
HCF2 gene lies within a 27.5-kb intron of the PIK4CA gene.3
Some introns also play a role in mRNA editing, a process
where the A (adenine) residues in the mRNA are changed to G
(guanine).17 Self-complementary* or
exon-complementary
intron sequences, can bind to each other to form a hairpin loop
structure, allowing the sequence of the RNA to be changed
after transcription* from the DNA. Thus introns can cause new
messages to arise from a gene without altering its DNA coding
sequences. The most general function of introns may be to
stabilize closed chromatin* structures in, and around, genes
and their associated regulatory DNA elements. 18,19 An
isochore* is an approximately 300-kb segment of DNA whose
base pair composition is uniform above a 3-kb level, for
example 67% A-T bp.20 The general ability of an isochore to be
transcribed is dependent on the accessibility of its DNA, i.e.
how tightly histones* and other DNA-binding proteins wrap up
the DNA. This is seen as being at least partially dependent on
the A-T or G-C bp content of a segment of DNA. Though this
content can be skewed somewhat by the choice of triplet
codons* used in the coding DNA (since the code is
redundant21), exons are still constrained in their ability to vary
the bp content. The presence of introns throughout genes
allows the proper levels to be maintained, and indeed introns
reflect the general isochore type much more closely than the
coding regions. The presence of introns may well be a
condition for at least some forms of sectorial repression like
superrepression, where large sections of chromatin are altered to turn off groups of cell-type-specific genes or developmental genes. It was shown, for example, that the gene for rat growth hormone, when deprived of its introns, was no longer able to form its normal, more condensed structure when reinserted back into cells.22

Table 1. Types and amounts of DNA sequence classes of the sequenced euchromatin* of human chromosome 22 (after Dunham et al.3). Notes: 1. All other types of interspersed repeats seen, not detailed here. 2. Tandem repeats from 2 to 5 bp in length (microsatellite DNA); it is estimated that most of the remaining DNA not sequenced is satellite DNA, as tandem repeats are mostly located in the heterochromatin, which was not sequenced. 3. Includes all tandem and interspersed repeat types.

It is important to know whether the specific sequence of an intron is required for its function when constructing phylogenetic or family trees, or when determining baraminic* placement of an organism. In evolutionary studies, DNA sequence comparisons are used to try to build
phylogenetic trees to trace ancestors to descendants. Since
introns are generally believed to be free from the constraints of
functionality when mutations cause changes in their sequence,
introns in a particular gene are often compared between organisms, with the bp differences seen between their sequences
supposedly indicating the degree and time of divergence since they last shared a common ancestor. In some instances, the
assumption that an intron is likely to have mutated freely and extensively during the presumed millions of years of
evolutionary history has proved wrong. Koop and Hood found that the DNA of the T cell receptor complex, a crucial immune
system protein, is 71% identical between humans and mice over a stretch of 98-kb of DNA. This was an unexpected finding,
as only 6% of the region encodes protein, while the rest consists of introns and non-coding regions around the gene. 23 Does
it follow then that we have a recent common ancestor with mice? Since this does not fit in with evolutionary theory, the
authors conclude instead that the region must have specific functions that place constraints on the fixation of mutations. This
illustrates that DNA sequence comparisons to establish evolutionary relationships are not the independent tests that they
are claimed to be. If the data do not support the desired evolutionary theory, ad hoc explanations of altered rates of
mutation, functional constraints, etc., can be brought in to explain away discrepancies.24

Another example of selective interpretation of DNA sequence comparison data using introns is the study of an intron in an important sperm maturation gene on the Y chromosome of humans.25,26 It was hoped that the ancestry of modern humans could be traced by sequencing this 729-bp intron from 38 different men from different ethnic groups. Surprisingly, all 38 men had exactly the same sequence, which was then interpreted as a recent common ancestor (27,000–270,000 years ago) for the whole human race, or possibly that the intron had functional constraints on its mutability. This latter premise was rejected by the authors because the sequence of the same intron in chimp, gorilla and orangutan was progressively more different. These data would strongly support the young age view that there was a severe bottleneck in the human population when the Flood reduced the varieties of Y chromosomes to the one shared by the survivors. Apes would not be expected to have exactly the same sequence as humans, as they are from separate created kind(s). The fact that they do have a similar intron argues for a function for this sequence, and the intron may have been originally created slightly different for proper function in an ape versus a human.

Figure 2. Interaction of two junk RNAs regulates a developmental gene.

Thus evidence is mounting to support the important role of introns in gene regulation and chromosome structure, which would remove 8.15%27 of the junk DNA of the human genome from the trash heap.
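To make the exon/intron arrangement and the alternative-splicing idea discussed above more concrete, the short sketch below (in Python; the gene, exon and intron sequences are invented purely for illustration) shows how removing introns, and optionally skipping an exon, yields different messages from the same gene. Real splicing is of course carried out on RNA by the spliceosome, not by string operations.

# Toy illustration only: exon and intron sequences below are invented.
gene_segments = [
    ("exon1",   "ATGGCT"),
    ("intron1", "GTAAGTTTTCAG"),   # removed during splicing
    ("exon2",   "GGATCC"),
    ("intron2", "GTACGTCTGCAG"),   # removed during splicing
    ("exon3",   "TTCTAA"),
]

def splice(segments, keep_exons):
    """Join the named exons in order; introns (and skipped exons) are discarded."""
    return "".join(seq for name, seq in segments if name in keep_exons)

normal_message   = splice(gene_segments, {"exon1", "exon2", "exon3"})
alternative_form = splice(gene_segments, {"exon1", "exon3"})   # exon2 skipped

print(normal_message)    # ATGGCTGGATCCTTCTAA
print(alternative_form)  # ATGGCTTTCTAA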
Pseudogenes*
Occasionally located near functional genes or gene families, there are sequences that very closely resemble other functional
genes, but have been inactivated in some way. Some have a mobile element inserted in their open reading frames (ORFs*),
others seem to be processed genes, i.e. they look as though the RNA from another gene has been reverse transcribed
(RNA used as a template to make DNA) and reinserted back into the DNA (Figure 3). A processed pseudogene* thus
precisely lacks the introns, possesses 3'-terminal poly-(A) tracts*, and lacks the upstream promoter* sequence required for
transcription of the corresponding parent gene. Pseudogenes are common in mammals, but virtually absent in Drosophila.28 Nineteen percent of the coding sequences identified in human chromosome 22 were designated as
pseudogenes, because they had significant similarity to known genes or proteins but had disrupted protein coding reading
frames. 82% appeared to be processed pseudogenes.3 Many pseudogenes have additional mutations in them, presumably
because there is no functional constraint on their mutation. For example, the human beta-tubulin gene family consists of 15–20 members, of which five have these pseudogene hallmarks.29 Some pseudogenes affect gene activity by binding
transcriptional factors that activate the normal gene. Whether this is intentional design or something the organism has
simply adjusted to is difficult to say. Many pseudogenes do seem to fit the profile of true junk DNAs.
Repetitive DNA sequences, including mobile DNA sequences
Repetitive DNA sequences form a substantial fraction of the genomes of many eukaryotes (Table 1, Table 2).30,31 This class includes satellite DNA (very highly repetitive, tandemly repeated sequences), minisatellite and microsatellite sequences (moderately repetitive, tandemly repeated sequences), the new megasatellites (moderately repetitive, tandem repeats of larger size) and transposable or mobile elements (moderately repetitive, dispersed sequences that can move from site to site; see Table 2). When first discovered, they did not seem to confer any benefit to the host organism, as their ability to move about the genome and/or cause recombination between different homologous copies has often resulted in deleterious mutation and disease. We now know that at least some of these sequences carry out important functions.

Figure 3. Comparison of integrated mobile DNA sequence structures.

Table 2. Types of eukaryotic repetitive DNA sequences.

Satellite sequences

The functionality of a sequence of 2 or 3 bp repeated a thousand or so times is not immediately apparent. In addition, the lengths and compositions of these repetitions often vary wildly between species, between organisms of the same species, or even between cells of the same organism. But greater understanding has come as scientists realize how DNA acts not only as the information source for the cell, but also as the library in which it is housed.32 It is beginning to be seen that the dispensability of sequences is not an indicator of their non-functionality, and that in many cases, repetitive sequences tend to fill functions collectively rather than individually. Satellite sequences vary in their repeat size and in their array size (Table 2). Microsatellites are the smallest, at a repeat size of as little as 2 bp, and the newly discovered megasatellite sequences, which actually can contain ORFs, are 4–10 kb long.33 The actual sequence repeated differs from species to species, and repeats can differ slightly from one another. The number in an array can vary between individuals, which is why forensic DNA fingerprinting techniques use mini- and microsatellite differences to identify individuals.
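As a rough illustration of why repeat-number differences are useful for identification, the sketch below (Python; the locus, repeat unit and repeat counts are invented for the example) simply counts how many copies of a short repeat unit occur in a run at a hypothetical locus. Real DNA fingerprinting compares such counts across many loci and both alleles.

import re

def longest_repeat_run(sequence, unit):
    """Return the length, in repeat units, of the longest tandem run of `unit`."""
    runs = re.findall(f"(?:{unit})+", sequence)
    return max((len(run) // len(unit) for run in runs), default=0)

# Hypothetical alleles of one microsatellite locus from two individuals.
allele_individual_1 = "GATTC" + "CA" * 12 + "TGGA"   # 12 CA repeats
allele_individual_2 = "GATTC" + "CA" * 17 + "TGGA"   # 17 CA repeats

print(longest_repeat_run(allele_individual_1, "CA"))  # 12
print(longest_repeat_run(allele_individual_2, "CA"))  # 17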

Functions of satellite sequences


The first recognised function of these types of sequences was in organising the centromeres, the constricted sites on each
chromosome where the chromosomes attach to cellular tethers and are pulled apart during meiosis and mitosis. These
sequences help condense the DNA region they are in into heterochromatin*. One hypothesis of the collective functionality of
repeat sequences is that long stretches of noncoding sequences act as tethers, permitting placement of groups of genes
into different zones in the cell nucleus.19 Transcriptionally inactive heterochromatin and the heterochromatin-like telomeric
sequences (sequences at the end of chromosomes), may associate their
respective chromatin segments much of the time with the nuclear
periphery. Very long runs of gene-poor, AT-rich isochores*, would be the
tethers that permit the gene-rich, GC-rich isochores to distribute
themselves into the appropriate nuclear zones for transcription and RNA
processing.The importance of the sequences of satellite DNA is reflected
when these sequences are mutated. A mutation in a minisatellite just
after the end of the Harvey ras gene (which encodes a growth regulatory
protein) may contribute to as many as 10% of all cases of breast, colorectal and bladder cancer, and acute leukemia. The mutant minisatellites bind a transcriptional regulatory factor,34 which causes an abnormal increase in transcription of the Harvey ras gene (Figure 4).

Figure 4. Mutations in minisatellite DNA can result in cancer. Mutated satellite DNA near the Harvey ras gene (a major regulator of cell growth) can bind a protein that increases ras activity.

Retroviruses* and retroelements*

These class I mobile elements reproduce themselves through an RNA intermediate which, in a reversal of the usual DNA to RNA transcription,
is reverse transcribed to DNA by the reverse transcriptase* enzyme encoded on intact elements. One of the remarkable
findings of the human genome project is that a high percentage (35–40%) of human nuclear DNA consists of dispersed
retroelements (Table 1).35 Short and long interspersed elements, SINEs and LINEs, make up the majority of this class of
DNA, with Alu and LINE-1 (L1), respectively, being most abundant in humans.36 L1 elements encode their own reverse
transcriptase, that probably is also responsible for the spread of SINEs, which lack this enzyme. HIV-1, the AIDS virus,
human endogenous retroviruses* (HERVs), and solitary long terminal repeats (LTRs*) apparently derived from HERVs, are
also part of this class of retroelements (Figure 3, Table 2). Most eukaryotic retrotransposons* move only sporadically in the
genome. An exception is the hybrid dysgenesis seen in Drosophila, where if flies containing a retrotransposon are mated to
flies not containing the particular retrotransposon, the element transposes with a high frequency, resulting in death or
mutation of many of the progeny. Host factors, many not well characterized as yet, seem to keep the transposition rate in
check (see below).
Functions of retroelements
Do these abundant elements have functions, or have hapless eukaryotic genomes been parasitized by selfish DNA? There
are more and more examples of these elements performing important functions. One example is the Alu family. This 300-bp
sequence (named for the enzyme used to identify it) occurs almost a million times in the human genome, up to 3.5% of the
total DNA (Table 2). It is estimated, and has been seen in many cloned genes, that there are 4 or 5 Alu elements in every
gene. Despite their number, they have been generally considered parasitic DNA, with occasional deleterious effects on the
genome when they exercise their ability to retrotranspose to sites in and near genes, or recombine with each other
abnormally. Such disruptions have caused neurofibromatosis, or elephant man's disease.16 Mutations in the Alu sequence
also have been associated with cancer. Alu sequences have been found to affect the functions of at least 8 different genes
(Table 2).37,38 Though Alu sequences do have internal promoters for RNA polymerase III (an enzyme which transcribes
genes encoding RNAs needed for translation of mRNA into protein), normally very little RNA is produced from all
these Alu sequences. However, under certain stressful conditions such as a viral infection, these transcripts increase
dramatically and affect protein synthesis levels to help the cell deal with the stress. 39 Thus, though individual Alu elements
have a very weak effect, hundreds of thousands of them together can affect protein synthesis. Epigenetic control mechanisms, or modifications of gene activity that are due to modifications of the DNA itself and not its sequence (see below), are associated with repeats. A repeat-induced process involving L1 retroelements has been hypothesised for X-chromosome inactivation, which is necessary to maintain proper gene dosage in females, who have two X chromosomes (Table 2).40 Endogenous retroviruses (that is, those that are obtained from inheritance rather than infection) can also affect gene expression. The LTRs of two such viruses provide the sequence signal for the polyadenylation* of the mRNA of two
newly discovered human genes.41 An L1 repeat was found to provide the polyadenylation signal for the mouse thymidylate
synthase gene. Retrotransposons were also seen to help in repairing chromosomal breaks in yeast. Retroelements
modulate expression of many more genes.42
DNA transposons*
DNA transposons, or class II transposable elements, move from place to place by replicative transposition (that increases
the copy number) or by a simple cut-and-paste mechanism. Though in general not as common or in as high a copy number
as retroelements, they are still found in most organisms. Examples are the Drosophila P elements, bacterial transposons
such as Tn10 and Tn7, the Mu phage, and the ubiquitous mariner/Tc1 superfamily of transposons. The mariner/Tc1 family is
the most widespread, being found in most insects, flatworms, nematodes, arthropods, ciliated protozoa, fungi and many
vertebrates, including zebra fish, trout and humans.43 Copy number varies from two copies in Drosophila sechellia, to 17,000
in the horn fly Haematobia irritans, accounting for 1% of the genome. The vast majority of them appear to have been
inactivated by multiple mutations. The close homology between mariner/Tc1 elements found in species thought to have
diverged 200 million years ago has fuelled the hypothesis that these elements can transfer horizontally (that is, not by
normal inheritance) between different species, or even different phyla (see below). Again, the evolutionist gets to pick and
choose from his smorgasbord of explanations when the data do not fit the evolutionary tree.
Miniature inverted-repeat transposable elements (MITEs)
A recently discovered third class of mobile elements is the miniature inverted-repeat transposable elements (MITEs).44–46 They are very small (125–500 bp), and have short terminal inverted repeats. They were first found in plants, but have also been found in nematodes, humans, mosquitoes and zebrafish.47–50 They are found in the thousands and tens of thousands per genome, and have been given colourful names (e.g. Tourist, Stowaway, Alien and Bigfoot) to reflect their apparent ability to move about in the genome. Their mechanism of transposition is still unknown, but they appear to be DNA elements that cannot move about on their own (non-autonomous). Though none seem to be presently active, they are believed to have been mobile in the recent past because of the high levels of sequence similarity between elements in a particular family, and the differences in insertion sites seen in closely related species.51 MITEs are particularly interesting in terms of generating genetic variation in that they are preferentially associated with genes (see below).46,52
Effects of mobile and repetitive elements on gene expression
Mobile elements and repetitive elements can alter the structure and
regulate expression of the genome in several different ways. As
described earlier, transposition can disrupt genes by direct insertional
mutagenesis and can adversely affect transcription. Many
retrotransposons have strong constitutive (always on) promoters that can
cause inappropriate expression of downstream genes. If the promoter is
in the opposite direction of the gene, RNA complementary to the mRNA
of the gene can be made that can act as antisense RNA* that binds up
the mRNA, affecting translation.
Recombination between similar DNA strands is a necessary process for
repair of DNA breaks and allele* shuffling between homologous
chromosomes. But the presence of mobile and repetitive elements in
inappropriate positions can result in recombination products that are deleterious, such as translocations*, inversions*, and other chromosomal rearrangements (Figure 5). For example, it was shown that a widespread chromosomal inversion commonly seen in Drosophila buzzatii is caused by the recombination between two copies of a transposable element in opposite orientations.53 There can even be an exchange of DNA between non-homologous chromosomes, such as was seen in maize, in this case mediated by the recombination of one complete and one partial copy of the Ac (Activator) transposable element.54

Figure 5. Recombination between direct repeats causes the loss of the DNA between them.

Target site selection in mobile DNA


Many of the retrotransposons and DNA transposons seem to have very little site-specificity in where they
integrate.55 Integration sites for most mammalian and Drosophila retroelements appear to be distributed more or less
randomly in the genome. Vertebrate retroviruses do have a general preference for insertion into regions with an open
chromatin configuration.56
However, there are some specific ones that do show target selectivity.51 R2 is a non-LTR retrotransposon that inserts
preferentially in the 28S ribosomal RNA genes of various insect species. Group II introns present in some yeast
mitochondrial genes (genes carried in the energy-producing organelles in the cell), are mobile elements very similar to poly(A)-type retrotransposons. After copying themselves, they can reinsert precisely back into their spots between two exons. Their ability to move argues for their spread into various genes at some point in time. The yeast retrotransposons Ty1 and Ty3 integrate preferentially upstream of genes transcribed by RNA polymerase III, which transcribes genes needed for
protein synthesis. Very recently, evidence has been found that certain P elements* containing regulatory sequences from developmental genes showed a high frequency of reinserting at the parent gene (homing) and preferential insertion at another site containing regulatory genes.57 The first example known of a host using the movement of a retrotransposon to its advantage was found in the telomere maintenance of Drosophila. The telomeres, or chromosomal ends, of Drosophila are maintained differently than in any other known organism. Two retroposons, HeT-A and TART, are present in multiple copies on
the telomeres, and will retropose specifically to the end of the telomere and heal a frayed chromosome.58
Observed regulation of mobile DNA
Epigenetic mechanisms, or reversible but heritable changes in chromatin structure, are seen to play a role in regulating
genes. Methylation of cytosine residues, modification of the DNA-binding histones, and production of antisense RNAs, are
some of the mechanisms by which gene expression can be modified without permanent genetic change to the gene
regulated.59 Methylation of the cytosine residues of DNA is used by the cell to turn off genes not currently needed. Cytosine
methylation inactivates the promoters of most viruses and transposons (including retroviruses and Alu elements). In fact,
transposons are so abundant, rich in CpG dinucleotides and heavily methylated, that we now know that the large majority of
5-methylcytosine in the genome actually lies within these elements.60 This prevents the movement of the elements under
normal circumstances. Thus transposable elements that integrate into promoters of genes can alter gene expression
patterns by attracting methylation or chromatin modifications to regulate the modified promoter.53 Drosophila, in general, are very vulnerable to mutation by mobile element activity. From 50–85% of all spontaneous mutations seen in the fruit fly are due to transposon insertions.53 But Drosophila does have one type of host control in the recently identified gene named flamenco. Flamenco normally acts to keep the gypsy retrotransposon in check. When flamenco is mutated, gypsy
transposes at a high frequency in germ line (reproductive) cells.50
Criteria for identifying junk DNA
There are several possible scenarios for the presence and function of the putative junk DNA sequences described above:

1. They all perform designed functions in present day organisms in their present form and location, though current research has not revealed what those are as yet. This is unlikely, as it seems clear that in some individuals and species, the placement or particular sequence of one of a family of non-coding DNAs can lead to deleterious effects such as cancer and genetic disease. This would contradict the young age model of original perfect creation.

2. All non-coding sequences could have been created with functions, but some have lost their functions due to purposeful limitations, and/or accumulation of mutations post-Fall. This would fit in with our observation of the rest of creation, where, though the perfection of design can be seen, it has become obscured by consequences of the Fall, allowing death and suffering to enter the world.

3. There is the possibility that some of the elements, such as the mobile elements in particular, have never had designed functions. Rather, they are pieces of degenerate DNA affected by the Fall that randomly move about and mutate genomes, causing only deleterious effects.

The ability of DNA sequences to rearrange and/or to move about in the genome or even between genomes was originally a heretical idea for both evolutionist and creationist, but now is one that is strongly supported as being an integral part of gene regulation. Many systems utilizing similar recombination and rearrangement mechanisms are necessary for important cellular functions, such as the process of DNA repair, rearrangement of DNA segments to form the genes for the thousands of different antibodies, the yeast mating type switching system, the flagellar switching system of Salmonella, and the antigen switching system of the malaria parasite. Therefore, the second scenario seems the most likely.

A working list of criteria needs to be developed to attempt to identify DNA sequences that may actually fit the category of junk DNA. The presence of some junk DNA would be expected due to the fallen state of genomes. True junk DNA may have one or more of the following characteristics:
The DNA element is present within another gene, insertionally inactivating it.
The DNA element is not found at that location in other members within the same species.
The effects of the presence of the element, if known, are deleterious, e.g. lead to cancer, genetic disease, etc.
The element can be deleted without any observed ill effects on the organism or many generations of its descendants.
The sequence of the element closely matches that of a mobile element, or contains a mobile element sequence.
For example, pseudogenes have many of these junk DNA characteristics, though their transformation into junk DNA may in
some cases have been intentionally arranged by the designer for the purpose of rapid diversification of created kinds.
The AGEing theory and diversification
There are, as described above, instances of functions for transposable DNAs, but until recently there has not been a
particular purpose ascribed to repetitive and mobile elements as a group. A new hypothesis formulated by genomicist and
creationist Wood addresses the past and present functions of mobile and repetitive DNA.61 Since these elements are capable of rapid change of the genome, and can even be transmitted horizontally between species, he proposes that they were designed to move about or recombine in the genomes of organisms to allow the rapid intrabaraminic diversification seen in the 500 years or so after the Flood. He sees their role as being designed to act for a limited period of time, after which they would be inactivated by mutation or repression by other regulatory elements. He proposes that such elements should be renamed Altruistic Genetic Elements (AGEs) to emphasize that their purpose is different from that proposed for selfish DNA.

The AGEs are hypothesised to work by activating dormant genes or inactivating active genes, or by horizontally transferring genetic information between species or possibly baramins with AGEs in the form of mobile elements. The phenotypic changes would be primarily cosmetic, such as variations in size or coloration, or would involve activation of a complex of genes needed to utilize a new environmental niche, like the Arctic fox's adaptation to cold. There is a need for creationists to explain how a holobaramin such as the cat family62 could diversify into the many species of cats that were present even in Job's time in just a few thousand years or possibly a few hundred years. Currently observed genetic mechanisms and natural selection are far too slow to explain this rapid speciation. A limited time period of AGE activity could explain how this rapid diversification could occur.

If, for example, the proposed AGEs were at work in the diversification of the equines, we have the testable prediction that differences in size, morphology and coloration could be traced back to the genetic level by mobile or repetitive DNA elements located near genes controlling coloration.

Pseudogenes and relic retroviral sequences could then be the result of the action of an AGE gone wrong after its designed
activity began to fail. The AGEing theory could also solve the founding pair problem: that is, when a rare macromutation occurs in an individual such that it cannot successfully hybridise with its parental species, this mutation is lost unless it can mate with another animal with the same mutation.

For this proposed AGEing process to work, at least three things must be observed in putative AGEs:
They must show site specificity in where they insert, or evidence that they had such specificity in the past.
Transmission of AGEs between organisms horizontally and into germline DNA is required.
We should see AGEs associated with genes affecting size, morphology, coloration, and specialised environmental adaptation rather than housekeeping genes.

As for the first requirement, though many mobile elements are not specific in their target sites, there are examples of those that are, as discussed above. Since AGE movement is supposed to have occurred largely in the past, we might expect to see only a few with the intact capability.

As for the second requirement, horizontal transmission*, the evidence for that occurring has become very strong,63 and in the case of the P and gypsy elements in Drosophila, such transmission has actually been observed occurring between species. Originally, no wild-caught D. melanogaster contained the P element, and laboratory stocks collected 60 years ago reflected this. Then gradually, more and more wild-caught flies contained the element originally found in D. willistoni, until now all wild flies even in remote locations contain this element.64 Recently, it was also shown that the copia retrotransposon from D. melanogaster was transferred to D. willistoni (probably via a parasitic mite).65,66 There was also a report that gypsy-free fruit flies permissive for transposition of the gypsy retroposon could incorporate gypsy into their germline DNA when larvae were fed on extract of infected pupae.67 There is no obvious evidence pointing to a functional change mediated by these horizontal transfers, but the principle is there.

As for the third
requirement, are there any examples known now of mobile or repetitive elements that can cause these types of phenotypic
changes? In bacteria, there are many examples of transfers of antibiotic resistance mediated by transposons, 68 and the
horizontal transfer of genes, though in general prokaryotes have comparatively little junk DNA. Some evolutionary
researchers now propose that mobile elements may be involved in speciation. Mobility of a retroelement was activated in a
cross between two wallaby species, though the hybridisation resulted in only sterile males. 69 In maize, the original studies of
Nobel Prize winner Barbara McClintock demonstrated that the activity of the transposons in different corn kernel cells could
be followed by their effects on corn kernel coloration. In plants, there is additional strong evidence that movement of mobile
elements in the past has altered gene expression. Although retrotransposon sequences, for example, are seldom found
near genes in animals, recent analyses of plant mobile element insertion sites have revealed the presence of degenerate
retrotransposon insertions adjacent to many normal plant genes that act as regulatory elements. 70 In addition to
retrotransposons, MITEs are also found adjacent to many plant genes, where they also often provide regulatory sequences
necessary for transcription.71 Plants, as well as animals, would have had to adjust to the drastically-altered post-Flood world.
Other, more dramatic examples may exist, and further research will hopefully reveal them.
Why debunk junk DNA?
What is the relevance to creation science, and to people in general, of a better understanding of the function of these DNA
elements? Because of the publicity surrounding the Human Genome Project, there is increasing general interest in how our
genomes work, and what exactly they look like. There is more and more emphasis being placed on discovering our
evolutionary history through DNA, not fossils. The fact that functions are being found for junk DNAs fits in well with creation
science, but was not predicted by evolutionary theory, though of course the theory is being adjusted again to accommodate
the data. The intricate flexibility and specificity of these junk DNA sequences are a strong testimony to a designer who
plans and provides for the future of his creation.
Glossary
Allele: one of several alternate forms of a gene occupying a given locus on a chromosome.
Antisense RNA: RNA made by copying the other DNA strand in a coding segment in the opposite direction; this RNA will bind to the mRNA made from the coding or sense strand.
Baramin: the creationist term for an original created kind; not synonymous with species. Organisms within the same baramin may be of different species but can cross-hybridise, like the horse and the donkey.
Complementary: two strands of DNA or RNA are said to be complementary when they can form base pairs (A-T, G-C) with each other, e.g. AATTCC and TTAAGG.
Chromatin: the complex of DNA and protein in the nucleus of the interphase cell.
Euchromatin: the less condensed chromatin in the nucleus that is more transcriptionally active than the heterochromatin.
Eukaryote: an organism with an organized nucleus.
Haploid: half the set of the chromosome pairs; contains one copy of each chromosome pair and one of the sex chromosomes; characteristic of gametes (sperm and egg cells).
Heterochromatin: regions of the genome that are in a highly condensed state and are not usually transcribed. Constitutive heterochromatin is always in this condensed, inactive state, contains no genes, and is usually found at the centromeres and telomeres. Facultative heterochromatin is condensed only in certain cell types, or at certain developmental stages when the genes contained in it need to be turned off.
Histones: a family of basic proteins found tightly associated with DNA in all eukaryotic nuclei; their binding forms a bead structure called a nucleosome.
Horizontal transmission: when mobile elements or viruses are transferred between individuals by infection rather than by inheritance (vertical transmission).
Human endogenous retroviruses (HERVs): retroviruses that have become part of the human genome in the past by insertion into the germline cells.
Hybridisation: the pairing of single-stranded complementary RNA and/or DNA strands to give an RNA-DNA or DNA-DNA hybrid.
Inversion: occurs when recombination between DNA segments causes the DNA between them to be flipped into the opposite orientation at the same chromosomal locus.
Isochore: an approximately 300 kb segment of DNA whose bp composition is uniform above a 3 kb level, for example 67% A-T bp. This is believed to enable a certain level of co-regulation of all the DNA in the isochore.
LTR: long terminal repeat; the longer, more complex repeated sequences at the ends of some mobile elements, which are required for them to transpose.
ORF: open reading frame; a stretch of DNA or RNA that contains a series of triplet codons coding for amino acids, without any protein termination codons, that is potentially translatable into protein.
P elements: DNA transposons found in fruit fly species that often have a high level of mobility.
Promoter: a region of DNA involved in binding of RNA polymerase to initiate transcription.
Poly-(A) tail: a sequence of adenine residues added to the 3′ end of an mRNA after transcription in the process called polyadenylation; believed to help stabilize mRNAs from being degraded.
Pseudogene: a gene that has been inactivated in the past by an insertion or deletion of DNA.
Prokaryote: an organism that lacks an organized nucleus, and has its DNA mostly in a single molecule; a bacterium.
Processed pseudogene: a gene that has apparently been reverse-transcribed from its mRNA back into DNA and reinserted into a chromosome. It thus lacks its introns, has a poly-A tail, and often is bounded by the characteristic direct repeats associated with transposition.
Retroelement: any sequence that transposes through an RNA intermediate.
Retrotransposons: mobile elements that encode reverse transcriptase and transpose through an RNA intermediate. Classed into LTR-containing and poly(A)-containing: LTR-containing elements are similar to the proviral form of vertebrate retroviruses and usually have 2 ORFs, gag and pol (protease, integrase, reverse transcriptase, RNase H), e.g. Gypsy and tom. Poly(A)-containing retroelements, or retroposons, lack LTRs and have a 3′ A-rich region; they have 2 ORFs, gag and pol. Some elements, such as L1 and the I Factor of Drosophila, contain a reverse transcriptase. L1 is found in yeast and humans.
Retrovirus: a virus using RNA as its information storage system rather than DNA; it integrates into host DNA as part of its lifecycle in a way very similar to retrotransposons, but also has additional genes that code for its packaging into virus particles for infection of other hosts.
Reverse transcriptase: an enzyme found in retroelements that will make a complementary DNA strand from an RNA template.
Splicing: two exons, or coding regions on a messenger RNA, are joined together when the intron (non-coding segment) between them is removed.
Translation: the synthesis of protein on the messenger RNA template.
Translocation: of a chromosome, describes a rearrangement in which part of a chromosome is detached by breakage and then becomes attached to some other chromosome.
Transcription: synthesis of RNA on the DNA template.
Transposase: the enzyme that cuts the target DNA and splices in the transposing sequence; called the integrase in retroelements.
Transposon: any DNA sequence that can move about the genome, either by replicating itself, or by a cut-and-paste mechanism. In its simplest form, it is a transposase gene (see above) surrounded by a sequence on either side repeated directly or in inverse form, e.g. ATTGCGC and CGCGTTA are inverted repeats.
Triplet codon: three nucleotides in an RNA or DNA that signal the insertion of a particular amino acid or termination signal; e.g. AUG would be the code word for methionine.
UTR: untranslated region; the parts of a messenger RNA before the first exon (5′ UTR) and after the last exon (3′ UTR) that are not translated into protein (non-coding).
Acknowledgements
The author wishes to thank Dr. Todd C. Wood for providing unpublished information on his AGEing theory and his rice
genome research. Thanks also to the editor for helpful discussions, information and patience with revisions.
The slow, painful death of junk DNA
by Robert W. Carter
So-called junk DNA has fallen on hard times. Once the poster child
of evolutionary theory, its status has been increasingly challenged
over the past several years. Functions for junk DNA have been cited
at other places on this website1 and in the Journal of Creation.2
In The Great Dothan Creation Evolution Debate,3 my opponent's
main argument, to which he returned again and again, rested on
junk DNA. I warned that this was an argument from silence, that
form follows function, and that this was akin to the old vestigial
organ argument (and thus is easily falsifiable once functions are
found). We did not have to wait long, however, because a new study
has brought the notion of junk DNA closer to the dustbin of
discarded evolutionary speculations. Faulkner et al. (2009)4 have put
junk DNA on the run by claiming that retrotransposons (supposedly
the remains of ancient viruses that inserted themselves into the
genomes of humans and other species) are highly functional after
all.
Background
Based on the work of J.B.S. Haldane (Haldane 1957) 5 and others,
who showed that natural selection cannot possibly select for millions
of new mutations over the course of human evolution, Kimura
(1968)6 developed the idea of Neutral Evolution. If Haldane's
Dilemma7 was correct, the majority of DNA must be non-functional.
It should be free to mutate over time without needing to be shaped
by natural selection. In this way, natural selection could act on the
important bits and neutral evolution could act randomly on the rest. Since natural selection will not act on neutral traits,
which do not affect survival or reproduction, neutral evolution can proceed through random drift without any inherent cost of

selection.8 The term 'junk DNA' originated with Ohno (1972),9 who based his idea squarely on the idea of Neutral
Evolution. To Ohno and other scientists of his time, the vast
spaces between protein-coding genes were just useless DNA
whose only function was to separate genes along a
chromosome. Can you see how the idea of junk DNA came
about? It is a necessary mathematical extrapolation. It was
invented to solve a theoretical evolutionary dilemma. Without it,
evolution runs into insurmountable mathematical difficulties. To
recap for emphasis: Junk DNA is not just a label that was tacked
on to some DNA that seemed to have no function; it is something
that is required by evolution. Mathematically, there is too much
variation, too much DNA to mutate, and too few generations in
which to get it all done. This was the essence of Haldane's work.
Without junk DNA, evolutionary theory cannot currently explain
how everything works mathematically. Think about it; in the
evolutionary model there have only been 3–6 million years since humans and chimps diverged. With average human generation times of 20–30 years, this gives them only 100,000 to 300,000 generations to fix the millions of mutations that separate humans and chimps. This includes at least 35 million single letter differences,10 over 90 million base pairs of non-shared DNA,10 nearly 700 extra genes in humans (about 6% not shared with chimpanzees),11 and tens of thousands of chromosomal rearrangements. Also, the chimp genome is about 13% larger12 than that of humans, but mostly due to the heterochromatin that caps the chromosome telomeres. All this has to happen in a very short amount of evolutionary time. They don't have enough time, even after discounting the functionality of over 95% of the genome, but their position becomes grave if junk DNA turns out to be functional. Every new function found for junk DNA makes the evolutionists' case that much more difficult.
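The arithmetic behind these figures is simple to check directly. The sketch below (Python, using only the round numbers quoted in this paragraph, which are assumptions of the argument rather than new data) recovers the 100,000 to 300,000 generation range and the implied number of single-letter differences that would have to be fixed per generation on average.

# Round figures quoted above; they are the argument's assumptions, not new data.
divergence_years    = (3_000_000, 6_000_000)   # assumed human-chimp divergence time
generation_years    = (20, 30)                 # assumed human generation time
single_letter_diffs = 35_000_000               # quoted single-letter differences

# Fewest generations: shortest time with longest generations; most: the reverse.
min_generations = divergence_years[0] // generation_years[1]   # 100,000
max_generations = divergence_years[1] // generation_years[0]   # 300,000
print(f"Generations available: {min_generations:,} to {max_generations:,}")

# Average number of single-letter differences that would need to be fixed
# per generation over that interval.
print(f"Differences fixed per generation: "
      f"{single_letter_diffs / max_generations:,.0f} to "
      f"{single_letter_diffs / min_generations:,.0f}")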
One of the important classes of junk DNA is retrotransposons, which were thought to be leftovers from ancient virus infections where bits of DNA from the viruses had been randomly inserted into the DNA of humans (for example). The idea that huge stretches of human DNA are useless junk left over from evolution is itself having to be progressively junked.

Enter Faulkner et al. (2009). Working in human and mouse, they discovered that between 6 and 30% of
RNAs13 start within retrotransposons. Their distribution is clearly not random. This was a shock in itself, but they added that
these RNAs are generally tissue-specific, as if there were different classes of retrotransposons involved in regulating gene
expression in different tissues. From the start, their conclusions do not seem to support the idea that retrotransposons are
evolutionary junk, but it gets better from there. It turns out that retrotransposons coincide with gene-dense regions and occur
in pronounced clusters within the genome, emphasizing the non-random distribution pattern. When they occur upstream of
protein coding genes, they provide an abundance of alternative start sites for transcription, producing abundant alternative
mRNAs and non-coding RNAs. On the downstream end, over one quarter of RefSeq (protein-coding) genes14 have a
retrotransposon in their 3′ UTRs,15 and these reduce the amount of protein synthesized. They concluded that these 3′ UTRs
are the site of intense transcriptional regulation. This is hardly something one would expect from junk DNA! Based on the
distribution of retrotransposons, they identified a whopping 23,000 candidate regulatory regions within the genome. In
addition, they found 2,000 examples of bidirectional transcription caused by the presence of retrotransposons (where the
DNA is read in both directions, not just one direction, which is thought to be the norm). At one point Faulkner et al. try to
downplay their results. They point out that only some retrotransposons contain active promoters and that only some of these
are functional. They do not advocate a universal function for retrotransposons. However, as Faulkner et al. also point out,
retrotransposons are highly abundant, with thousands of retrotransposon promoters immediately adjacent to protein coding
genes, influencing their regulation and, they assume, their evolution. They concluded that retrotransposons have a key
influence on transcription genome-wide, that they are multifaceted regulators of the functional output of the mammalian
transcriptome, that they are a pervasive source of transcription and transcriptional regulation, and that they must be
considered in future studies of the genome as a transcription machine. These results are stunning. With genome regulation
becoming more and more complicated, and with more and more of the genome being demonstrated to be functional, one
wonders how long evolutionists can hold to the idea of junk DNA? However, hold on to it they must, for without it they lose
one of their best arguments. But they just lost one of their favorite pieces of evidence: the presence of ancient deactivated
viruses in the genome. Rather than being functionless vestigial remnants of our past, retrotransposons turn out to be
functionally integrated into the amazingly complex regulatory apparatus of mammalian genomes! I'd like to point out that young-earth creationists do not require the entire genome to be highly functional. While I suspect that direct and indirect
controls of transcription will eventually be found for most of it, there may be very large stretches of the genome that just add
temporal structure to the functional parts. Think of them as scaffolding in a three-dimensional genomic skyscraper. Even
these portions will be functional (because of a need for structure), though they may not contribute directly to genome
regulation, and their sequence specificity might be very weak. We'll have to wait to see how it all works out in the end. For
now, let us take heart that one more weak link in the evolutionary line of arguments has been exposed.
No joy for junkies
by Don Batten
Before any sequencing of DNA had been done, evolutionists decided that fully 99% of
the human DNA must be inert or junk. They came to this conclusion because, according
to the calculations of population geneticists, if much more than 1% of the DNA sequence
of creatures such as humans actually mattered, then error catastrophe would have
resulted, because natural selection could not have eliminated the large number of
harmful mutations.1 When the DNA sequencing turned up only about 35,000 protein-coding genes in humans, the evolutionists seemed vindicated, except that we already
knew that DNA codes for more than just proteins. For example, the transfer-RNAs and
ribosomal RNA are coded on the DNA. And various segments of DNA-coded RNA were
being implicated as co-factors in various chemical reactions and in gene activation or
suppression. But what else does all that DNA do? Bit by bit, the idea of junk DNA has
been unravelling. There have been reviews and notes in Journal of Creation2–5 covering some of the exciting developments. Recently, a large chunk of the remaining junk has
been implicated in the control of embryo development. Scientists at the Jackson

Laboratory, Maine, USA, found that a type of transposable element (TE), a major class of supposed junk or parasitic DNA,
activates during embryo development in mice.6 In a commentary on this work, Ricky James commented: 'Therefore, more than one third of the mouse and human genomes, previously thought to be non-functional, may play some role in the regulation of gene expression.'7 Note that this non-coding DNA only seems to function during egg and embryo development,
so studying TEs in other cells would not reveal their function. This might explain why the functions of non-coding DNA have
been so elusive. These developments underline, once again, how evolutionary premises impede the progress of science. In
the past, evolutionary notions led to over 100 human features being labeled vestigial, or leftovers of our supposed animal
ancestry.8 This was based on the similarity of these features to ones found in animals, combined with the lack of knowledge
about what the organs did. The lack of logic is astonishing: since we don't know what the organs do, they must be useless. The same evolutionary logic has been applied to the DNA: we don't know what most of it does, so it must do nothing. So it is labelled junk, pseudogenes, parasitic, retroviral inserts, etc.

Thankfully, not everyone bought this idea. In the late 1980s, New Zealand-born Australian immunologist Malcolm Simons recognized patterns, or order, in the non-coding DNA
that indicated to him that the code must have a function, but others ridiculed the idea. 9 In the mid-1990s, he patented the
non-coding DNA (95%) of all organisms on Earth. The company he founded, Genetic Technologies, now reaps licence fees
from all technologies being developed to cure disease that involve the non-coding DNA. It's quite controversial, of course, paying such licence fees. And since factors involved in all sorts of diseases, such as breast cancer, Crohn's disease, Alzheimer's, heart disease, ovarian and skin cancer, are being found in the junk, Genetic Technologies is doing quite well.10 There's much gold to be mined from the junk, it would seem.

Leading geneticist Prof. John Mattick of the University of
Queensland in Brisbane, Australia, has proposed that the non-coding DNA was part of a sophisticated operating system,
with ample justification.11,12 Some critics rejected this on the grounds that such a system could not have evolved! Mattick
recently said that the failure to recognise the implications of the non-coding DNA will go down as the biggest mistake in the
history of molecular biology.9 This mistake can be attributed to an evolutionary approach to biology. Creationists have long
argued that junk DNA is nothing of the sort. For example, Carl Wieland, Creation Ministries International (Australia), wrote,
Creationists have long suspected that this junk DNA will turn out to have a function. 13 Although there might be
a small amount of non-functional DNA due to damaging mutations that have occurred, it is inconceivable that most of the
human DNA would be created as having no function.
Large scale function for endogenous retroviruses
by Shaun Doyle
Endogenous retroviruses (ERVs) are some of the most cited evidences for
evolution. They are part of the suite of junk DNA that supposedly comprised the
vast majority of our DNA. ERVs are said to be parasitic retroviral DNA
sequences that infected our genome long ago and have stayed there ever
since. These short DNA strands are found throughout the human genome, and
make up about 5% of the DNA,1 or about 10% of the total amount of DNA that is
classified as transposable elements (i.e. 50%).2 However, the term endogenous
retrovirus is a bit of a misnomer. There are numerous instances where small
transposable elements thought to be endogenous retroviruses have been found
to have functions, which invalidates the random retrovirus insertion claim. For
instance, studies of embryo development in mice suggest that transposable
elements (of which ERVs are a subset) control embryo development.
Transposable elements seem to be involved in controlling the sequence and
level of gene expression during development, by moving to/from the sites of
gene control.3 Moreover, researchers have recently identified an important
function for a large proportion of the human genome that has been labelled as
ERVs. They act as promoters, starting transcription at alternative starting points,
which enables different RNA transcripts to be formed from the same DNA
sequence: 'We report the existence of 51,197 ERV-derived promoter sequences
that initiate transcription within the human genome, including 1,743 cases where
transcription is initiated from ERV sequences that are located in gene proximal
promoter or 5′ untranslated regions (UTRs).'4
And,
'Our analysis revealed that retroviral sequences in the human genome encode
tens-of-thousands of active promoters; transcribed ERV sequences correspond to 1.16% of the human genome sequence
and PET tags that capture transcripts initiated from ERVs cover 22.4% of the genome.'5 So we're not just talking about a
small scale phenomenon. These ERVs aid transcription in over one fifth of the human genome! These data illustrate the
potential of retroviral sequences to regulate human transcription on a large scale consistent with a substantial effect of ERVs
on the function and evolution of the human genome. 3 This again debunks the idea that 98% of the human genome is junk,
and it makes the inserted evolutionary spin look like a tacked-on nod to the evolutionary establishment. These results
support the conclusions of the ENCODE project, which found that at least 93% of DNA was transcribed into
RNA.Evolutionists have used shared mistakes in junk DNA as proof that humans and chimps have a common ancestor.
However, if the similar sequences are functional, which they are progressively proving to be, their argument evaporates.It
seems that evolutionist Dr John Mattick, director of the Institute for Molecular Bioscience at the University of Queensland,
Brisbane, Australia, was spot on in his assessment of the gravity of the 'junk DNA' error: 'The failure to recognize the full implications of this, particularly the possibility that the intervening noncoding sequences may be transmitting parallel information, may well go down as one of the biggest mistakes in the history of molecular biology.'6

Both creationists7 and ID proponents8 predicted that transposable elements, such as endogenous retroviruses, would have a function. In 2000, creationist molecular biologist Linda Walkup proposed that transposable elements could be created to facilitate variation (adaptation) within the created kinds.7

If the 'junk' DNA is not junk, then it puts a big spanner in the works of molecular taxonomists, who assumed that junk DNA was free to mutate at random, unconstrained by the requirements of functionality. As Williams points out: 'The molecular taxonomists, who have been drawing up evolutionary histories (phylogenies) for nearly every kind of life, are going to have to undo all their years of junk-DNA-based historical reconstructions and wait for the full implications to emerge before they try again.'9
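For readers who like to check figures, here is a quick back-of-the-envelope calculation of the proportions quoted above, taking the published percentages at face value. The variable names are invented for illustration only.

```python
# Sanity check of the proportions quoted in this article (figures taken at face value).
erv_fraction = 0.05          # ERVs as a share of the whole genome (about 5%)
te_fraction = 0.50           # all transposable elements as a share of the genome (about 50%)
print(erv_fraction / te_fraction)    # 0.1 -> ERVs are about 10% of the transposable-element DNA

pet_coverage = 0.224         # share of the genome covered by transcripts initiated from ERVs
print(pet_coverage / erv_fraction)   # ~4.5 -> ERV-initiated transcripts reach well beyond the ERVs themselves
```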

Hox (homeobox) Genes: Evolution's Saviour?


by Don Batten
Some evolutionists hailed homeobox or hox genes as the saviour of evolution soon after they were discovered. They
seemed to fit into the Gouldian mode of evolution (punctuated equilibrium) because a small mutation in a hox gene could
have profound effects on an organism. However, further research has not borne out the evolutionists' hopes. Dr Christian Schwabe, the non-creationist sceptic of Darwinian evolution from the Medical University of South Carolina (Dept of Biochemistry and Molecular Biology), wrote:

'Control genes like homeotic genes may be the target of mutations that would conceivably change phenotypes, but one must remember that, the more central one makes changes in a complex system, the more severe the peripheral consequences become. Homeotic changes induced in Drosophila genes have led only to monstrosities, and most experimenters do not expect to see a bee arise from their Drosophila constructs.' (Mini Review: Schwabe, C., 1994. Theoretical limitations of molecular phylogenetics and the evolution of relaxins. Comp. Biochem. Physiol. 107B:167-177.)

Research in the six years since Schwabe wrote this has only borne out his statement. Changes to homeotic genes cause monstrosities (two heads, a leg where an eye should be, etc.); they do not change an amphibian into a reptile, for example. And the mutations do not add any information; they just cause existing information to be mis-directed to produce a fruit-fly leg on the fruit-fly head instead of on the correct body segment, for example.

Evolutionists, of course, use the ubiquity of hox genes in their argument for common ancestry ('Look, all these creatures share these genes, so all creatures must have had a common ancestor'). However, commonality of such features is to be expected with their origin from the same (supremely) intelligent designer. All such homology arguments are only arguments for evolution when one excludes, a priori, origins by design. Indeed, many of the patterns we see do not fit common ancestry. For example, the discontinuity of distribution of hemoglobin-like proteins, which are found in a few bacteria, molluscs, insects, and vertebrates. One could also note features such as vivipary, thermoregulation (some fish and mammals), eye designs, etc. For more detail, see The Biotic Message by Walter ReMine (see also the review by Dr Don Batten).
Hox Hype
Has Macro-evolution Been Proven?
By David A. DeWitt, Ph.D
Associate Professor of Biology, and Associate Director, Creation Studies at Liberty University
From the hype of the press release, it would seem that evolution was finally proven once and for all and the creationists
should just give up and go home. But far from refuting creation, the scientific evidence is completely consistent with
creation! The press release from UCSD said in part: Biologists at the University of California, San Diego have uncovered the
first genetic evidence that explains how large-scale alterations to body plans were accomplished during the early evolution
of animals. The achievement is a landmark in evolutionary biology, not only because it shows how new animal body plans
could arise from a simple genetic mutation, but because it effectively answers a major criticism creationists had long leveled
against evolution: the absence of a genetic mechanism that could permit animals to introduce radical new body designs. Evolutionary biologists believe that the six-legged insect body plan evolved from crustacean-like ancestors (including creatures like shrimp) that lost the large number of legs.1 Such a radical change would require mutation(s) that result in the suppression of leg development. McGinnis and coworkers believed that they had found the mutation and the gene responsible for this change. However, careful examination of their efforts reveals that the situation is much more complicated. The scientists were investigating Ubx, a Hox gene which suppresses leg development in flies. Hox genes are
master control switches that control the body plan. Specific Hox genes may control where the head forms, where limbs
form, or a tail or even wings. These master switches work like circuit breakers and either turn on or turn off an array of other
genes. Hox genes can be expressed in abnormal locations and either prevent development of structures or promote their
development in very unusual places. For example, Pax-6 expression controls the development of eyes. A fly with abnormal expression could form an eye on a leg, the antenna, or even the abdomen.2 The researchers found that the Ubx gene from a fly
completely prevented leg development while the same gene from Artemia, a brine shrimp, only suppressed leg development
15%. They then mutated the Artemia Ubx gene and found that this version was much more effective at blocking leg
formation. They postulated that such a mutation probably occurred in the crustaceans that were the ancestors of six-legged
insects.3 The fact that scientists can significantly alter the body plan does not prove macro-evolution, nor does it refute creation. Successful macro-evolution requires the addition of NEW information and NEW genes that produce NEW proteins that are found in NEW organs and systems.
For example, a single mutation that might prevent legs from forming is much different from a mutation that produces legs in
the first place. Making a leg would require a large number of different genes present simultaneously. Moreover, where do
the wings come from? Just because an organism loses a few legs doesn't convert a shrimp-like creature into a fly. Since crustaceans don't have wings, where does the information come from to make wings in flies? Having the wings themselves is not even enough. Researchers in another study have found that the subcellular location of metabolic enzymes is important for the functional muscle contraction required for flight.4 Indeed, the metabolic enzymes must be in very close proximity with the cytoskeletal proteins that are involved in muscle contraction. If the enzymes are not in the exact location in which they are needed within the cell, the flies cannot fly. This study bears out the fact that the presence of active enzymes in the cell is not sufficient for muscle function; colocalization of the enzymes is required. It also requires a highly organized cellular system. Therefore, changes in body plan, no matter how dramatic, do not automatically prove
macro-evolution. Losing structures, or misplacing their development, should not be equated with the increased information
that is needed to form novel structures and cellular systems.
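To picture the 'master switch' role described above in programming terms, here is a deliberately simple toy sketch. The gene names and the on/off logic are invented for illustration; real Hox regulation is far more involved. The point it illustrates is the article's own: flipping or mis-wiring an existing switch re-routes existing instructions, but writes no new ones.

```python
# A toy illustration (not a biological model) of the 'master switch' idea: a
# Hox-like regulator only turns an existing set of downstream genes on or off
# in a given body segment; it contains no instructions for building new organs.

DOWNSTREAM_LEG_GENES = ["legA", "legB", "legC"]   # hypothetical gene names

def expressed_genes(segment, ubx_active):
    """Return which leg-building genes are switched on in a body segment."""
    # Ubx acts like a circuit breaker: when active it suppresses the leg programme.
    if ubx_active:
        return []
    return DOWNSTREAM_LEG_GENES

# Normal thorax segment: switch off, legs develop.
print(expressed_genes("thorax", ubx_active=False))   # ['legA', 'legB', 'legC']

# Mis-expressed switch: the same existing programme is merely blocked or
# re-routed; no new genetic information for wings, eyes, etc. is created.
print(expressed_genes("thorax", ubx_active=True))    # []
```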

HOW DOES GENETICS POINT TO DESIGN


Cell systems: what's really under the hood continues to drop jaws
By Brian Thomas
Two 2009 papers summarized recent discoveries of utterly unforeseen intricacy, adaptability, robustness and precision in
regulating gene expression, even in simple cells.
Gene expression in eukaryotic cells
I conservatively counted 24 recently discovered mechanisms that help regulate gene expression in eukaryotic cells, as
reviewed by Moore and Proudfoot.1 Here are just a few of them.
Figure 1. Widely regarded as the simplest genome, Mycoplasma gene expression is instead far more complicated than expected. It performs functions that had been considered the sole domain of higher eukaryotes. For example, DNA is transcribed in both the sense and antisense directions, indicating that valuable genetic information is double-stacked. RNA transcripts undergo post-translational modifications, single enzymes have more than one application, and when certain metabolic breakdowns occur, the cell is able to formulate a workaround solution. Illustration after sciencemag.org.

Chromatin is not loosely wadded DNA inside cellular nuclei. Instead, it is
very precisely organized, with specific portions
dynamically looped outward. Each loop is
associated with a separate nuclear pore, and
can retract to a storage position when
appropriate. Robust and efficient machinery
ensures that the correct portions of chromatin
are unspooled from nearer the center of the
nucleus to an appropriate nuclear pore. Each pore is extremely active, with a host of interacting regulatory RNAs, proteins,
and ribonucleoproteins.2 These send and receive communications from and toward the farthest ends of the RNA and protein
manufacturing processes. RNA polymerase does not typically transcribe DNA in fluid space, but is attached to a cadre of
proteins associated with each nuclear pore. This way, the rapidly emerging RNA transcript is already proximal to the pore,
through which much of it will exit to the cytoplasm. Further, cell biologists have determined that the first copy of a transcript
is like a practice run. This first, rough-draft RNA transcript serves as a quality control run (so that its integrity is ensured prior to full manufacture and export from the nucleus), as a primer for the total set of transcript-processing machinery to be properly set, as a chemical communicator providing information to downstream processes, or as all three.
Warming up for transcription
In addition, extracellular messages are transferred from the cell membrane to the nuclear pore sites via biochemical
cascades, and these influence whether or not a gene region will switch from being transcribed into these rough abortive
transcripts, or into full-length, properly marked and exported transcripts. It appears that transcription machinery is constantly
transcribing in an idle mode, but when the correct switches are tripped, the machinery fully engages. In full production
mode, RNA transcripts often become marked for translation to proteins. Some of the switching messengers are proteins that
are temporarily restrained by other proteins, which in turn can release them upon detection of certain cell signals carried by
yet more precisely interacting biochemicals. For example, even sugar moieties riding on proteins have been found to act as
a safety switch that regulates the microswitches which fine tune protein expression during cell division.3
Full-on eukaryotic transcription runs super-fast
When all systems are go, transcription proceeds with fully processive elongation of the full body of the gene. 1 Inside the
nucleus, the relevant DNA is pulled, like a loop of magnetic tape, across a nuclear pore. Some of the proteins involved in
this action are named Set1PAF, Spt6, FACT, Chd1, along with other histone proteins. This way, the emerging transcript is
under the constant watchful attention of a wide array of sensory, quality control, marking, and transporting machinery, all
kept near the pore by precise chemical interactions specified by exactly arranged biomolecular sizes, shapes, charges, and
polarities.It was known that transcripts in eukaryotic cells undergo cut-and-pasting as well as splicing. It is now known that
this occurs simultaneously with manufacture, and requires a separate host of proteins. However, those pre-mRNA splicing
proteins directly interact with the RNA polymerase assemblage, which all works together to react to pause-sites in the gene
it is transcribing. RNA polymerase acts like a molecular juggernaut,1 streaming RNAs out as though through a jet engine. It
must be slowed down in order for cutting and splicing machinery to have opportunity to insert. Since not all DNA pause sites
become RNA cut sites, and since the alternative combinations of cut and spliced mRNA transcripts can specify a wide
variety of regulatory or catalytic RNAs and proteins from just one gene, 4 it is apparent that somehow precise
communication occurs to discern which pause sites will result in cuts. In yeast, a model eukaryote, the THO/TREX protein
complex serves three roles: one in transcription, one in transcript-dependent recombination, and one in mRNA
export.1 And it does these while in constant communication with machine parts that are involved in transcript initiation as
well as parts involved in slowing and stopping transcription. It is therefore one of many proteins and protein complexes that
are being discovered with multiple functions, a clear sign of elegant engineering.
Process flow management in translation
The emerging RNA transcript then gets labeled with specific protein markers. The markers had already been gathered to the
nuclear pore site, and are presented to the nascent transcript just inside the nucleus. The immediacy of labeling is thus vital. It guards against the dangers of having naked RNAs in the nucleus, as described below. The markers, too, serve multiple purposes. The more splices in the transcript, the more markers are attached, and this eventually causes more efficient translation, because a transcript thus bedecked is more likely to have some surface exposed to cytoplasmic proteins vital to translation. The markers also signal watchdog nuclear pore proteins to expedite the transcript's export. These same

watchdog proteins also serve to prevent naked transcripts from re-entering the nucleus. This is vital, for bits of RNA naturally
anneal to unzipped DNA. If this happened, it would quickly create havoc in the nucleus by both generating mutations and
gumming up the many nuclear processes that depend on accurate DNA recognition, clamping, spooling, unwinding, and
other processes. After export, the cytoplasmic machinery links each transcript to other machines. Some of these shepherd
the transcript toward a ribosome. Each time a transcript has been thus shepherded, some of its markers are removed, with
most being lost after its first round of translation. Eventually the transcript becomes naked and difficult for translational
machinery to detect, and subject to degradation. In this way, the freshest and highest quality transcripts are by far most
translated by the ribosome.
Eukaryotic gene expression is astonishing
Effective quality control mechanisms constantly cull corrupt transcripts. For example, if a transcript did not have the correct
signal sequence attached when it was first formed, due to gene mutation or an error in processing, the compromised
molecule would have been recognized immediately at the nuclear pore, and degraded by RNase enzymes. This ensures
that downstream processes are not gummed up with useless transcripts. Quality control is critical to forming the correct
products in the needed amounts, and at appropriate paces. Other systems produce a stockpile of quality transcripts in strategic pockets within the cytoplasm. This way, there can be a tightly controlled burst of the desired [protein] product.1 There is no indication that the discovery pace of more mind-bogglingly brilliant cell processes will slow down anytime soon. If none of the above made sense, then let the reader be edified by the glowing research summary: At every
point along the way, multifunctional proteins and [ribonucleoprotein] complexes facilitate communication between upstream
and downstream steps, providing both feedforward and feedback information essential for proper coordination of what can
only be described as an intricate and astonishing web of regulation.1
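As a rough programmer's analogy for the quality-control and marker flow just described, the sketch below models transcripts as records that are either culled at the pore or exported with markers that are stripped with each round of translation. Every name and number is invented; it illustrates the control logic, not any specific molecular pathway.

```python
# A purely illustrative sketch of the transcript handling described above:
# transcripts lacking the correct signal are degraded at the pore, exported
# transcripts carry markers, and markers are stripped with each round of
# translation until the 'naked' transcript is degraded.

from dataclasses import dataclass

@dataclass
class Transcript:
    name: str
    has_signal: bool   # did processing attach the correct signal sequence?
    markers: int       # protein markers attached at the nuclear pore

def nuclear_pore_check(t: Transcript):
    """Quality control at the pore: corrupt transcripts are culled."""
    if not t.has_signal:
        return None           # degraded by RNases; never exported
    return t                  # exported to the cytoplasm

def translate_rounds(t: Transcript):
    """Each round of translation strips markers; naked transcripts are degraded."""
    rounds = 0
    while t.markers > 0:
        rounds += 1
        t.markers -= 1        # markers lost after each round of translation
    return rounds             # thereafter the transcript is hard to detect and is degraded

good = nuclear_pore_check(Transcript("mRNA-1", has_signal=True, markers=3))
bad = nuclear_pore_check(Transcript("mRNA-2", has_signal=False, markers=3))
print(bad)                    # None: culled before export
print(translate_rounds(good)) # 3 rounds of translation, then degradation
```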
The simple Mycoplasma
The bacterium Mycoplasma pneumoniae, long considered the simplest prokaryote, can no longer be described thus. It is a parasite (it causes 'walking pneumonia') with a reduced genome size. It relies on its host for certain nutrients that its ancestors apparently were able to manufacture. Thus, it has undergone significant genomic decay.
How Mycoplasma bacteria really work
The authors of a paper in Science endeavored to investigate how a cell actually accomplishes necessary processes using
the most basic subject of study.5 But they ran into a juggernaut of layered information-rich complexity that inspired their
assessment: Together, these findings suggest the presence of a highly structured, multifaceted regulatory machinery, which is unexpected because bacteria with small genomes contain relatively few transcription factors, revealing that there is no such thing as a simple bacterium.5 Specifically, evolutionists Ochman and Raghavan cited research which found that, in many cases where the sense strand of a protein-coding gene is transcribed, the complementary or anti-sense strand is also transcribed.
The resulting sense mRNA is eventually translated to protein, and the resulting antisense mRNA binds to the sense mRNA
to make a double stranded RNA. This slows its path toward translation, and is thus an important speed regulator. This was
previously only known to occur in eukaryotes.
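The sense/anti-sense regulation just described can be pictured with a few lines of code: transcribing the opposite strand yields the reverse complement, which can base-pair with the sense transcript to form double-stranded RNA. The sequence below is invented; only the complementarity rule is real.

```python
# A minimal sketch of sense/anti-sense pairing: the anti-sense transcript is the
# reverse complement of the sense transcript, so the two can pair into
# double-stranded RNA, which (as described above) slows the sense transcript's
# progress toward translation. Illustrative only.

COMPLEMENT = {"A": "U", "U": "A", "G": "C", "C": "G"}

def reverse_complement(rna: str) -> str:
    """Return the reverse complement of an RNA sequence."""
    return "".join(COMPLEMENT[base] for base in reversed(rna))

sense = "AUGGCUUCA"                      # hypothetical sense transcript
antisense = reverse_complement(sense)    # what transcribing the other strand yields

# If both are present they can pair along their full length:
pairs = all(COMPLEMENT[s] == a for s, a in zip(sense, reversed(antisense)))
print(antisense)   # UGAAGCCAU
print(pairs)       # True: a duplex forms, delaying translation of the sense strand
```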
Mycoplasma cells have eukaryotic complexity
In other experiments, different environmental growth conditions caused different lengths and segments of genomic DNA to
become transcribed. This implies a suite of chemical communication cascades from the cell wall inward, as well as the
ability to make alternate products from one gene. This, too, was a surprise; it was previously known only in eukaryotes. Like eukaryotic cells,
these simplest among prokaryotes have multifunctional proteins which can be used in different metabolic pathways as
backup machines. Other data strongly suggests that newly manufactured proteins can be altered by other cellular
machinery. Termed post-translational modification, this was taught dogmatically in my 1998 graduate biochemistry courses
as exclusive to eukaryotes. Also shocking was the discovery that over 90% of Mycoplasma proteins are involved in protein
complexes, again like eukaryotes. Another genome-wide survey found indirect evidence of tight gene expression regulation,
but nobody yet knows the mechanism for it. They finally argue that, because Mycoplasma is still alive even after such
reduction in quality and quantity of its genome, it must have an underlying eukaryote-like cellular organization replete with
intricate regulatory networks and innovative pathways.5
Where did Mycoplasma get all this in the first place?
These authors then bravely ask, 'How did these remarkable layers of gene regulation and the highly promiscuous [multifunctional] behavior of proteins in M. pneumoniae arise?'4 But they instead explain that 'the reduced efficacy of selection that operates on the genomes of host-dependent bacteria … reductions in long-term effective population size [from the bottleneck that occurred when the bacteria first became host-dependent, and] … the accumulation and fixation of deleterious mutations in seemingly beneficial genes due to genetic drift, [together cause a] reducing genome size.'5 If
selection, bottlenecks, and mutations only reduced the genome, then these processes are no help at all. What in nature expanded the genome with the ingeniously useful data that the remarkably robust, yet genomically truncated, Mycoplasma still retains in abundance?
Conclusion
At every level, scientists have uncovered more information. That information takes the form of three-dimensional shapes,
electronic and charge configurations, as well as raw coding sequence information. Communication pathways, routines and
subroutines, prioritizing, quality control, and process regulation plans are all stunningly effective and strikingly small. More in-depth knowledge of these fantastically complicated cell features demands greater faith from naturalists in the belief that laws of chemistry built cells. The more informational structures that are found, the greater the gap between the organization in living system parts and the disorganization found in nonliving chemicals. A reminder of some inferences about information would seem appropriate here. First, wherever precisely regulated processes, expertly engineered machines, and codes are seen coming into existence, they always come from persons. Stated negatively, these machines and codes are never observed to originate from natural laws. Therefore, it is most parsimonious to infer that wherever similar machines, processes, and codes are found, they, too, were not derived by nature, but instead by a person or persons. Second, like spoken languages, biological language is irreducibly complex and yet without physical substance. It comes
complete with symbols, meanings for those symbols, and a grammatical structure for their interpretation. Remove any one
of these three fundamental features, and the informational system is lost. Physics has nothing to do with symbols or
grammar, and therefore nothing to do with the origin of life, which cannot exist without its coded information.6
If run-of-the-mill information always comes from a mind, then this cellular information, being extraordinary, came from a
mastermind.

Meta-information
An impossible conundrum for evolution
by Alex Williams
Published: 30 August 2007(GMT+10)
Cell division, once thought of as a fairly simple thing, is now known to be an incredibly complex, orchestrated affair that shouts intelligent design.
New genetic information?
Evolutionists have never been able to give a satisfactory answer
to the problem of where the new information comes from that
evolution requires for turning a microbe into a myxomycete or a
maze-mastering mammal. Their best guess is gene duplication
(which gives them an extra length of DNA, but it contains no new
information) followed by random mutations that are supposed to
turn the duplicated information into something new and
useful. They have no direct experimental evidence for this claim (and there is much against it1), so they have to rely on indirect
evidence such as the so-called gene families. Some genes are
similar in both structure and function to other genes, and
evolutionists point to these and say they originated by chance
copying and mutation from some common ancestral gene. But
this is just evolutionary speculation; it is not experimental
evidence. The globin gene family is a favourite example.
Hemoglobin carries oxygen in our blood and can be made up of
different combinations of different kinds of globin proteins. For
example, hemoglobin in human fetal blood contains a different
combination of globins to that in post-natal blood. Evolutionists claim this resulted from an original globin molecule that
duplicated in an early blood-using animal and mutated to form a family of different kinds of globins, which then allowed the
diversification of complexity in oxygen-using processes that we see in the animal world today.2 But this example is far better
explained by intelligent design.3 The human baby in his or her mother's womb has to compete for the oxygen in its mother's blood supply with the demand for oxygen from two other sources: the placenta that feeds it, and the mother's womb that surrounds it. So the fetal hemoglobin has to have, amongst other things, a higher affinity for oxygen than the mother's
hemoglobin. In contrast, when the baby is born and can draw oxygen from the air in its own lungs, it no longer has any
competition so it requires a different kind of oxygen-uptake system. A wise designer would ensure that the hemoglobin could
change its form and function to cater for these very different conditions, and integrate this change into the other vast
complexities of the almost miraculous reproductive process. The idea that such complex interactive changes could all occur
by chance is rather hard to accept.
Information about Information
But the problem of information origin in biology is far bigger than most people realize. Information by itself is useless unless
the cell knows how to use it. Evolution not only requires new information, it also requires extra new information about how to
use that new information. Information about information is called meta-information. We can see how it works in making a cake. If you want to make a cake, you need a recipe that contains: (a) a list of ingredients, and (b) instructions on how to mix and cook the ingredients to produce the desired outcome. The list of ingredients is the primary information, and the instructions on what to do with the ingredients is the meta-information (see the sketch at the end of this section).

The human genome contains an enormous amount of information, far more than we ever (until recently) imagined.4 But we now know that most of it is not primary information (protein-coding genes) but meta-information: the information that cells need in order to turn those protein-coding genes into a functional human being and to maintain and reproduce that functional being. This meta-information is stored and used in a variety of ways:

DNA consists of a double helix: two long-chain molecules twisted around one another. Each strand consists of a chain of four different kinds of nucleotide molecules (the shorthand symbols are T, A, G and C). About 3% of this in humans consists of protein-coding genes and the other 97% appears to be regulatory meta-information.

DNA is an information-storage molecule, like a closed book. This stored information is put to use by being copied onto RNA molecules, and the RNA molecules put the DNA information into action in the cell. For every molecule of protein-producing RNA (primary information), there are about 50 molecules of regulatory RNA (meta-information).

Down the sides of the DNA double helix, several different kinds of chemical chains are attached in patterns that code meta-information for turning unspecialized embryonic stem cells into the specialized cells that are needed in fingers, feet, toenails, tail-bones, etc.

DNA is a very long, thin molecule. If we unwound one set of human chromosomes, the DNA would be about 2 metres long. To pack it up into the very tiny nucleus inside the very tiny human cell, it is coiled up in four different levels of chromatin structure into 46 chromosomes. This coiling chromatin structure also contains yet further levels of meta-information. The first level (the 'histone code') codes information about the cell's history (i.e. it is a cell memory).5,6 The three further levels of coiling code further information, some of which is described below, and there is no doubt more that we have yet to unravel.

The amount of meta-information in the human genome is thus truly enormous compared with the amount of primary gene-coding information.
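For readers who think in code, the cake analogy above can be put this way: the ingredient list is the data (primary information), and the procedure that knows what to do with that data is the meta-information. Everything in the sketch is invented for illustration.

```python
# A minimal sketch of the primary-information vs meta-information distinction,
# using the cake analogy. Neither part models any real genetic mechanism.

ingredients = {"flour_g": 300, "sugar_g": 150, "eggs": 3, "butter_g": 100}  # primary information

def bake(ingredients):
    """Meta-information: instructions that only make sense applied to these ingredients."""
    batter = (f"mix {ingredients['flour_g']} g flour, {ingredients['sugar_g']} g sugar, "
              f"{ingredients['eggs']} eggs, {ingredients['butter_g']} g butter")
    return batter + "; bake at 180 C for 35 minutes"

print(bake(ingredients))

# The article's point: the instructions are useless (indeed meaningless) without
# the specific ingredients they refer to. Applying bake() to unrelated data fails:
try:
    bake({"plastic_toy_parts": 12})
except KeyError as missing:
    print("instructions do not apply to unrelated 'ingredients':", missing)
```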
Self-replicating molecules?
In his monumental work, The Ancestor's Tale,7 Richard Dawkins traced the supposed ancestry of humanity back through all
the evolutionary ages to the very first supposed common ancestor of all life. He supposed this original ancestor to have
been an RNA-type of life form, although he admitted ignorance of the precise details.8 His choice of an original RNA life form
is well-founded because RNA is the only known molecule that can do all of the three basic functions of life: (a) store coded
information, (b) combine with itself and other RNAs to create molecular machines, and (c) self-replicate (but only in a very
limited manner under very special circumstances). However, recent studies showing how living cells actually replicate have made this 'RNA world' concept ludicrously unrealistic. A central problem in cell division (that is, what living cells actually do, as opposed to Dawkins' imagined self-replication) is that a large proportion of the whole genome is required for the normal
operation of the cell: probably at least 50% in unspecialized body cells and up to 70-80% in complex liver and brain cells.
When it comes time for a cell to divide, not only does the DNA have to continue to sustain normal cell operations, it also has
to sustain the extra activity associated with cell division.

This creates a huge logistics problem: how to avoid clashes between the transcription machinery (which needs to continually copy information for ongoing use in the cell) and the replication machinery (which needs to unzip the whole of the DNA double helix and replicate a zipped copy back onto each of the separated strands). The cell's solution to this logistics nightmare is truly astonishing.9 Replication does not begin at any one point, but at thousands of different points. But of these thousands of potential start points, only a subset are used in any one cell cycle; different subsets are used at different times and places. Can you see how this might solve the logistics problem? A full understanding is yet to emerge because the system is so complex; however, some progress has been made (a sketch of the scheduling logic described here follows at the end of this list):

The large set of potential replication start sites is not essential, but optional. In early embryogenesis, for example, before any transcription begins, the whole genome replicates numerous times without any reference to the special set of potential start sites.

The pattern of replication in the late embryo and adult is tissue-specific. This suggests that cells in a particular tissue cooperate by coordinating replication so that while part of the DNA in one cell is being replicated, the corresponding part in a neighbouring cell is being transcribed. Transcripts can thus be shared so that normal functions can be maintained throughout the tissue while different parts of the DNA are being replicated.

DNA that is transcribed early in the cell division cycle is also replicated in the early stage (but the transcription and replication machines are carefully kept apart). The early-transcribed DNA is that which is needed most often in cell function. The correlation between transcription and replication in this early phase allows the cell to minimize the downtime in transcription of the most urgent supplies while replication takes place.

There is a pecking order of control. Preparation for replication may take place at thousands of different locations, but once replication does begin at a particular site, it suppresses replication at nearby sites so that only one copy of the DNA is made. If transcription happens to occur nearby, replication is suppressed until transcription is completed. This clearly demonstrates that keeping the cell alive and functioning properly takes precedence over cell division.

There is a built-in error correction system called the cell-cycle checkpoints. If replication proceeds without any problems, correction is not needed. However, if too many replication events occur at once, the potential for conflict between transcription and replication increases, and/or it may indicate that some replicators have stalled because of errors. Once the threshold number is exceeded, the checkpoint system is activated, the whole process is slowed down, and errors are corrected. If too much damage occurs, the daughter cells will be mutant, or the cell's self-destruct mechanism (the apoptosome) will be activated to dismantle the cell and recycle its components.

An obvious benefit of the pattern of replication initiation never being the same from one cell division to the next is that it minimizes the effect of any errors that are not corrected.
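As flagged above, the scheduling logic just listed (many potential origins, a different subset firing each cycle, suppression of neighbours, deference to nearby transcription, and a checkpoint that slows things down) can be caricatured in a few lines of code. All numbers and names below are invented; this is a toy illustration of the logistics, not a biological model.

```python
# A toy sketch of the replication-scheduling logic described above.
import random

N_ORIGINS = 1000          # potential start sites along the chromosome (hypothetical)
NEIGHBOUR_RADIUS = 3      # a firing origin suppresses origins this close
CHECKPOINT_LIMIT = 50     # max origins allowed to fire at once (hypothetical)

def plan_replication(transcribing_sites):
    """Choose which origins fire this cell cycle."""
    fired, suppressed = [], set()
    for origin in random.sample(range(N_ORIGINS), k=N_ORIGINS):   # a different subset each cycle
        if origin in suppressed:
            continue
        if any(abs(origin - t) <= NEIGHBOUR_RADIUS for t in transcribing_sites):
            continue                      # transcription nearby: replication waits
        if len(fired) >= CHECKPOINT_LIMIT:
            break                         # checkpoint: slow down, fire no more origins
        fired.append(origin)
        for near in range(origin - NEIGHBOUR_RADIUS, origin + NEIGHBOUR_RADIUS + 1):
            suppressed.add(near)          # only one copy made per region
    return fired

busy_with_transcription = {10, 11, 12, 500}
print(len(plan_replication(busy_with_transcription)))   # at most CHECKPOINT_LIMIT origins fire
```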
The impossible conundrum
Now comes the impossible conundrum. Keeping in mind the cake analogy, let's recall that the vast majority of information in humans is not ingredient-level information (code for proteins) but meta-information: instructions for using the ingredients to make, maintain and reproduce functional human beings. Evolutionists say that all this information arose by random mutations, but this is not possible. Random events are, by definition, independent of one another. But meta-information is, by definition, totally dependent upon the information to which it relates. It would be quite nonsensical to take the cooking instructions for making a cake and apply them to the assembly of, say, a child's plastic toy (if nothing else, the baking stage would reduce the toy to a mangled mess). Cake-cooking instructions only have meaning when applied to cake-making ingredients. So too, the logistics solution to the cell division problem is only relevant to the problem of cell division. If we applied the logistics solution to the problem of mate attraction via pheromones (scent) in moths, it would not work. All the vast amount of meta-information in human beings only has meaning when applied to the gene content of the human genome.

Even if we granted that the first biological information came into existence by a random process, the meta-information needed to use that information could not come into existence by the same random (independent) process, because meta-information is inextricably dependent upon the information that it relates to. There is thus no possible random (mutation) solution to this conundrum. Can natural selection save the day? No. There are at least 100 (and probably many more) bits of meta-information in the human genome for every bit of primary (protein-coding gene) information. An organism that has to manufacture, maintain, and drag around with it a mountain of useless information, while waiting for a chance correlation of relevance to occur so that something useful can happen, is an organism that natural selection is going to select against, not favour! Moreover, an organism that can survive long enough to accumulate a mountain of useless information is an organism that does not need useless information; it must already have all the information it needs to survive! What kind of organism already has all the information it needs to survive? There is only one answer: an organism that was designed in the beginning with all that it needs to survive.
Splicing and dicing the human genome
Scientists begin to unravel the splicing code
by Robert W. Carter
Published: 1 July 2010(GMT+10)
What separates the genomes of simple organisms like sea anemones and jellyfish from
humans? Humans have approximately the same number of protein coding genes as these
lowly creatures,1 yet we are much more complex organisms. Ignoring the spiritual aspects
of humanity, this complexity difference must be coded within our genomes, but where?
Since we share many genes with many simpler organisms, the answer does not lie in
gene content alone. Rather, the differences are in the non-coding portions of the genome
(the so-called 'junk' DNA2) and in the way the genes are used to create proteins. Several
decades ago, the one gene-one enzyme hypothesis was in vogue. It seemed
straightforward that a single protein gene coded for a single protein. In prokaryotic organisms (bacteria), this was easy to
show. The known bacterial genes had a defined starting and stopping place and the DNA letters in between spelled out a
discrete amino acid sequence. The eukaryotes (organisms with a nucleus; everything from yeast, to plants, to humans) do
not have a simple gene structure. Our protein genes are broken up into a series of exons (the parts that code for protein)
and introns (non-coding intervening sequences). To make a protein, the gene is first transcribed into RNA, then the introns
are spliced out, the exons are stitched together, and the remainder is translated into protein. Even though complex, the one
gene-one enzyme hypothesis was still applied to eukaryotic protein genes. Over time, however, it was realized that life was
not so simple, especially for the eukaryotes. The one gene-one enzyme hypothesis was particularly troubling for the higher
(more complex) eukaryotes. For example, the approximately 20,000-25,000 protein-coding genes in the human
genome3 are used to create 100,000-300,000 distinct proteins (the actual number is uncertain). The low number of genes in
the human genome was troubling for several reasons. 4 First, this means that we did not have that many more genes than
organisms much simpler than us. Second, we needed a way to create many proteins from few genes and nobody knew how
this could be done on such a large scale. And third, the complexity of the genomic computer program ratcheted up to even
more uncomfortable levels for those who thought we arose through random chance. Even before the Human Genome Project5 was complete, we knew that some proteins are manufactured through a process called alternate splicing, where
exons from different locations in the genome are combined to create many different proteins. From the ENCODE
project,6 we learned that alternate splicing is so pervasive that the definition of the word gene is currently under
debate.7 Thus, the one gene-one enzyme hypothesis turned out to be a gross oversimplification. However, the word and the
concept of a gene is so useful that for the rest of this article I will be referring to genes in the classic sense as a
contiguous stretch of DNA with a starting and ending location and a set of introns and exons that could potentially be
transcribed, spliced, and translated into a single protein. Each gene, however, is made of parts that can be recombined with parts from other genes in different locations in the genome to create proteins not coded by any specific gene. Alternate
splicing is a brilliant design concept that allows for a streamlined genetic program that takes up a fraction of the space
compared to a program that coded for each protein independently. But this added complexity comes at a price. It has been
conservatively estimated that each intron adds the same amount of complexity as approximately 30 additional DNA
letters.8 Thus, the mutation target for a gene is increased for each intron added. Consider that the average protein-coding
gene has 7-10 introns and that the total length of introns is often longer than the total length of protein coding DNA, and one
can see why this is a problem. It takes a lot to maintain such a system and the complexity makes it difficult for naturalistic
theories of origins. In fact, a sizeable proportion of human genetic disease has been attributed to mutations within intron-exon splice sites.9 Introns are typically included in the 'junk' DNA category, but they have specific sequences at the head and
tail ends that tell the splicing mechanism where to cut, etc., so they are not without function. (Exons also have splice signals
at their ends. Thus, some of the information for splicing out the introns is found within the protein-coding portion of the
genome. The protein-coding sections code for both protein sequence and splicing patterns at the same time!) The ENCODE project made the significant discovery that nearly all of the genome was turned into RNA at some point in the life of a cell, and that multiple overlapping RNAs were often created from the same stretch of DNA. This was a tremendous blow to junk DNA theorists.10 However, perhaps more importantly, the ENCODE results also documented an amazing amount of alternate
splicing. So, here we were, knowing that a huge portion of the genome is active and that the protein-coding portions were
being used in complex combinations, but we still did not know how it all came together. Because of this, scientists have
been looking for a splicing code within the genome that controls the slicing and dicing of the protein genes. This splicing
code must account for 1) the complex combinations of exons needed to create hundreds of thousands of proteins from tens
of thousands of protein genes, 2) the variation in splicing from cell to cell needed to account for the different proteins
expressed in different cell types, and 3) changes in splicing patterns over time as the organism proceeds from fertilized egg
to adult (since not all genes are active at all stages in the life cycle). All this information must be coded in the genome, but it
also cannot interfere with the protein-coding domains. Thus, most of this information must reside within the introns and in
the spaces between genes. A paper recently appeared in Nature where the authors claimed to have discovered the
beginning of the splicing code. What they found is a marvel of complexity. Science labs across the world have been
generating tremendous amounts of data and they were able to capitalize on this new knowledge in a massive data mining
exercise. Specifically, vast databases have been compiled that tell us which genes are active in different cell lines and at
different stages of development. We also know of many DNA-binding factors and their specific sequence targets (usually a
short string of very precise letters that are targeted by proteins with whimsical names like Star, Nova, and Quaking-like).
With this knowledge, they were able to approach the issue statistically to document significant features that help to control
alternate splicing. They found many motifs (short DNA words of 5-10 letters each) before and after many exons that were
strongly associated with different cell types. In all, they could explain 60% of the alternate splicing patterns found in the
human genome just by the presence or absence of these motifs. Many of the motifs were known previously and are sites for
known DNA-binding proteins. Many other motifs were new to science. The median number of tissue-specific motifs associated with splicing, per exon, ranged from 12 for the central nervous system to 19 for embryo.11 There were
additional tissue-independent features associated with most or all exons and additional and abundant short motifs that were
not considered in the above counts. This means the splicing code is complex and that complex combinations of instructions
are needed to control how the many exons combine to produce the multitude of proteins found in the human body.They also
discovered features related to splicing much farther away from the protein-coding regions than they expected. Because of
technical limitations, most studies on transcription regulation have historically focused on a few dozen letters immediately
upstream or downstream of a target sequence. Here, they document features much further into non-coding regions than
previously known (up to 300 letters away). Thus, even more junk DNA has been subsumed into the functional DNA
category! But this is only the beginning. They have only scratched the surface and have already discovered amazing
complexity. They only managed a prediction accuracy of 60%. Therefore, much remains to be discovered. Where is the
missing information? Perhaps it will be found deeper into the non-coding DNA. Perhaps, because they did not consider the
3-D architecture of the DNA within the nucleus, additional features may be discovered much farther away or even on
different chromosomes! The possibilities are endless, and we will certainly update you as more is learned. There is one final
implication of this work I would like to discuss. There are many pseudogenes in the genome that look like functional genes
but have mutations that prevent them from being turned into proteins. The presence of pseudogenes has been an enigma
since their discovery, but the idea has generally been used to attack creationists and other advocates of design. I believe the
arguments are spurious12 and we have written much about them in prior articles. 13 Even though functions have been found
for many pseudogenes, it is true that, if transcribed and spliced, a pseudogene cannot be translated into a protein. However,
now that we are aware of alternate splicing, future work may show that many of the pseudogene exons are incorporated into
functional proteins. If so, the entire pseudogene argument will collapse like a house of cards. But only time will tell. For now, let us marvel at the amazingly engineered human genome. The genetic computer program is, to date, unsurpassed by
any human technology. The wisdom and foresight that went into it is nothing short of stunning. He engineered a string of
DNA as long as a person is tall that could withstand thousands of errors (mutations), adapt to changing environments
(through self-modifying code that turns different genes on and off, depending on conditions), and that can be packed into a
microscopic cell without forming knots! Now we learn that his program is a wonder of data compression and efficiency. It is
more sophisticated than anything we have ever contemplated.
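To make the intron/exon splicing and alternate splicing discussed in this article concrete, here is a minimal sketch. The sequences and coordinates are invented; the only borrowed detail is that introns typically begin with GT and end with AG, the kind of head-and-tail splice signal mentioned above.

```python
# A minimal sketch of splicing: a 'gene' is a run of exons separated by introns;
# splicing removes the introns, and alternate splicing joins different subsets of
# exons to yield different products from the same stretch of DNA. Illustrative only.

exon1, exon2, exon3 = "ATGGCT", "TTTCCA", "GGATAA"
intron = "GTAAGTCCCCAG"                  # a stand-in intron (GT ... AG ends)
gene = exon1 + intron + exon2 + intron + exon3

# Work out exon coordinates from the construction above.
e1 = (0, len(exon1))
e2 = (e1[1] + len(intron), e1[1] + len(intron) + len(exon2))
e3 = (e2[1] + len(intron), e2[1] + len(intron) + len(exon3))
exons = [e1, e2, e3]

def splice(gene, exon_coords, keep=None):
    """Join the chosen exons in order, discarding introns (and any skipped exons)."""
    keep = range(len(exon_coords)) if keep is None else keep
    return "".join(gene[s:e] for i, (s, e) in enumerate(exon_coords) if i in keep)

print(splice(gene, exons))                # ATGGCTTTTCCAGGATAA: all exons joined
print(splice(gene, exons, keep={0, 2}))   # ATGGCTGGATAA: exon 2 skipped (alternate splicing)
```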
Genetics: no friend of evolution
A highly qualified biologist tells it like it is.
by Lane Lester
Genetics and evolution have been enemies from the beginning of both concepts. Gregor Mendel, the father of genetics, and
Charles Darwin, the father of modern evolution, were contemporaries. At the same time that Darwin was claiming that
creatures could change into other creatures, Mendel was showing that even individual characteristics remain constant.
While Darwin's ideas were based on erroneous and untested ideas about inheritance, Mendel's conclusions were based on careful experimentation. Only by ignoring the total implications of modern genetics has it been possible to maintain the fiction of evolution. To help us develop a new biology based on creation rather than evolution, let us sample some of the
evidence from genetics, arranged under the four sources of variation: environment, recombination, mutation, and creation.
Environment
This refers to all of the external factors which influence a creature during its lifetime. For example, one person may have
darker skin than another simply because she is exposed to more sunshine. Or another may have larger muscles because
he exercises more. Such environmentally-caused variations generally have no importance to the history of life, because they
cease to exist when their owners die; they are not passed on. In the middle 1800s, some scientists believed that variations
caused by the environment could be inherited. Charles Darwin accepted this fallacy, and it no doubt made it easier for him to
believe that one creature could change into another. He thus explained the origin of the giraffe's long neck in part through
the inherited effects of the increased use of parts. 1 In seasons of limited food supply, Darwin reasoned, giraffes would
stretch their necks for the high leaves, supposedly resulting in longer necks being passed on to their offspring.
Recombination
This involves shuffling the genes and is the reason that children resemble their parents very closely but are not exactly like
either one. The discovery of the principles of recombination was Gregor Mendel's great contribution to the science of
genetics. Mendel showed that while traits might be hidden for a generation they were not usually lost, and when new traits
appeared it was because their genetic factors had been there all along. Recombination makes it possible for there to be
limited variation within the created kinds. But it is limited because virtually all of the variations are produced by a reshuffling
of the genes that are already there. For example, from 1800, plant breeders sought to increase the sugar content of the
sugar beet. And they were very successful. Over some 75 years of selective breeding it was possible to increase the sugar
content from 6% to 17%. But there the improvement stopped, and further selection did not increase the sugar content. Why?
Because all of the genes for sugar production had been gathered into a single variety and no further increase was possible.
Among the creatures Darwin observed on the Galápagos islands was a group of land birds, the finches. In this single
group, we can see wide variation in appearance and in life-style. Darwin provided what I believe to be an essentially correct
interpretation of how the finches came to be the way they are. A few individuals were probably blown to the islands from the
South American mainland, and todays finches are descendants of those pioneers. However, while Darwin saw the finches
as an example of evolution, we can now recognize them merely as the result of recombination within a single created kind.
The pioneer finches brought with them enough genetic variability to be sorted out into the varieties we see today.2
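The sugar beet example above can be caricatured with a toy simulation: selection can only concentrate alleles that are already in the population, so the trait climbs and then plateaus once every 'high-sugar' allele has been gathered into the one variety. All numbers below are invented to roughly match the 6% and 17% figures quoted; nothing here models real beet genetics.

```python
# A toy simulation of selection acting only on pre-existing variation.
import random

N_LOCI = 10                   # loci that each contribute to sugar content (hypothetical)
POP_SIZE = 200
BASE, PER_ALLELE = 6.0, 1.1   # ~6% baseline; ~17% when all ten 'high-sugar' alleles are fixed

def sugar(plant):
    return BASE + PER_ALLELE * sum(plant)

def breed(parents):
    """Offspring recombine (reshuffle) their parents' existing alleles; no new ones arise."""
    mum, dad = random.sample(parents, 2)
    return [random.choice(pair) for pair in zip(mum, dad)]

# Start with the high-sugar allele present at low frequency at each locus.
population = [[1 if random.random() < 0.2 else 0 for _ in range(N_LOCI)]
              for _ in range(POP_SIZE)]

for generation in range(40):
    population.sort(key=sugar, reverse=True)
    best = population[: POP_SIZE // 5]                 # keep the sweetest 20%
    population = [breed(best) for _ in range(POP_SIZE)]

print(round(max(sugar(p) for p in population), 1))     # approaches ~17, then stops improving
```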
Mutation
Now to consider the third source of variation: mutation. Mutations are mistakes in the genetic copying process. Each living cell has intricate molecular machinery designed for accurately copying DNA, the genetic molecule. But as in other copying processes, mistakes do occur, although not very often. Once in every 10,000 to 100,000 copies, a gene will contain a mistake. The cell has machinery for correcting these mistakes, but some mutations still slip through. What kinds of changes are produced by mutations? Some have no effect at all, or produce so small a change that they have no appreciable effect on the creature. But many mutations have a significant effect on their owners.

[Photo caption (photo by Ken Ham): In a fallen world, predators like this tiger, by culling the more defective animals, may serve to slow genetic deterioration by screening out the effects of mutation.]

Based on the creation model, what kind of effect would we expect from random mutations, from genetic mistakes? We would expect virtually all of those which make a difference to be harmful, to make the creatures that possess them less successful than before. And this prediction is borne out most convincingly. Some examples help to illustrate this. Geneticists began breeding the fruit fly, Drosophila melanogaster, soon after the turn of the century, and since 1910, when the first mutation was reported, some 3,000 mutations have been identified.3 All of the mutations are harmful or harmless; none of them produce a more successful fruit fly, exactly as predicted by the creation model.

[Photo caption: The naked rooster mutation: no feathers are produced. Such mutational defects may rarely be beneficial (e.g. if a breeder were to select this type to prevent having to pluck pre-roasting?) but never add anything new. There is no mutation which shows how feathers or anything similar arose.]

Is there, then, no such thing as a beneficial mutation? Yes, there is. A beneficial mutation is simply one that makes it possible for its possessors to contribute more offspring to future generations than do those creatures that lack the mutation. Darwin called attention to wingless beetles on the island of Madeira. For a beetle living on a windy island, wings can be a definite disadvantage, because creatures in flight are more likely to be blown into the sea. Mutations producing the loss of flight could be helpful. The sightless cave fish would be similar. Eyes are quite vulnerable to injury, and a creature that lives in pitch dark would benefit from mutations that would replace the eye with scar-like tissue, reducing that vulnerability. In the world of light, having no eyes would be a terrible handicap, but it is no disadvantage in a dark cave. While these mutations produce a drastic and beneficial change, it is important to notice that they always involve loss of information and never gain. One never observes the reverse occurring, namely wings or eyes being produced on creatures which never had the information to produce them.

Natural selection

Natural selection is the obvious fact that some varieties of creatures are going to be more successful than others, and so they will contribute more offspring to future generations. A favourite example of natural selection is the peppered moth of England, Biston betularia. As far as anyone knows, this moth has always existed in two basic
varieties, speckled and solid black. In pre-industrial England, many of the tree trunks were light in colour. This provided a
camouflage for the speckled variety, and the birds tended to prey more heavily on the black variety. Moth collections showed
many more speckled than black ones. When the Industrial Age came to England, pollution darkened the tree trunks, so the
black variety was hidden, and the speckled variety was conspicuous. Soon there were many more black moths than
speckled [Ed. note: see Goodbye, peppered moths for more information]. As populations encounter changing environments,
such as that described above or as the result of migration into a new area, natural selection favours the combinations of
traits which will make the creature more successful in its new environment. This might be considered as the positive role of
natural selection. The negative role of natural selection is seen in eliminating or minimizing harmful mutations when they
occur.
Creation

The first three sources of variation are woefully inadequate to account for the diversity of life we see on earth today. An
essential feature of the creation model is the placement of considerable genetic variety in each created kind at the
beginning. Only thus can we explain the possible origin of horses, donkeys, and zebras from the same kind; of lions, tigers,
and leopards from the same kind; of some 118 varieties of the domestic dog, as well as jackals, wolves and coyotes from
the same kind. As each kind obeyed the designer's command to be fruitful and multiply, the chance processes of
recombination and the more purposeful process of natural selection caused each kind to subdivide into the vast array we
now see.
Astonishing DNA complexity uncovered
by Alex Williams
Because of evolutionary notions of our origin, our DNA was supposed to be
mostly junk, leftovers of our animal ancestry. This has proven to be yet
another evolutionary impediment to scientific progress. Photo sxc.hu
Published: 20 June 2007(GMT+10)
When the Human Genome Project published its first draft of the human genome in 2003, they already knew certain things in advance. These included:
Coding segments (genes that coded for proteins) were a minor component of the total amount of DNA in each cell. It was embarrassing to find that we have only about as many genes as mice (about 25,000), and these constitute only about 3% of the entire genome.
The non-coding sections (i.e. the remaining 97%) were nearly all of unknown function. Many called it 'junk DNA'; they thought it was the miscopied and mutation-riddled left-overs abandoned by our ancestors over millions of years. Molecular taxonomists routinely use this junk DNA as a 'molecular clock': a silent record of mutations that have been undisturbed by natural selection for millions of years because it does not do anything. They have constructed elaborate evolutionary histories for all different kinds of life from it.
Genes were known to be functional segments of DNA (exons) interspersed with non-functional segments (introns) of unknown purpose. When the gene is copied (transcribed into RNA), the introns are spliced out and the exons are joined up to produce the functional messenger RNA that is then translated into protein.
Copying (transcription) of the gene began at a specially marked START position, and ended at a special STOP sign.
Gene switches (the molecules involved are collectively called transcription factors) were located on the chromosome adjacent to the START end of the gene.
Transcription proceeds one way, from the START end to the STOP end.
Genes were scattered throughout the chromosomes, somewhat like beads on a string, although some areas were gene-rich and others gene-poor.
DNA is a double helix molecule, somewhat like a coiled zipper. Each strand of the DNA zipper is the complement of the other: as on a clothing zipper, one side has a lump that fits into a cavity on the other strand. Only one side of the DNA zipper (called the sense strand) specifies the correct protein sequence. The complementary strand is called the anti-sense strand. The sense strand is like an electrical extension cord where the female end is safe to leave open until an appliance is attached, but the protruding male end is active and for safety's sake only works when plugged into a female socket. Thus, protein production usually only comes from copying the sense strand, not the anti-sense strand. The anti-sense strand provides a template for copying the sense strand, in the way that a photographic negative is used to produce a positive print. Some exceptions to this rule were known (i.e. that in some cases anti-sense strands were used to make protein) but no one expected the whole anti-sense strand to be transcribed.
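That last point can be pictured with a few lines of Python (a minimal sketch; the sequence is invented and strand orientation is ignored for simplicity). The two strands carry complementary, not identical, messages, and the transcript normally matches the sense strand:

# Minimal sketch of the sense/anti-sense relationship (illustrative only).
PAIR = {"A": "T", "T": "A", "G": "C", "C": "G"}

def complementary_strand(strand: str) -> str:
    # Base-pairing partner of each letter: this gives the anti-sense strand.
    return "".join(PAIR[base] for base in strand)

def transcript(sense: str) -> str:
    # The RNA copy carries the sense-strand message, with U in place of T.
    return sense.replace("T", "U")

sense = "ATGGCCTTT"                                 # hypothetical sense strand
print("sense     :", sense)                         # ATGGCCTTT
print("anti-sense:", complementary_strand(sense))   # TACCGGAAA
print("transcript:", transcript(sense))             # AUGGCCUUU

Because the two strands spell different messages, finding the anti-sense strand transcribed wholesale was not expected.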
This whole structure of understanding has now been turned on its head. A project called ENCODE recently reported an intensive study of the transcripts (copies of RNA produced from the DNA) of just 1% of the human genome.1,2 Their findings include the following inferences:
About 93% of the genome is transcribed (not 3%, as expected). Further study with more wide-ranging methods may raise this figure to 100%. Because much energy and coordination is required for transcription, this means that probably the whole genome is used by the cell and there is no such thing as junk DNA.
Exons are not gene-specific but are modules that can be joined to many different RNA transcripts. One exon (i.e. one part of one gene) can be used in combination with up to 33 different genes located on 14 different chromosomes. This means that one exon can specify one part shared in common by many different proteins.
There is no 'beads on a string' linear arrangement of genes, but rather an interleaved structure of overlapping segments, with typically 5, 7, 9 or more transcripts coming from the one gene.
Not just one strand, but both strands (sense and anti-sense) of the DNA are fully transcribed.
Transcription proceeds not just one way but both backwards and forwards.
Transcription factors can be tens or hundreds of thousands of base-pairs away from the gene that they control, even on
different chromosomes.
There is not just one START site, but many, in each particular gene region.
There is not just one transcription triggering (switching) system for each region, but many.
The authors conclude:
An interleaved genomic organization poses important mechanistic challenges for the cell. One involves the [use of the] same DNA molecules for multiple functions. The overlap of functionally important sequence motifs must be resolved in time and space for this organization to work properly. Another challenge is the need to compartmentalize RNA or mask RNAs that could potentially form long double-stranded regions, to prevent RNA-RNA interactions that could prompt apoptosis [programmed cell death].
This concern for the safety of so many RNA molecules being produced in such a small space is well-founded. RNA is a long single-strand molecule not unlike a long piece of sticky-tape: it will stick to any nearby surface, including itself! Unless properly coordinated, it will all scrunch up into a sticky mess.
These results are so astonishing, so shocking, that it is going to take an awful lot more work to untangle what is really going on in cells. And the molecular taxonomists, who have been drawing up evolutionary histories (phylogenies) for everything, are going to have to undo all their years of junk-DNA-based historical reconstructions and wait for the full implications to emerge before they try again. One of the supposedly knock-down arguments that humans have a common ancestor with chimpanzees is shared non-functional DNA coding. That argument just got thrown out the window.

Astonishing DNA complexity update


by Alex Williams

Published: 3 July 2007 (GMT+10)


Recently we reported astonishing new discoveries about the complexity of the information content stored in the DNA molecule.1 Notably, the 97% of the human DNA that does not code for protein is not leftover junk DNA from our evolutionary past, as previously thought, but is virtually all being actively used right now in our cells.
Here are a few more exciting details from the ENCODE (Encyclopedia of DNA Elements) pilot project report.2 As a help in understanding this, DNA is a very stable molecule ideal for storing information. In contrast, RNA is a very active (and unstable) molecule and does lots of work in our cells. To use the stored information on our DNA, our cells copy the information onto RNA transcripts that then do the work as instructed by that information.
Traditional beads-on-a-string type genes do form the basis of the protein-producing code, even though much greater complexity has now been uncovered.
Genes found in the ENCODE project differ by only about 2% from the existing catalogue of known protein-coding genes.
We reported previously that the transcripts overlap the gene regions, but the overlaps are huge compared to the size of the genes. On average, the transcripts are 10 to 50 times the size of the gene region, overlapping on both sides. And as many as 20% of transcripts range up to more than 100 times the size of the gene region. This would be like photocopying a page in a book and having to get information from 10, 50 or even 100 other pages in order to use the information on that page.
The untranslated regions (now called UTRs, rather than junk) are far more important than the translated regions (the genes), as measured by the number of DNA bases appearing in RNA transcripts. Genic regions are transcribed on average in five different overlapping and interleaved ways, while UTRs are transcribed on average in seven different overlapping and interleaved ways. Since there are about 33 times as many bases in UTRs as in genic regions, that makes the 'junk' about 50 times (33 × 7/5 ≈ 46) more active than the genes.
Transcription activity can best be predicted by just one factor: the way that the DNA is packaged into chromosomes. The DNA is coiled around protein globules called histones, then coiled again into a rope-like structure, then super-coiled in two stages around scaffold proteins to produce the thick chromosomes that we see under the microscope. This suggests that DNA information normally exists in a form similar to a closed book: all the coiling prevents the coded information from coming into contact with the transcription machinery. When the cell wants some information it opens a particular page, photocopies the information, then closes the book again. Other recent work3 shows
that this is physically accomplished as follows:
The chromosomes in each cell are stored in the membrane-bound nucleus.
The nuclear membrane has about 2000 pores in it, through which molecules can be passed in and out. The required
chromosome is brought near to one of these nuclear pores.
The section of DNA to be transcribed is placed in front of the pore.
The supercoil is unwound to expose the transcription region.
The histone coils are twisted so as to expose the required copying site.
The double-helix of the DNA is unzipped to expose the coded information.
The DNA is grasped into a loop by the enzymes that do the copying, and this loop is copied onto an RNA transcript. The
transcript is then checked for accuracy (and is degraded and recycled if it is faulty). The RNA transcript is then specially
tagged for export, and is exported through the pore and carried to wherever it is needed in the cell. The book of DNA information is then closed by a reversal of the coiling process and movement of the chromosome away from the nuclear pore region.
The most surprising result, according to the ENCODE authors, is that 95% of the functional transcripts (genic and UTR transcripts with at least one known function) show no sign of selection pressure (i.e. they are not noticeably conserved and are mutating at the average rate). This contradicts Charles Darwin's theory that natural selection is the major cause of our evolution. It also creates an interesting paradox: cell architecture, machinery and metabolic cycles are all highly conserved (e.g. the human insulin gene has been put into bacteria to produce human insulin on an industrial scale), while most of the chromosomal information is freely mutating. How could this state of affairs be maintained for the supposed 3.8 billion years since bacteria first evolved? A better answer might be that life is only thousands, not billions, of years old. It also looks like cells, not genes, are in control of life: the direct opposite of what neo-Darwinists have long assumed.
Evidence for the design of life: part 1 – Genetic redundancy
by Peter Borger
Knockout strategies have demonstrated that the function of many genes cannot be studied by disrupting them in model
organisms because the inactivation of these genes does not lead to a phenotypic effect. For living systems, this peculiar
phenomenon of genetic redundancy seems to be the rule rather than the exception. Genetic redundancy is now defined as the situation in which the disruption of a gene is selectively neutral. Biology shows us that 1) two or more genes in an organism can often substitute for each other, and 2) some genes are just there in a silent state. Inactivation of such redundant genes does not jeopardize the individual's reproductive success and has no effect on survival of the species. Genetic redundancy is the big surprise of modern biology. Because there is no association between redundant genes and genetic duplications, and because redundant genes do not mutate faster than essential genes, redundancy brings down more than one pillar of contemporary evolutionary thinking.

Figure 1. To create a mouse knockout for a particular gene, a selectable marker is integrated into the gene of interest in an embryonic stem cell. The marker disrupts (knocks out) the gene of interest. The manipulated embryonic stem cell is then injected into a mouse blastocyst (an early embryo) and transplanted back into the uterus of a pseudo-pregnant mouse. Offspring carrying the interrupted gene can be sorted out by screening for the presence of the selection marker. It is now fairly easy to obtain animals in which both copies are interrupted through selective breeding: Mendel's law of segregation assures that crossbreeding heterozygous littermates will produce some individuals that lack both functional copies.
The discovery of the primary rules governing
biology in the second half of the 20th century paved the way for a more fundamental understanding of the complexity of life.
One of the spin-offs of this knowledge has been the development of sophisticated techniques to elucidate the function of
proteins. When molecular biologists want to know the function of a particular human protein they genetically modify a
laboratory mouse so that it lacks the corresponding gene (for the laboratory procedure see figure 1). Mice that have both
alleles of a gene interrupted cannot produce the corresponding protein: they are called knockouts. Theoretically, the phenotype of a mouse lacking specific genetic information could provide essential information about the function of the gene. Over the years, thousands of knockouts have been generated. The knockout strategy has helped elucidate the functions of hundreds of genes and has contributed immensely to our biological knowledge. However, there has been one unexpected surprise: the no-phenotype knockout. This is unexpected because, according to the Darwinian paradigm, all genes should have a selectable advantage. Hence, knockouts should have measurable, detectable phenotypes. The no-phenotype knockouts demonstrate that genes can be disrupted without, or with only minor, detectable effects on the phenotype. Many genes seem to have no measurable function! This is known as genetic redundancy and it is one of the big surprises of modern biology.
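Incidentally, the selective-breeding step mentioned in the caption to figure 1 is easy to quantify. In the minimal Python sketch below (the gene is hypothetical and the simulation purely illustrative), two heterozygous littermates, each carrying one working allele ('+') and one disrupted allele ('-'), are crossed; Mendelian segregation predicts that about one offspring in four inherits two disrupted copies:

import random

PARENT_ALLELES = ["+", "-"]   # each heterozygous parent carries one of each

def offspring():
    # Each parent passes on one randomly chosen allele (Mendelian segregation).
    return (random.choice(PARENT_ALLELES), random.choice(PARENT_ALLELES))

N = 100_000
knockouts = sum(1 for _ in range(N) if offspring() == ("-", "-"))
print(f"fraction of -/- offspring: {knockouts / N:.3f}")   # about 0.25 (1/2 x 1/2)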
Molecular switches
One of the most intriguing examples of genetic redundancy is found in the SRC gene family. This family comprises a group
of eight genes that code for eight distinct proteins all with a function that is technically known as tyrosine kinase. SRC
proteins attach phosphate groups to other proteins that contain the amino acid tyrosine in a specific amino acid context. The
result of this attachment is that the protein becomes activated; it is switched on, and can hence pass down information in a
signalling cascade. Four closely related members of the family are named SRC, YES, FYN and FGR, and the other related
members are known as BLK, HCK, LCK and LYN. Both sub-families are so-called non-receptor tyrosine kinases, and transmit signals from
the exterior of the cell to the nucleus, the operation centre where the information present in the genes is transcribed into
messenger RNA. The proteins of the SRC gene family operate as molecular switches that regulate growth and
differentiation of cells. When a cell is triggered to proliferate, tyrosine kinase proteins are transiently switched on, and then
immediately switched off.
The SRC gene family is among the most notorious of gene families, since its members can cause cancer as a consequence of single point mutations. A point mutation is a change in a DNA sequence that alters only one single nucleotide (a DNA 'letter') of the entire gene. When the point mutation is not at a silent position, it will cause the organism's protein-making machinery to incorporate a wrong amino acid. The consequence of the point mutation is that the organism now produces a protein that cannot be switched off. Mutated SRC genes are particularly dangerous because they permanently activate signalling cascades that induce cell proliferation: the signal that tells cells to divide is permanently switched on. The result is uncontrolled proliferation of cells: cancer. The growth-promoting point mutations cannot be overcome by allelic compensation because a normal protein cannot help to switch off the mutated protein.
Despite the SRC
protein being expressed in many tissues and cell types, mice in which the SRC gene has been knocked out are still viable.
The only obvious characteristic of the knockout is the absence of two front teeth due to osteopetrosis. In contrast, there are
essentially no point mutations allowed in the SRC protein without severe phenotypic consequences. Amino acid changing
point mutations in most, presumably all, of the SRC genes can lead to uncontrolled cellular replication.1 Knockout mouse models have been generated to reveal the functions of all the members of the SRC gene family. Four out of eight knockouts did not have a detectable phenotype. Despite their cancer-inducing properties, half of the SRC genes appear to be
redundant. Standard evolutionary theory tells us that redundant gene family members originated through gene duplications.
Duplicated genes are truly redundant and as such they are expected to reduce to a single functional copy over time through
the accumulation of mutations that damage the duplicated genes. Such mutations can be frame-shift mutations that
introduce premature stop signals, which are recognized by the cellular translation-machines to terminate protein synthesis.
The existence of the SRC gene family has been explained as follows: 'In the redundant gene family of SRC-like proteins, many, perhaps almost all point mutations that damage the protein also cause deleterious phenotypes and kill the organism. The genetic redundancy cannot decay away through the accumulation of point mutations.'1
This scenario implies that the SRC genes are destined to reside in the genome forever. Point mutations that immediately kill raise an intriguing origin question. If the SRC genes are really so potently harmful that point mutations induce cancer, how could this extended gene family come into existence through gene duplication and diversify through mutations in the first place? After the first duplication, neither of the genes is allowed to change, because change would invoke a lethal phenotype and kill the organism through cancer. Amino-acid-changing mutations in the SRC genes will permanently be selected against. The same holds true for the third, fourth and additional gene duplications. New gene copies are only allowed to mutate at neutral sites that do not replace amino acids in the protein. Otherwise the organism will die from tumours. Because of this purifying selection mechanism, the duplicates should remain as they are. Yet the proteins of the SRC family are distinctly different, only sharing 60–80% of their sequences.
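The difference between a mutation at a 'silent' position and an amino-acid-changing mutation, on which this argument turns, can be illustrated with a few entries from the standard genetic code (a minimal sketch; the example codons are chosen arbitrarily):

# A handful of codon-to-amino-acid assignments from the standard genetic code.
CODON_TABLE = {
    "GGU": "Gly", "GGC": "Gly", "GGA": "Gly", "GGG": "Gly",   # all four spell glycine
    "GAA": "Glu", "GAG": "Glu",                               # glutamate
}

original = "GGA"   # glycine
silent   = "GGC"   # third letter changed: still glycine (a 'silent' mutation)
missense = "GAA"   # second letter changed: now glutamate (amino-acid-changing)

print(CODON_TABLE[original], CODON_TABLE[silent])     # Gly Gly
print(CODON_TABLE[original], CODON_TABLE[missense])   # Gly Glu

Only the second kind of change alters the protein, which is why selection can act so strongly against it.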
Redundancy: the rule, not the exception
In 1964, a knockout cross-country skier won two gold medals during the Winter Olympics in Innsbruck. In true Olympic
tradition, Eero Maentyranta's 15 km and 30 km success was surrounded by controversy. Tests showed that he had 15% more red blood cells than normal subjects, and Eero was accused of using doping to increase his level of red blood cells. Yet no trace of blood doping could be found. In 1964 nobody knew why, but modern biology later showed that Maentyranta had a mutated EPO receptor gene. Erythropoietin (EPO) is a messenger protein that tells the bone marrow to increase the production of red blood cells. To increase red blood cell levels, EPO binds to the EPO receptor, which generates two opposite signals: one to instruct bone marrow cells to become red blood cells (the on-switch) and one to reduce production of red blood cells (the off-switch). This auto-regulatory mechanism assures a balanced production of red blood cells. In 1993, it turned out that the Olympic medallist had a mutation that knocked out the off-switch.2 The EPO receptor of the Finnish athlete generated a normal activation signal, but not the deactivating one. People can do well without the off-switch.
In humans, the muscle-fiber-producing ACTN3 gene can also be absent entirely without consequences for fitness.3 Humans can also do without the GULO gene,4 the gene coding for caspase 12,5 the CCR5 gene6 and some of the GST genes that are involved in the detoxification of polycyclic aromatic hydrocarbons present in cigarette smoke.7 All
these genes can be found inactivated in entire human populations (GULO, caspase 12) or subpopulations thereof. The
Douc Langur (Pygathrix nemaeus), an Asian leaf-eating Colobine monkey, is the natural no-phenotype knockout for
the angiogenin gene, which codes for a small protein that stimulates the formation of blood vessels.8 Bacterial genomes can be reduced by over 9% without selective disadvantage on minimal medium,9 and mice in which 3 megabases of conserved DNA were erased showed no signs of reduced survival and no indication of overt pathology.10 Fewer than 2% of approximately 200 Arabidopsis thaliana (Mouse-Ear Cress) knockouts displayed significant phenotypic alterations. Many of the knockouts did not affect plant morphology even in the presence of severe physiological defects.11 In the nematode
worm Caenorhabditis elegans a surprising 89% of single-copy and 96% of duplicate genes show no detectable phenotypic
effect when they are knocked out.12 Prion proteins are thought to have a function in learning processes, but when they are
misfolded they can cause bovine spongiform encephalitis (BSE) or Creutzfeldt–Jakob disease. In order to make BSE-resistant cows, a knockout breed has been created lacking the prion protein. A thorough health assessment of this knockout
breed revealed only small differences from wild-type animals. Apparently, cows can thrive very well without the prion
protein.13 Research on histone H1 genes, once believed to be indispensable for DNA condensation, suggests that any
individual H1 subtype is not necessary for mouse development, and that loss of even two subtypes is tolerated if a normal
H1-to-nucleosome stoichiometry is maintained.14 Even complete highly specialized cells can be redundant. A strain of
laboratory mouse, named WBB6F1, lacks a specific type of blood cells known as mast cells. The reported no-phenotype
knockouts are probably only the tip of the iceberg. As reported in Nature, few knockout organisms in which no phenotype could be traced ever see the light of day: 'a lot of those things [no-phenotype knockouts] you don't hear about.' No-phenotype knockouts are negative results, and as such they are usually not reported in scientific journals, because they do not have news value. To address the problem, the journal Molecular and Cellular Biology has since 1999 had a section given over to knockout and other mutant mice that seem perfectly normal.15 So how are genes, cells and organisms supposed to have evolved without selective constraints? If organisms can do without complete cells, it would be outlandish to assert that natural selection was the driving force shaping those cells. Two decades of knockout experiments have made it clear that
genetic redundancy is a major characteristic of all studied life forms.
Paradigm lost
Genetic redundancy falsifies several evolutionary hypotheses. Firstly, truly redundant genes present a paradox: natural selection cannot prevent the accumulation of harmful mutations in them, and hence cannot prevent redundancies from being lost. Secondly, redundant genes do not evolve (mutate) any faster than essential
genes. If protein evolution is due in large part to neutral and slightly deleterious amino acid substitutions, then the incidence
of such mutations should be greater in proteins that contribute less to individual reproductive success. The rationale for this
prediction is that non-essential proteins should be subject to weaker purifying selection and should accumulate mildly
deleterious substitutions more rapidly. This argument, which was presented over twenty years ago, is fundamental to many
theoretical applications of evolutionary theory, but despite intense scientific scrutiny the prediction has not been confirmed.
In contrast, a systematic analysis of mouse genes has shown that essential genes do not evolve more slowly than nonessential ones.16 Likewise, E. coli proteins that operate in huge redundant networks can tolerate just as many mutations as
unique single-copy proteins,17 and scientists comparing the human and chimpanzee genomes found that non-functional
pseudogenes, which can be considered as redundancies, have similar percentages of nucleotide substitutions as do
essential protein-coding genes.18 Thirdly, as discussed in more detail below, several recent biology studies have provided
evidence that genetic redundancy is not associated with gene duplications.
What does the evolutionary paradigm say?
An important question that needs to be addressed is: can we understand genetic redundancy from Darwin's natural-selection perspective? How can genetic redundancy be maintained in the genome without natural selection acting upon it continually? How did organisms evolve genes that are not subject to natural selection? First, let's look at how it is thought genetic redundancies arise. Susumu Ohno's influential 1970 book, Evolution by Gene Duplication, deals with this idea.19 Sometimes, during cell divisions, a gene or a longer stretch of biological information is duplicated. If the duplication occurs in germ-line cells and becomes heritable, the exact same gene may be present twofold in the genome of the offspring: a
genetic back-up. Ohno argues that gene and genome duplications are the principal forces that drive the increasing
complexity of Darwinian evolution, referring to the evolution from microbes to microbiologists. He proposes that duplications
of genetic material provide genetic redundancies which are then free to accumulate mutations and adopt novel biological
functions. Duplicated DNA elements are not subject to natural selection and are free to transform into novel genes. With
time, he argues, a duplicated gene will diverge with respect to expression characteristics or function due to accumulated
(point) mutations in the regulatory and coding segments of the duplicate. Duplicates transforming into novel genes with a
selective advantage will certainly be favored by natural selection. Meanwhile, the genetic redundancy will protect old
functions as new ones arise, hence reducing the lethality of mutations. Ohno estimates that for every novel gene to arise
through duplication, about ten redundant copies must join the ranks of functionless DNA base sequences.20 Diversification of duplicated genetic material is now the accepted standard evolutionary idea on how genomes gain useful information. Ohno's idea of evolution through duplication also provides an explanation for the no-phenotype knockouts: if genes duplicate fairly often, it is then reasonable to expect some level of redundancy in most genomes, because duplicates provide an organism with back-up genes. As long as duplicates do not change too much, they may substitute for each other. If one is lost, or inactivated, the other one takes over. Hence, Ohno's theory predicts an association between genetic redundancy and gene
duplication.
The evolutionary paradigm is wrong
Figure 2. A very simple scheme of a small robust network comprised of nodes A to E, where several nodes are redundant.
Some biologists have looked into this matter specifically using the wealth of
genetic data available for Saccharomyces cerevisiae, the common baker's yeast. A surprising 60% of Saccharomyces genes could be inactivated without producing a phenotype. In 1999, Winzeler and co-workers reported in Science that only 9% of the non-essential genes of Saccharomyces have sequence similarities with other genes present in the yeast's genome and could thus be the result of duplication events.21 Most redundant genes of Saccharomyces are not related to other genes in the yeast's genome, which suggests that genetic duplications cannot explain genetic redundancy. In 2000, Andreas Wagner confirmed Winzeler's original findings that weak or no-effect (i.e. non-essential and redundant) genes are no more likely to have paralogous (that is, duplicated) genes within the yeast genome than genes that do result in a defined phenotype when they are knocked out. Wagner concluded that the robustness of mutant strains cannot be caused by gene duplication and redundancy, but is more likely due to the interactions between unrelated genes.22 More recent studies have shown that cooperating networks of unrelated genes contribute significantly more to robustness than gene copy number.23 Redundant genes are proposed to have originated in gene duplications, but we do not find a link between genetic redundancy and duplicated genes in the genomes. Gene duplication is not a major contributor to genetic redundancy, and so the robust genetic networks found in organisms cannot be explained by it. The predicted association between genetic redundancy and gene duplication is non-existent. Ohno's interesting idea of evolution by gene duplication therefore cannot be right.
The non-linearity of biology
The no-phenotype knockouts can only be explained by taking into account the non-linearity of biochemical systems. It is
ironic that standard wall charts of biochemical reactions show hundreds of coupled reactions working together in networks,
while graduate students are tacitly encouraged to think in terms of linear cause and effect. The linear cause-and-effect
thinking in ancient Greek philosophy was adopted by nineteenth century European scholars, and is still dominating most
fields of science, including biology. We cannot understand genetic redundancy and biological robustness in linear terms of single causality, where A causes B causes C causes D causes E. Biological systems do not work like that. Biological systems are designed as redundant scale-free networks. In a scale-free network the distribution of node linkage follows a power law: there are many nodes with only a few links, fewer nodes with an intermediate number of links, and very few nodes (hubs) with a very large number of links. A scale-free network is very much like the Golden Orb's web: individual nodes are not essential for letting the system function as a whole. The internet is another example of a robust scale-free network: most websites make only a few links, a smaller fraction make an intermediate number, and a tiny minority make the majority of links. Usually hundreds of routers routinely malfunction on the Internet at any moment, but the network rarely suffers major disruptions. As many as 80% of randomly selected Internet routers can fail, but the remaining ones will still form a compact cluster in which there will still be a path between any two nodes.24 Likewise, we rarely notice the
consequences of thousands of errors that routinely occur in our cells.
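The robustness figures quoted above are easy to explore numerically. The minimal Python sketch below (it assumes the third-party networkx library; the network size and parameters are arbitrary) builds a scale-free network, removes 80% of its nodes at random, and reports how much of the remainder still hangs together as a single connected cluster:

import random
import networkx as nx   # third-party graph library, assumed available

# A scale-free network of 10,000 nodes (Barabasi-Albert preferential attachment).
G = nx.barabasi_albert_graph(n=10_000, m=2, seed=42)

# Randomly knock out 80% of the nodes.
failed = random.sample(list(G.nodes), k=int(0.8 * G.number_of_nodes()))
G.remove_nodes_from(failed)

# Size of the largest connected cluster among the survivors.
largest = max(nx.connected_components(G), key=len)
print(f"surviving nodes: {G.number_of_nodes()}")
print(f"largest connected cluster: {len(largest)} nodes "
      f"({len(largest) / G.number_of_nodes():.0%} of survivors)")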
Scale-free networks
Genes never operate alone but in redundant scale-free networks with an incredible level of buffering capacity. In a simple
non-linear biological system (presented in figure 2) with nodes A through E, A may cause B, but A also causes D independent of B and C. This very simple network of only five nodes demonstrates robustness due to the redundancy of B and C. If A fails to make the link with D, there are still B and C to make the connection. Extended networks composed of hundreds of interconnected proteins ensure that if one component becomes inactivated by a mutation, essential pathways will then not be shut down immediately. A network of cooperating proteins that can substitute for or bypass each other's
functions makes a biological system robust. It is hard to imagine how selection acts on individual nodes of a scale-free,
redundant system. Complex engineered systems rely on scale-free networks that can incorporate small failures in order to
prevent larger failures. In a sense, cooperating scale-free networks provide systems with an anti-chaos module which is
required for stability and strength. Scale-free genetic and protein networks are an intrinsic, engineered characteristic of
genomes and may explain why genetic redundancy is so widespread among organisms. Genetic networks usually serve to
stabilize and fine-tune the complex regulatory mechanisms of living systems. They control homeostasis, regulate the
maintenance of genomes and provide regulatory feedback on gene expression. An overlap in the functions of proteins also
ensures that a cell does not have to respond with only on or off in a particular biochemical process, but instead may
operate somewhere in between.
Most genes in the human genome are involved in regulatory networks that detect and
process information in order to keep the cell informed about its environment. The proteins operating in these networks come
as large gene families with overlapping functions. In a cascade of activation and deactivation of signalling proteins, external
messages are transported to the nucleus with information about what is going on outside so it can respond adequately. If
one of the interactions disappears, this will not immediately disturb the balance of life. The buffering capacity present in
redundant genetic networks also provides the robustness that allows living systems to propagate in time. In a linear system,
one detrimental mutation would immediately disable the system as a whole: the strength of a chain is determined by its
weakest link. Interacting biological networks, where parallel and converging links independently convey the same or similar
information, almost never fail. The Golden Orb's web only crumbles when an entire spoke is obliterated in a crash with a dragonfly, an event that will hardly ever happen. Biological systems operate as a spider's web: many interacting and
interwoven nodes produce robust genetic networks and are responsible for genetic redundancy.23
Conclusion
Genetic redundancy is an amazing property of genomes and has only recently become evident as a result of negative
knockout experiments. Protein-coding genes and highly conserved regions can be eliminated from the genome of model
organisms without a detectable effect on fitness. There is no association between redundant genes and gene duplications,
and redundant genes do not mutate faster than essential genes. Genetic redundancy stands as an unequivocal challenge to
the standard evolutionary paradigm, as it questions the importance of Darwin's selection mechanism as a major force in the
evolution of genes. It is also important to realize that redundant genes cannot have resided in the genome for millions of
years, because natural selection, a conservative force, cannot prevent their destruction due to debilitating mutations.
Mainstream biologists who are educated in the Darwinian framework are unable to understand the existence of genes
without natural selection. This is clear from a statement in Nature a few years ago by Mario Capecchi, a pioneer in the development of knockout technology: 'I don't believe that there is a single [knockout] mouse that does not have a phenotype. We just aren't asking the right questions.'15 The right question to be asked is: is the evolutionary paradigm wrong? My
answer is yes, it is. Current naturalistic theories do not explain what scientists observe in the genomes. Genetic redundancy
is the actual key to help us understand the robustness of organisms and also their built-in flexibility to rapidly adapt to
different environments. In part 2 of this series of articles, I will explain genetic redundancy in the context of baranomes, the
multipurpose genomes baramins were originally designed with in order to rapidly spread to all the corners and crevices of
the earth.
The design of life: part 3 – an introduction to variation-inducing genetic elements
by Peter Borger
The inheritance of traits is determined by genes: long stretches of DNA that are passed down from generation to generation. Usually, genes consist of a coding part and a non-coding regulatory part. The coding part of the gene determines the functional output, whereas the non-coding portion contains switches and units that determine when, where and how much of the functional output should be generated. Point-mutations in the coding part are predominantly neutral or slightly detrimental genetic noise that accumulates in the genome, whereas point-mutations in the regulatory part of DNA units can induce variation with respect to the amount of output. Previously, in part 2, I argued that created kinds were frontloaded with baranomes: that is, pluripotent genomes with an ability to induce variation from within. The output of (morpho)genetic algorithms present in the baranome can readily be modulated by variation-inducing genetic elements (VIGEs). VIGEs are frontloaded genetic elements normally referred to as endogenous retroviruses, insertion sequences, LINEs, SINEs, micro-satellites, transposons, and the like. In the present report, these transposable and repetitive DNA sequences are redefined as VIGEs, which solves the RNA virus paradox. The (morpho)genetic algorithms were designed in such a way that VIGEs easily integrated into them and became a part of them, hence making the program explicit.
The variation that Darwin saw in pigeons can be explained by the activation or deactivation of existing genetic sequences for feather production in different parts of the body. This gives no basis for asserting that pigeons could change into something which is not a pigeon.
In order to fight off invading bugs and parasites, higher organisms have
an elaborate mechanism that induces variation in immunological defence systems. One particular type of immune cells (B
cells) produces defence proteins known as immunoglobulins. Immunoglobulins are very sticky; they bind to intruders as
biological tags and mark them as alien. Other cells of the immune system then recognize the intruder, and a destruction
cascade is activated. To have a tag available for every possible alien intruder, millions of B cells have their own highly
specific gene for immunoglobulin production. In the genome there is only limited storage space for biological information, so
how can there be millions of genes? Well, there aren't. Immunoglobulin genes are assembled from several pre-existing DNA
sequences that can be independently put together. The part of the immunoglobulin that does the alien recognition contains
several domains which are each highly variable. Every single B cell forms a unique immunoglobulin gene by picking from
several short pre-existing DNA sequences. We also observe that later generations of immunoglobulins are more specific
than the earlier generations, in the sense that they bind more tightly to invading microorganisms. Binding affinity to an
invader is equivalent to recognition of that invader. And the better the immune system is able to recognize an intruder, the
better it is able to clear it. The increased specificity is due to somatic mutations deliberately introduced in the genes of the
immunoglobulins. A mechanism to rapidly induce mutations in immunoglobulin genes is present in the B cell genome. This
mechanism ensures that the recognition pattern specified by the genes becomes increasingly specific for the intruder. This
ability to recognize and defeat all potential microorganisms is characteristic of the immune systems of higher organisms,
including humans. The genomes contain all the necessary biological information required to induce variation from within. A
flexible genome is required to effectively ward off diseases and parasitic infections. B cells don't wait for mutations to
happen; they generate the necessary mutations themselves.
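The combinatorial principle at work here is simple to quantify. In the Python sketch below the segment pool sizes are round, hypothetical numbers (not the actual human figures), yet picking one segment from each pool already yields thousands of distinct genes, and pairing two independently assembled chains multiplies the diversity into the millions, before any deliberate somatic mutation is added:

# Hypothetical pools of pre-existing DNA segments (sizes are illustrative only).
variable_segments  = [f"V{i}" for i in range(1, 41)]   # 40 segments
diversity_segments = [f"D{i}" for i in range(1, 26)]   # 25 segments
joining_segments   = [f"J{i}" for i in range(1, 7)]    #  6 segments

# Each B cell assembles one gene by picking one segment from each pool.
one_chain = len(variable_segments) * len(diversity_segments) * len(joining_segments)
print(f"one-chain combinations: {one_chain}")                # 40 x 25 x 6 = 6000

# Pairing with an independently assembled partner chain (say ~1,000 options)
# multiplies the count again.
print(f"paired-chain combinations: {one_chain * 1_000:,}")   # 6,000,000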
Darwin revisited
Previously, in part 2,1 I argued that organisms are equipped with flexible, highly adaptable, pluripotent, multipurpose
genomes. Organisms are able to conquer the world through adaptive radiation of baranomes. But how do baranomes
unleash information? Do organisms have to wait for selectable mutations to occur in order to rapidly invade and occupy
novel ecological niches? Or were the baranomes of created kinds equipped with mechanisms to rapidly induce mutations,
similar to the variation generated by B cells? Let's turn to Darwin's The Origin of Species, where we will find some clues.
Darwin wrote quite extensively on variation, and in particular on the variation of feather patterns in pigeons:
Box 1. Common names of some well-known variation-inducing genetic elements (VIGEs) in prokaryotes (bacteria) and
eukaryotes (yeast, plants, insects and mammals).
Some facts in regard to the colouring of pigeons well deserve
consideration. The rock-pigeon is of a slaty-blue, and has a white
rump (the Indian sub-species, C. intermedia of Strickland, having it
bluish); the tail has a terminal dark bar, with the bases of the outer
feathers externally edged with white; the wings have two black
bars; some semi-domestic breeds and some apparently truly wild
breeds have, besides the two black bars, the wings chequered with
black. These several marks do not occur together in any other
species of the whole family. Now, in every one of the domestic
breeds, taking thoroughly well-bred birds, all the above marks, even
to the white edging of the outer tail-feathers, sometimes concur perfectly developed. Moreover, when two birds belonging to
two distinct breeds are crossed, neither of which is blue or has any of the above specified marks, the mongrel offspring are
very apt suddenly to acquire these characters; for instance, I crossed some uniformly white fantails with some uniformly
black barbs, and they produced mottled brown and black birds; these I again crossed together, and one grandchild of the
pure white fantail and pure black barb was of as beautiful a blue colour, with the white rump, double black wing-bar, and
barred and white-edged tail-feathers, as any wild rock pigeon! We can understand these facts, on the well-known principle
of reversion to the ancestral characters, if all the domestic breeds have descended from the rock-pigeon.2
Darwin argues, and correctly so, that all domestic pigeon breeds have descended from the rock-pigeon. He even knew, as demonstrated above, how to breed the rock-pigeon from several distinct pigeon races following a breeding pattern. Darwin describes a 'breeding algorithm' for pigeons, to obtain the ancestor of all pigeons! But does he also describe an algorithm for breeding turkeys from pigeons? No. Darwin doesn't know such an algorithm. If he had found an algorithm for breeding ducks or
magpies from pigeon genomes, he would have had solid evidence in favour of his proposal On The Origin of Species
Through the Preservation of Favoured Races. His breeding experiments led him to discover the principle of reversion to
ancestral characters, but contrary to common Darwinian wisdom, it is also the falsifying observation to his proposal for
the origin of species. The observation that pigeons bring forth pigeons, and nothing else but pigeons, is not exactly the
evidence needed to argue for the common descent of all birds. On the contrary! Darwins breeding experiments
demonstrated that a pigeon is a pigeon is a pigeon. Characteristics and traits within single species of pigeons may vary
tremendously, but he always started and ended with pigeons. Breeding experiments have always shown, without exception,
that novel and distinct bird species do not arise through artificial selection. Even Darwin argues that there is no doubt that
all varieties of ducks and rabbits have descended from the common wild duck and rabbit. 3 From the variation Darwin
observed in wild and domesticated populations, it does not follow that rabbits and ducks have some hypothetical common
ancestor in a fuzzy distant past. Darwin observed inborn, innate variation that already existed in the genomes of the pigeons
and it only had to be activated or expressed. From the excerpt above, we may even get an impression of how it works. A genetic algorithm for making feathers (a 'feather program') is part of the pigeon's genome and is present in every single cell.
The feather program is present in billions of pigeon cells, but it is NOT active in all those cells. Feathers are only formed
when the program is activated. The feather program is silent in cells where it should normally not operate. Activation of the
feather program in the wrong cells may often be incompatible with life, but sometimes it may produce pigeons with
(reversed) feathers on the feet. The program may be derepressed or activated through a mechanism that operates in the
pigeon's genome. Whether feathers appear on the feet or on the head, and whether they appear normal or reversed is merely a matter of activation and regulation of the feather program. But Darwin didn't know about silent genomic programs or how they could become active. He didn't know about gene regulation and molecular switches. Darwin did not know
anything about genes and genomes.
Analogous variation
The idea that Darwin had been working on for over two decades prior to the publication of Origin, his idée fixe, was how organic change (i.e. variation) present in populations might explain how novel species came into being. Unchanging, stable species were not what Darwin had in mind. He pondered the riddles of variation; he thought about laws and principles
associated with the process of variation and believed he could disclose them by the study of the formation of new breeds.
Drawing from what he knew about pigeon breeding and equine varieties, Darwin describes some of his ideas about the
laws of variation in chapter five of Origin:
Distinct species present analogous variations; and a variety of one species often
assumes some of the characters of an allied species, or reverts to some of the characters of an early progenitor. These
propositions will be most readily understood by looking to our domestic races. The most distinct breeds of pigeons, in
countries most widely apart, present sub-varieties with reversed feathers on the head and feathers on the feet, characters
not possessed by the aboriginal rock-pigeon; these then are analogous variations in two or more distinct races.4
Darwin describes that the exact same traits can appear in distinct breeds of pigeons and, importantly, these traits appeared independently in countries most widely apart. If several breeds acquire the same characteristics
independently, it is unlikely they do so because of chance. Rather, the pigeon genomes may activate or derepress the same
feather program independently. The effect is that distinct breeds in countries most widely apart acquire the same
characteristics. Over and over the same traits appear in separated populations of organisms as the result of mutations from
within. Animal breeders like exuberant patterns and rarities; that is exactly what they are looking for to select. Aberrant traits
that are normally under stringent negative selection, as might be the case for the pigeons reversed feathers, may readily
become visible as soon as the selective pressure is relieved; that is, when organisms are reared and fed in the protective
environment of captivity. Darwin called the phenomenon of independent acquisition of the same traits analogous variation. It
is a common phenomenon well known to breeders, and Darwin easily found more examples of analogous variation:
The
frequent presence of fourteen or even sixteen tail-feathers in the pouter, may be considered as a variation representing the
normal structure of another race, the fantail. I presume that no one will doubt that all such analogous variations are due to
the several races of the pigeon having inherited from a common parent the same constitution and tendency to variation,
when acted on by similar unknown influences. In the vegetable kingdom we have a case of analogous variation, in the
enlarged stems, or roots as commonly called, of the Swedish turnip and Ruta baga [sic] plants which several botanists rank
as varieties produced by cultivation from a common parent: if this be not so, the case will then be one of analogous variation
in two so-called distinct species; and to these a third may be added, namely, the common turnip. According to the ordinary
view of each species having been independently created, we should have to attribute this similarity in the enlarged stems of
these three plants, not to the vera causa of community of descent, and a consequent tendency to vary in a like manner, but
to three separate yet closely related acts of creation.5
Analogous variation originates in the genome. Through rearrangement and/or transposition of DNA elements, previously silent (cryptic) traits can be activated. The underlying molecular mechanism can't be merely random; if it were, then Darwin and other breeders would not have observed the expression of the same traits independently of each other. A more contemporary term for analogous variation would be non-random (or non-stochastic) variation, and it implies some sort of mechanism.
Reversions
In the excerpt above, Darwin also describes what he calls reversions. By this term he meant traits that are present in
ancestors, then disappear in first generation offspring, and then reappear in subsequent generations. Darwin acknowledged
that unknown laws of inheritance must exist, but still he talks about the proportion of blood. Reversions are easily explained
as traits present on separate chromosomes, and the inheritance of such traits is best understood from Gregor Mendel's inheritance laws. Through Mendel's discovery of the genetic laws that underlie the inheritance of traits associated with
chromosome segregation (a hallmark of sexual reproduction), Mendel gave us a quantum theory of inheritance. He found
that traits are always inherited in well-defined and predictable proportions, and do not just come and go. Darwin's reversions are traits that reappear in later generations due to the inheritance of the same genes (alleles) from both parents.5 Darwin didn't know about Mendel's laws of inheritance, nor did he know how variation is generated in genomes. What Darwin described in Origin, however, is that variation in offspring is a rule of biology. What Darwin described
in isolated species (whether domesticated breeds or island-bound birds) was the result of a burst of abundant speciation
resulting from multipurpose genomes. Variant breeds of pigeons are the phenotypes of a rearranged multipurpose pigeon
genome. The Galápagos finches (with their distinct beaks and body sizes) are the phenotypes of a rearranged multipurpose finch genome. Where does the variation stem from in populations of Galápagos finches?
Darwin was well aware of the profound lack of knowledge on the origin of variation, and did not exclude mechanisms or laws to drive biological variation:
I
have hitherto sometimes spoken as if the variations so common and multiform in organic beings under domestication, and in
a lesser degree in those in a state of nature had been due to chance. This, of course, is a wholly incorrect expression, but it
serves to acknowledge plainly our ignorance of the cause of each particular variation.6
Since Darwin's days, almost all corners of the living cell have been explored and our biological knowledge has expanded greatly. Through the vast library of data generated by new research in biology, we now have the answers to many questions of a biological nature that had puzzled Darwin. We may also have the answer to the cause of each particular variation, although we may not be aware of it (yet). That is not because it is hidden among billions of other books and hard to find. No, it is because of the Darwinian paradigm. The mechanism(s) that drive biological variation have been elucidated but are not yet recognized as such.
One
of the findings of the new biology was that the DNA of most (if not all) organisms contains jumping genetic elements. The
mainstream opinion is that these elements are the remnants of ancient invasions of RNA viruses. RNA viruses are a class of
viruses that use RNA molecule(s) for information storage. Some of them, such as influenza and HIV, pose an increasing
threat to human health. Are virus invasions responsible for all the beautiful intricate complexity of organic beings? Is a virus
a creator? Most likely it is not. Otherwise, why would we pour billions of dollars into research to fight off viruses?
Could it be that mainstream science is mistaken?
The RNA virus paradox
Here is one good reason for believing that mainstream science is indeed mistaken: the RNA virus paradox. It has been
proposed that these RNA viruses have a long evolutionary history, appearing with, or perhaps before, the first cellular life
forms.7 Molecular genetic analyses have demonstrated that genomes, including those of humans and primates, are riddled
with endogenous retroviruses (ERVs), which are currently explained as the remnants of ancient RNA virus-invasions. RNA
virus origin can be estimated using homologous genes found in both ERVs and modern RNA virus families. By using the best estimates for rates of evolutionary change (i.e. nucleotide substitution) and assuming an approximate molecular clock,8,9 the families of RNA viruses found today could only have appeared very recently, probably not more than about 50,000 years ago10 (a rough sketch of the arithmetic behind such estimates is given at the end of this section). These data imply that present-day RNA viruses may have originated much more recently than our own species. The implication of a recent origin of RNA viruses alongside the presence of genomic ERVs poses an apparent paradox that has to be resolved. I will argue that, in order to resolve the paradox, we should abandon the mainstream idea that ERVs are remnants of ancient RNA virus invasions.
Solving the RNA paradox can only be accomplished by asking questions. First,
we have to ask ourselves, 'What do scientists mean when they refer to genetic elements as endogenous retroviruses (ERVs)?' In addition, we have to ask, 'How do ERVs behave, and what, if any, are their functions?' ERVs have been extensively studied in microorganisms, such as baker's yeast (Saccharomyces cerevisiae) and the common gut

bacterium Escherichia coli. Most of our knowledge on the mechanisms of transposition of ERVs comes from those two
organisms. In yeast, the ERV known as Ty is flanked by long terminal repeats and specifies two genes, gag and pol, which
are similar to genes found in free operating RNA viruses. This is the main argument why scientists believe RNA viruses and
ERVs are evolutionarily closely related. The long terminal repeats enable the ERV to insert into the hosts DNA. The
transposition and integration is a stringently regulated process and seems to be target or site-specific. 11,12 During the
transposition of an ERV, the host's RNA polymerase II makes an RNA template, which is polyadenylated to become messenger RNA. The gag and pol mRNAs are translated and cleaved into several individual proteins. The gag gene specifies a polyprotein that is cleaved into three proteins, which form a capsid-like structure surrounding the ERV's RNA. We may ask here: why is a capsid involved? It should be noted that single-stranded RNA molecules are very sticky nucleotide polymers and the capsid may prevent the ERV from sticking in the wrong places. The capsid may also be required to direct the
ERV to the right spots in the genome. The pol polyprotein is cleaved into four enzymes: protease, reverse transcriptase,
RNase and integrase. Protease cleaves the polyproteins into the individual proteins and then the RNA and proteins are
packed into a retrovirus-like particle. Reverse transcriptase forms a single-stranded DNA molecule from the ERV RNA
template, whereas RNase removes the RNA. The DNA is then circularized and the complementary DNA strand is
synthesized to create a double-stranded, circular copy of the ERV, which is then integrated into a new site in the host's genomic DNA by the activity of integrase. This intricate mechanism for transposition of ERVs seems to be irreducibly complex
(and thus a sign of intelligent design) since all ERVs and RNA viruses use the same or similar genetic components.
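Here, as flagged earlier, is a rough sketch of the arithmetic behind such age estimates (in Python; the divergence value is a placeholder, and the rates simply span the range commonly quoted for fast-mutating RNA viruses):

# Simple molecular-clock arithmetic: time back to a common ancestor is
# (pairwise divergence) / (2 x substitution rate), since both lineages
# accumulate substitutions independently. All numbers are illustrative.

def divergence_time(divergence: float, rate_per_site_per_year: float) -> float:
    return divergence / (2 * rate_per_site_per_year)

divergence = 0.5   # hypothetical substitutions per site between two virus lineages
for rate in (1e-3, 1e-4, 1e-5):   # substitutions per site per year
    print(f"rate {rate:.0e}/site/year -> roughly {divergence_time(divergence, rate):,.0f} years")

Even at the slow end of that range the answer comes out in the tens of thousands of years, which is why published origin estimates for RNA virus families are so recent.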
Variation-inducing genetic elements (VIGEs).
What can the function, if any, of ERVs be? If we follow the mainstream opinion, ERVs integrated into the genomes a very
long time ago as viral infections. Currently, ERVs are not particularly helpful. They merely hop around in the genome as
selfish genetic elements that serve no function in particular. They mainly upset the genome. Long ago, however, RNA viruses are alleged to have significantly contributed to evolution by helping to shape the genome.
It's hard to imagine this story being true, and not only because of the RNA virus paradox. Modern viruses usually do not integrate into the DNA of germ-line cells; that is, the genes of an RNA virus don't usually become a part of the heritable material of the infected host. If we obey the uniformitarian principle, we are allowed to argue: what currently doesn't happen didn't happen a long
time ago, either. To answer the question raised above, we must start finding out more about some biological characteristics
of a less complicated jumping genetic element, the so-called insertion-sequence (IS) element. IS elements are DNA
transposons abundantly present in the genomes of bacteria. IS elements share an important characteristic with ERVs:
transposition. Genome shuffling takes place in bacteria so frequently that we can hardly speak of a specific gene order. The
shuffling of pre-existing genetic elements may unleash cryptic information instantly as the result of position effects. Shuffling
seems to be an important mechanism to generate variation. But what is the mechanism for genome shuffling? The answer
to this question comes unexpectedly from evolutionary experiments, in which genetic diversity (evolutionary change) was
determined between reproducing populations of E. coli. During the breeding experiment, which ran for two decades, it was
observed that the number and location of IS (insertion sequence) elements dramatically changed in evolving populations,
whereas point mutations were not abundant.13 After 10,000 generations of bacteria, the genomic changes were mostly due to
duplication and transposition of IS elements. A straightforward conclusion would thus be that jumping genetic elements,
such as the IS elements, were designed to deliberately generate variation, variation that might be useful to the organism. In
2004, Lenski, one of the co-authors of the studies, demonstrated that the IS elements indeed generate fitness-increasing
mutations.14 In E. coli bacteria, IS elements activate cryptic, or silent, catabolic operons: a set of genetic programs for food
digestion. It has been reported that IS element transposition overcomes reproductive stress situations by activating cryptic
operons, so that the organism can switch to another source of food. IS elements do so in a regulated manner, transposing at
a higher rate in starving cells than in growing cells. In at least one case, IS elements activated a cryptic operon during
starvation only if the substrate for that operon was present in the environment.15 It is clear that in Lenski's experiments, IS elements did not evolve overnight. Rather, the IS elements reside in the genome of the original strain. During the two
decades of breeding, the IS elements duplicated and jumped from location to location. There was ample opportunity to
shuffle genes and regulatory sequences, and plenty of time for the IS elements to integrate into genes or to simply redirect
regulatory patterns of gene expression. Microorganisms may thus induce variation simply through shuffling the order of
genes and putting old genes in new contexts: variation through position effects that can be inherited and propagated in time. It is hardly an exaggeration to state that jumping genetic elements specified by the bacterium's genome generated the new phenotypes. Transposition of IS elements is mostly characterized by local hopping, meaning that novel insertions are usually in the proximity of the previous insertion and may be a more-or-less random phenomenon; the site of integration isn't sequence-dependent. Bacteria have a restricted set of genes and they divide almost indefinitely. Therefore, sequence-dependent insertion and stringent regulation of transposition may not be required for IS-induced reshuffling of bacterial
genomes; in a population of billions of microorganisms all possible chromosomal rearrangements may occur due to
stochastic processes. In higher organisms the order of genes in the chromosomes is more important, but there is no
reason to exclude jumping genetic elements as a factor affecting the expression of genetic programs through position
effects. Transposable elements may therefore be a class of variation-inducing genetic elements (VIGEs) in higher
organisms. Indeed, ERVs, LINEs and SINEs resemble IS elements in bacteria in that they are able to transpose. In fact,
these elements may be responsible for a large part of the variability observed in higher organisms and may even be
responsible for adaptive phenotypes. The genomic transposition of VIGEs is not just a random process. As observed
for Ty elements in yeast, integration of all VIGEs may originally have been designed as site or sequence specific. It should
be noted that VIGEs might qualify as redundant genetic elements, of which the control over translocation may have
deteriorated over time.
VIGEs in humans
Mobile genetic elements make up a considerable part of the eukaryotic genome and have the ability to integrate into the
genome at a new site within their cell of origin. Mobile genetic elements of several classes make up more than one third of
the human genome. Human endogenous retroviruses (ERVs) are, as with yeast ERVs, first transcribed into RNA molecules
as if they were genuine coding genes. Each RNA is then transformed into a double stranded RNA-DNA hybrid through the
action of reverse transcriptase, an enzyme specified by the retrotransposon itself. The hybrid molecule is then inserted back
into the genome at an entirely different location. The result of this copy-paste mechanism is two identical copies at different
locations in the genome. More than 300,000 sequences that classify as ERVs have been found in the human genome,
which is about 8% of the entire human DNA.16
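To make the copy-paste mechanism described above concrete, here is a small illustrative sketch of my own (not from the article): it contrasts retroelement-style copy-paste transposition, in which the original copy stays in place and a new copy lands elsewhere, with cut-paste transposition, in which the element simply moves. The genome size, event count and random insertion sites are arbitrary assumptions for illustration only.

import random

def copy_paste(positions, genome_size):
    # Retroelement behaviour: all existing copies remain and a new copy is
    # inserted at a random site, so every event raises the copy number by one.
    positions.append(random.randrange(genome_size))
    return positions

def cut_paste(positions, genome_size):
    # IS-element-like behaviour (ignoring replicative transposition): the
    # element is excised and reinserted elsewhere, so the copy number is unchanged.
    idx = random.randrange(len(positions))
    positions[idx] = random.randrange(genome_size)
    return positions

genome_size = 1_000_000
retro = [random.randrange(genome_size)]     # one starting retroelement
dna_tp = [random.randrange(genome_size)]    # one starting cut-paste element

for _ in range(20):                          # twenty transposition events each
    copy_paste(retro, genome_size)
    cut_paste(dna_tp, genome_size)

print("copy-paste element copies after 20 events:", len(retro))    # 21
print("cut-paste element copies after 20 events:", len(dna_tp))    # still 1

Under these toy assumptions the copy-paste element accumulates one extra copy per event, which is why retroelement families can reach the hundreds of thousands of copies quoted in the text, whereas a purely cut-paste element does not multiply.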
Figure 1. Variation-inducing genetic elements (VIGEs) are found throughout all biological domains, ranging from bacteria to mammals. In yeast, insects and mammals we observe similar designs. (Homologous sequences are indicated by the same colour.)
Long terminal repeat (LTR) retrotransposons are transcribed into RNA, then reverse transcribed into an RNA-DNA hybrid and reinserted into the genome. LTRs and retroviruses are very similar in structure. Both contain gag and pol genes (figure 1),
which encode a viral particle coat (GAG), reverse transcriptase (RT), ribonuclease H (RH) and integrase (IN). These genes
provide proteins for the conversion of RNA into complementary DNA and facilitate insertion into the genome. Examples of
LTR retrotransposons are human endogenous retroviruses (HERVs). Unlike RNA retroviruses, LTR retrotransposons lack
envelope proteins that facilitate movements between cells. Non-LTR retrotransposons, such as long interspersed elements (LINEs), are long stretches (4,000–6,000 nucleotides) of reverse transcribed RNA molecules. LINEs have two open reading
frames: one encoding an endonuclease and reverse transcriptase, the other a nucleic acid binding protein (figure 1). There
are approximately 900,000 LINEs in the human genome, i.e. about 21% of the entire human DNA. LINEs are found in the
human genome in very high copy numbers (up to 250,000).17 Short interspersed elements (SINEs) constitute another class
of VIGEs that may use an RNA intermediate for transposition. SINEs do not specify their own reverse transcriptase and
therefore they are retroposons by definition. They may be mobilized for transposition by using the enzymatic activity of
LINEs. About one million SINEs make up another 11% of the human genome. They are found in all higher organisms,
including plants, insects and mammals. The most common SINEs in humans are Alu elements. Alu elements are usually
around 300 nucleotides long, and are made up of repeating units of only three nucleotides. Some Alu elements secondarily
acquired the genes necessary to hop around in the genome, probably through recombination with LINEs. Others simply
duplicate or delete by means of unequal crossovers during cell divisions. More than one million copies of Alu elements,
often interspersed with each other, are found in the human genome, mostly in the non-coding sections. Many Alu-like
elements, however, have been found in the introns of genes; others have been observed between genes in the part
responsible for gene regulation and still others are located within the coding part of genes. In this way SINEs affect the
expression of genes and induce variation. Alu elements are often mediators of unequal homologous recombinations and
duplications.18
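The copy numbers and genome fractions quoted above can be related by simple arithmetic. The following sketch is mine, not the article's; the genome size is an assumed round figure and the implied average element lengths are only back-of-envelope estimates.

# Relating quoted copy numbers to quoted genome fractions (assumed figures only).
GENOME_BP = 3.2e9  # assumed human genome size in base pairs

elements = {
    # name: (approximate copy number, approximate fraction of genome), as quoted above
    "ERVs":  (300_000, 0.08),
    "LINEs": (900_000, 0.21),
    "SINEs": (1_000_000, 0.11),
}

for name, (copies, fraction) in elements.items():
    avg_len = fraction * GENOME_BP / copies   # implied average element length
    print(f"{name:5s}: ~{copies:>9,} copies, ~{fraction:.0%} of genome, "
          f"implied average length ~{avg_len:,.0f} bp")

On these assumed figures, the implied average SINE length comes out near the roughly 300 nucleotides quoted for Alu elements, while the implied average LINE length falls far below the 4,000–6,000 nucleotides of a full-length element, consistent with many genomic LINE copies being truncated.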
Figure 2. Schematic view of the central role VIGEs may play to generate variation, adaptations and speciation events. Lower part: VIGEs may directly modulate the output of (morpho)genetic algorithms due to position effects. Upper part: VIGEs that are located on different chromosomes may be the result of speciation events, because their homologous sequences facilitate chromosomal translocations and other major karyotype rearrangements.
Repetitive triplet sequences (RTSs) present in the coding regions of proteins are a class of VIGEs that cannot actively transpose. RTSs are usually found as an intrinsic part of the coding region of proteins. For instance, RTSs can be formed by a tract of glycine (GGC), proline (CCG), or alanine (GCC). Usually RTSs form a loop in the messenger (m)RNA that provides a docking site for chaperone molecules or proteins involved in mRNA translation. RTSs may increase or decrease in length through slippery DNA polymerases during DNA replication.
Conclusions and outlook
Now that we have redefined ERVs as a specific class of VIGEs, which were
present in the genomes from the day they were created, it is not difficult to see
how RNA viruses came into being. RNA viruses have emerged from VIGEs.
ERVs, LINEs and SINEs are the genetic ancestors of RNA viruses. Darwinists are wrong in promoting ERVs as remnants of
invasions of RNA viruses; it is the other way around. In my opinion, this view is supported by several recent observations.
RNA viruses contain functional genetic elements that help them to reproduce like a molecular parasite. Usually, an RNA
virus contains only a handful of genes. Human Immunodeficiency virus (HIV), the agent that causes AIDS, contains only
eight or nine genes. Where did these genes come from? An RNA world? From space? The most parsimonious answer is:
the RNA viruses got their genes from their hosts. The Rous sarcoma virus (RSV), which has the ability to cause tumours, has only four genes: gag, pol, env and src. In addition, the virus is flanked by a set of repeat sequences that facilitate integration and promote replication. Gag, pol and env are genes commonly present in ERVs. The src gene of RSV is a modified host-derived src gene that normally functions as a tyrosine kinase, a molecular regulator that can be switched on and off in order
to control cell proliferation. In the virus, the regulator has been reduced to an on-switch only that induces uncontrolled cell
proliferation. The src gene is not necessary for the survival of RSV, and RSV particles can be isolated that have only
the gag, pol and env genes. These have perfectly normal life cycles, but do not cause tumours in their host. It is clear the
virus picked up the src gene from the host. Why wouldn't the whole vector be derived from the host? VIGEs may easily pick
up genes or parts thereof as the result of an accidental polymerase II read-through. This will increase the genetic content of
the VIGE because the gene located next to the VIGE will also be incorporated. An improper excision of VIGEs may also
include extra genetic information. Imagine for instance HERV-K, a well-known human-specific endogenous retrovirus,
transposing itself to a location in the genome where it sits next to the src gene. If in the next round of transposition a part of
the src gene was accidentally added to the genes of HERV-K, it has already transformed into a fully formed RSV (see figure
3). It can be demonstrated that most RNA viruses are built of genetic information directly related to that of their hosts.
Figure 3. RNA viruses originate from VIGEs through the uptake of host genes. In the controlled and regulated context of the host DNA, genes and VIGEs are harmless. A combination of a few genes integrated in VIGEs may start an uncontrolled replication of VIGEs. In this way, VIGEs may take up genes that serve to form the virus envelope (to wrap up the RNA molecule derived from the VIGE) and genes that enable them to leave and re-enter host cells. Once VIGEs become full-blown shuttle vectors between hosts, they act as virulent, devastating and uncontrolled replicators. Hence, harmless VIGEs may degenerate into molecular parasites in a similar way that normally harmless cells turn into tumours once they lose the power to control cell replication. VIGEs are the basis of RNA viruses, not the other way around. The scheme outlined here shows how the Rous sarcoma virus (RSV) may have formed from a VIGE that integrated the env gene and part of the src gene (a proto-oncogene: for details see text).
The outer membranes of influenza viruses, for instance, are built of
hemagglutinin and neuraminidase molecules. Neuraminidase is a protein that can also be found in the genomes of higher
host organisms, where it serves the function to modify glycopeptides and oligosaccharides. In humans, neuraminidase
deficiency leads to neurodegenerative lysosomal storage disorders: sialidosis and galactosialidosis.19 Even so-called orphan
genes, genes that are only found in viruses, can usually be found in the host genomes. Where? In VIGEs! To become a
shuttle-vector between organisms, all that is required is to have the right tools to penetrate and evade the defenses of the
host cell. HIV, for instance, acquired part of the gene of the host's defence system (the gp120 core) that binds to the human beta-chemokine receptor CCR5.20 These observations make it plausible that all RNA viruses have their origin in the genomes of living cells through recombination of the host's DNA elements (genes, promoters, enhancers). Every now and then such an
unfortunate recombination produces a molecular replicator: it is the birth of a new virus. Once the virus escapes the
genome and acquires a way to re-enter cells, it has become a fully formed infectious agent. It has long been known that
bacteria use genes acquired from bacteriophages (i.e. bacterial viruses that insert their DNA temporarily or even permanently into the genome of their host) to gain reproductive advantage in a particular environment. Indeed, work
reaching back decades has shown that prophage (the integrated virus) genes are responsible for producing the primary
toxins associated with diseases such as diphtheria, scarlet fever, food poisoning, botulism and cholera. Diseases are
secondary entropy-facilitated phenomena. Virologists usually explain the evolution of viruses as recombination: that is, a mixing of pre-existing viruses, a reshuffling and recombination of genes.21 In bacteria, viruses may therefore be recombined
from plasmids carrying survival genes and/or transposable genetic elements, such as IS elements.
Discussion
Where did all the big, small and intermediate noses come from? Why are people tall, short, fat or slim? What makes
morphogenetic programs explicit? The answer may be VIGEs. It may turn out that the created kinds were designed with
baranomes that had an ability to induce variation from within. This radical view implies that the baranome of man may have
been designed to contain only one morphogenetic algorithm for making a nose. But the program was implicit. The program
was designed in such way that a VIGE easily integrated into it, becoming a part of it, hence making the program explicit.
Most inheritable variation we observe within the human population may be due to VIGEs: elements that affect
morphogenetic and other programs of baranomes. It should be noted that a huge part of the genomic sequences are
redundant adaptors, spacers, duplicators, etc., which can be removed from the genome without major effects on reproductive success (fitness). In bacteria, VIGEs have been termed IS elements; in plants they are known as transposons;
and in animals, they are called ERVs, LINEs, SINEs, and microsatellites. What these elements are particularly good at is
inducing genomic variation. It is the copy number of VIGEs and their position in the genome that determine gene expression
and the phenotype of the organism. Therefore, these transposable and repetitive elements should be renamed after their
function: variation-inducing genetic elements. VIGEs explain the variations Darwin referred to as due to chance. I will address the details of a few specific classes of VIGEs, and argue why modern genomes are literally riddled with VIGEs, in a future article. With the realization that RNA viruses have emerged from VIGEs, the RNA virus paradox is solved. For many mainstream scientists this solution will be bothersome, because it means that VIGEs were frontloaded elements of the baranomes of created kinds, which implies a young age for their common ancestor and that all life is of recent origin.
The design of life: part 4 – variation-inducing genetic elements and their function
by Peter Borger
Endogenous retroviruses (ERVs) are claimed to be the selfish remnants of ancient RNA viruses that invaded the cells of
organisms millions of years ago and now merely free-ride the genome in order to be replicated. This selfish gene thinking
still dominates the public scene, but well-informed biologists know that the view among researchers is rapidly changing.
Increasingly, ancient RNA viruses and their remnants are being thought of as having played (and still playing) a significant role in
protein evolution, gene structure, and transcriptional regulation. As argued in part 3 of this series of articles, ERVs may be
the executors of genetic variation, and qualify as specifically designed variation-inducing genetic elements (VIGEs)
responsible for variation in higher organisms. VIGEs induce variation by duplication, transposition, and may even rearrange
chromosomes. This extraordinary claim requires extraordinary scientific support, which is present throughout this paper. In
addition, the VIGE hypothesis may be a framework to understand the origin of diseases and explain rapid speciation events
through facilitated chromosome swapping. The idea that mobile genetic elements are involved in creating variation is not
new. Barbara McClintock, who discovered the first mobile genetic elements in maize, was also the first to recognize the true
nature of such jumping genetic elements. In 1956, she suggested that transposons (as she coined them) function as
molecular switches that could help determine when nearby genes turn on and off. Her key insight was that all living systems
have mechanisms available to restructure and repair the chromosomes. When it was discovered that more than half of the
human genome consists of (remnants of) mobile elements, McClintock's ideas were revived and further developed by Roy
Britten and Eric Davidson.1 It is only recently that we have begun to understand the power of VIGEs (variation-inducing
genetic elements) as genetic regulators and switches. A team of investigators led by Haussler recently provided direct evidence that even when a short interspersed nucleotide element (SINE) lands at some distance from a gene, it can take on a regulatory role with powerful regulatory functions.2 Haussler and his colleagues then looked at a particular example: a
copy of the ultra-conserved element that is near a gene called Islet 1 (ISL1). ISL1 produces a protein that helps control the
growth and differentiation of motor neurons. In the laboratory of Edward Rubin at the University of California, Berkeley,
postdoctoral fellow Nadav Ahituv combined the human version of the LF-SINE sequence with a reporter gene that would
produce an easily recognizable protein if the LF-SINE were serving as its on-off switch. He then injected the resulting DNA
into the nuclei of fertilized mouse eggs. Eleven days later, he examined the mouse embryos to see whether and where the
reporter gene was switched on. Sure enough, the gene was active in the embryos' developing nervous systems, as would be expected if the LF-SINE copy were regulating the activity of ISL1.3 This excerpt shows that some functions of SINEs are easily uncovered because they directly affect the expression of a particular gene. However, most functions of SINEs may not be as easily detected as described above, because they can integrate in gene deserts (regions of the genome where the chromosomes are devoid of any recognizable protein-coding genes) or they may only subtly affect expression of
morphogenetic programs. Gene expression patterns largely determine how cells behave and determine the morphology of
organisms. VIGEs integrated in such genetic programs will change expression patterns of genes that will result in different
cellular behaviour and morphology. Whether the ultimate effect on the phenotype of the organism can be predicted,
however, remains to be established. This is largely due to the fact that we still do not know what morphogenetic algorithms
look like. Of course, biologists have argued that evolution and development are determined by homeobox (HOX) genes, but
HOX genes are merely executors of developmental (or morphogenetic) programs; they are not the programs themselves. In
another study by the same group, thousands of short identical DNA sequences that are scattered throughout the human
genome were analyzed. Many of those sequences were located in gene deserts, which are in fact so clogged with
regulatory DNA elements that they have recently been renamed regulatory jungles. But what do they regulate? The answer
could be morphogenesis. Most of the short DNA elements cluster near genes that play a decisive role during an organism's
first weeks after conception. The elements help to orchestrate an intricate choreography of when-and-where developmental
genes are switched on and off as the organism lays out its body plan. These elements may provide a sort of blueprint for
how to build the animal. The exact mechanism as to how such sequences may function as a plan to build an animal is not
entirely clear, but the DNA elements are particularly abundant near genes that help cells to stick together. That stickiness is
important in an organism's early life phase because these genes help cells to migrate to the right location and to form into
organs and tissues of the correct shape. The 10,402 short DNA sequences studied by Bejerano are derived from
transposable genetic elements: retrotransposons that duplicate themselves and hop around the genome. Apparently,
transposable genetic elements are not what they have been mistakenly thought to be: mess makers. Indeed, the view that
transposable elements are just bad stuff is rapidly changing. In an interview with Science Daily, Bejerano says:
We used to think they were mostly messing things up. Here is a case where they are actually useful.4
The genome is literally littered with thousands of transposable elements. The word is that when ancient retroviruses slipped
bits of their DNA into the primate genome millions of years ago, they successfully preserved their own genetic legacy.5 It is
hard to imagine that they all have functions, but their presence could certainly determine or fine-tune the output of nearby
genes. In this way they may create subtle, but novel, variation. Bejerano and Haussler's research has already identified a
handful of transposons that serve as regulatory elements, but it is not clear how common the phenomenon might be. The
2007 study showed that the phenomenon may be a general one:
Now we've shown that transposons may be a major vehicle for evolutionary novelty.4
The new findings indeed show that, in many cases, transposable elements function as regulators of gene output, but major
vehicles for evolution from microbe to man they are not. The transposition of jumping genetic elements may certainly affect
gene expression patterns, but it does not follow that they produce new genetic information. Considering the biological data,
it seems reasonable that transposable elements are present in the genome to deliberately induce biological variation.
Transposable elements thus qualify as variation-inducing genetic elements (VIGEs), and by leaving copies, they make sure
the new variation is heritable. The transposable elements present in regulatory jungles do not produce new biological
information, but they induce variation in the genetic algorithms and may underlie rapid adaptive radiation from uncommitted
pluripotent genomes. The regulatory jungles may provide an active reservoir of VIGEs that put existing genes in new
regulatory environments.
Regulated activity of VIGEs
The chromosome of the E. coli strain K12 includes three cryptic operons (linear genetic programs for metabolizing three alternative sugars): one for cellobiose, one for arbutin and one for salicin. The organization of those
operons is like a normal substrate-induced bacterial operon; but the operons themselves are abnormal in that they are
cryptic (silent) in wild-type strains. Even in the presence of alternative sugars the operons are not activated, which indicates
that these bacteria don't readily use alternative sugars. Unused cryptic operons are redundant genetic programs that are not observed by natural selection: 'As cryptic genes are not expressed to make any positive contribution to the fitness of the organism, it is expected that they would eventually be lost due to the accumulation of inactivating mutations. Cryptic genes would thus be expected to be rare in natural populations. This, however, is not the case. Over 90% of natural isolates of E. coli carry cryptic genes for the utilization of beta-glucoside sugars. These cryptic operons can all be activated by IS [insertion-sequence] elements, and when so activated allow E. coli to utilize beta-glucoside sugars as sole carbon and energy sources.'6 The excerpt shows that operons are kept inactive by repressors; that is, proteins that sit on the DNA of the
operon to ward off the nanomachines responsible for gene expression. Operons will only be active in bacteria that dont
have a functional gene coding for the repressors. Disrupting the repressor gene releases the cryptic programs. That's where
the VIGEs come in. The transposition and integration of an IS element into the silencer elements is the mutational event that
activates the cryptic operon. Usually, the lack of an appropriate carbon and energy source triggers transposition of IS
elements. The transposition of IS elements appears to be regulated by starvation, and the integration in the repressor gene
is not utterly random. For instance, position 472 in the ebgR gene in the ebg operon of E. coli is a hotspot for integration of
IS elements, but only under starvation conditions. VIGEs may thus accumulate and integrate at well-defined positions in the
genome; this indicates a site-specific mechanism. In the fruit fly, some non-LTR retrotransposons (retrotransposons lacking long terminal repeats) integrate at very specific sites, but others have been shown to integrate more or less at random. The specificity is
determined by endonucleases, enzymes that cut the DNA.7 Assuming VIGEs are part of a designed genome, we must
expect that their transposition and activity can be controlled and regulated. To avoid deleterious effects on the host and
retrotransposon, we may expect that the activity of VIGEs is regulated both by retrotransposon- and host-encoded factors.
Indeed, the mechanism of transposition seems to be dictated by the species in which the VIGEs operate. Recent research
has shown that in zebra fish the transposable element known as NLR integrant usually carries a few extra nucleotides at the
far end of the sequence, but it is not expressed in human cells.8 This observation would argue for the involvement of host
specific protein machinery in transposition: one more argument for the design origin of VIGEs. From the design perspective,
we may expect that the activity of VIGEs used to be a tightly controlled process. This is because the genomes in which they
operate also specify control factors: retroviral restriction factors. The restriction factors are proteins with the ability to bind to
retroviral capsid proteins and target them for degradation. Several restriction factors have been identified, including Fv1,
Trim5-alpha and Trim5-CypA.9 These factors share the common property of containing sequences that promote self-association: that is, they can assemble themselves. This fact, together with the observation that the restriction factors are
encoded by unrelated genes, is clear evidence of purposeful design. Retroviral restriction factors play an important role in
innate immunity against invading RNA viruses. For instance, Trim5-alpha binds directly to the incoming retroviral capsid core
and targets its premature disassembly or destruction.10 In addition, some integrated VIGEs show evolutionary-tree
deviations, indicating a sequence-specific integration/excision mechanism. For instance, Alu HS6 is present in human,
gorilla and orangutan, but not in chimpanzee (see figure 1). This highly peculiar observation prompted the investigators to
consider the possibility of the specific excision of this Alu element from the chimpanzee's genome.11 Precise excision implies
precise integration.

Figure 1. The Alu HS6 insertion sites in human, chimpanzee, gorilla, orangutan and owl monkey. Note the complete
absence in chimpanzee and owl monkey of any evidence for an extraction site. This suggests a highly specific mechanism
for integration and/or extraction. Alternatively, the sequences are a molecular falsification of the common descent of
primates.
Synthetic biologists at Johns Hopkins University have built, from scratch, a LINE1-based retrotransposon: a genetic element capable of jumping around in the mouse genome. The man-made retrotransposon was designed to be
a far more effective jumper than natural retrotransposons; indeed, it inserts itself into many more places in the
genome.12,13 Why don't all LINEs jump so effectively? The scientists that constructed the synthetic LINE changed the
regulator sites used in transposition. Native LINE1 elements are relatively inactive in mice when they are introduced into the
mouse genome as transgenes. The synthetic LINE1-based element, ORFeus, contains two synonymously recoded ORFs
relative to mouse L1 and is far more active. This indicates that the integration and excision of native LINE1 elements are
controlled and regulated by an as yet unknown mechanism. VIGEs qualify as redundant genetic elements that can simply be erased from the genome without fitness effects. As long as VIGEs do not upset critical genomic functions and do not affect the reproductive success of the carrier, they are selectively neutral. Therefore, not only VIGEs, but also the mechanisms by
which they integrate, may readily wither and degrade due to accumulation of debilitating mutations. The control over
integration and activity we observe today may be less stringent compared to how it was originally designed. The originally
fine-tuned control for excision and transposition may have deteriorated over time and what is left today are more or less free
moving elements that may predominantly cause havoc when they integrate in the wrong location. It is easy to understand
how, for instance, endonucleases became less specific through mutations. This view may also explain why VIGEs are often
found associated with heritable diseases. As long as VIGE activity and integration do not significantly affect the fitness of the
organisms in which they operate, they are free to copy and paste themselves along the genome. Indeed, inactivating VIGEs
have been observed in genes not immediately required for reproduction. The GULO gene, which qualifies as a redundant
gene in populations with high vitamin C intake, has been hit several times by VIGEs and this may have contributed to
pseudogenization of GULO in humans.14 Over time, VIGEs may have become increasingly detrimental to the host's genome.
That is because information that regulates the integration and activity of VIGEs is subject to mutation. Some VIGEs have
been associated with susceptibility or resistance to diseases. In asthma, increased susceptibility appears to be associated
with microsatellite DNA instability (a term used for copy-number differences in repetitive DNA sequences).15 Psoriasis is also
associated with HERV expression.16 It should be clear that deregulated and uncontrolled VIGEs cause havoc when they
integrate into and disrupt functional parts of genes. From the vantage point of design, VIGE transpositions would make sense
during meiosis, which is the process leading to the formation of gametes. Controlled activity of VIGEs during meiosis may be
responsible for variation that can be passed on to the offspring. Although information is scant, it has been shown in
fungi17 and plants18 that VIGEs become active during meiosis and even have mechanisms to silence deleterious bystander effects, such as deleterious point mutations.17 This shows that transposable elements function to induce genetic variation,
providing the flexibility for populations to adapt successfully to environmental challenges. In chimpanzees, for instance, it
has been documented that large blocks of compound repetitive DNA, which have demonstrated retrotransposon function,
induce and prolong the bouquet stage in meiotic prophase and affect chiasma formation.19 This may seem like a mouthful,
but it merely means that these repetitive genetic elements facilitate sister-chromosome exchanges when reproductive cells
(sperm and eggs) are being generated. Mammalian VIGEs, in particular Alu sequences, have the ability to induce genetic
recombination and duplications and contribute to chromosomal rearrangements, and they may account for the major part of
variation observed in humans. The methylation pattern of Alu sequences possibly determines their activity and/or serves as a marker for genomic imprinting or for maintaining differences between male and female meiosis.21
VIGEs and the human family
When short triplet repeat units are present in the coding part of a gene, they may even have functional consequences.
There is evidence that repeat units in the Runx2 gene formed the bent snout of the Bull Terrier in a few
generations.22 Likewise, in mice and dogs, having five or six toes is determined by a repeat unit in the Alx4 gene.23 These
novel phenotypes can form almost overnight, i.e. within one generation. Repetitive coding triplets that can be gained or lost
provide another mechanism to generate (instant) variation. It should be noted that this mechanism leads to reversible
genetic change, because a lost repetitive unit can readily be added back through duplication of a preexisting one, and vice
versa. Therefore, the RTS mechanism may explain seasonal changes in beak size observed for Galapagos finches,
adaptive phenotypes in Australian snakes and the evolution of the cichlid varieties in African lakes. If we accept the idea of
deliberately designed VIGEs, we may also expect these elements to have played an important role in determining the
variety of human phenotypes. In other words, human races are the result of the activity of VIGEs! Biologists used to think
that our genomes all had the same basic structure: the same number of genes, in roughly the same order, with a few minor
differences in the sequence of DNA bases. Now, technologies that compare whole human genomes are revealing that this
picture is far from complete. Michael Wigler at Cold Spring Harbor Laboratory provided the first evidence that human
genomes are strikingly variable: his group showed marked differences in the copy number of protein-coding
genes.24 Apparently, some people have more copies of certain genes, and large-scale copy number polymorphisms (CNPs) of about 100 kilobases and greater contribute substantially to genomic variation between individuals.25 In addition, people not only carry different copy numbers of parts of their DNA, they also have varying numbers of deletions, insertions and other
major rearrangements in their genomes. In 2005, Evan Eichler of the University of Washington reported 297 locations in the
genome where different individuals have different forms of major structural variations. At these spots some carry a major
deletion, for example, or an extra hundred bases of DNA. Differences between individuals were found in the protein-coding
genes; structural differences were also observed between individual genomes.26 From these and other studies we now know
that every one of us shares only about 99% of our DNA with all the other people on Earth.27 The difference is due to
repetitive sequences that easily amplify or delete parts from the genome. With this, we have discovered another class of
VIGEs. The highly variable repetitive sequences also explain why genetic screening methods are so reliable nowadays: they
detect copy-number differences and hence are capable of discriminating between the DNA of a father and his son. Yes,
fathers and sons apparently differ at the level of VIGEs! A comparison of Asian and Caucasian people showed that 25% of
more than 4,000 protein-coding genes had significantly different expression patterns. Some gene expression levels differed
as much as twofold.28 The researchers commented that these findings support the idea that there are genetically
determined characteristics that tend to be clustered in different ethnic groups. Some genes are simply not expressed at all,
or are simply not present in the genomes. For instance, the gene UGT2B17 is deleted more often in Asians than in
Caucasians, and has a mean expression level that was more than 20 times greater in Caucasians relative to Asians. How
can such big differences be explained? Of course, single nucleotide polymorphisms (SNP; i.e. point mutations) in regulatory
sequences could affect gene regulation patterns. It is not clear, however, whether the SNPs themselves might be regulating
gene expression or whether they travel together with other DNA that is the actual regulator. We may also expect VIGEs to be
responsible for differences observed between human races.
VIGEs and chromosome 2
Human chromosome 2 looks as if it is the product of the fusion of two chromosomes that we find in chimpanzees as
chromosomes 12 and 13. Therefore, some Darwinists take human chromosome 2 as the ultimate evidence for common
descent with chimpanzees. We know that a fusion of two ancestral chromosomes would have produced human
chromosome 2 with two centromeres. Currently, human chromosome 2 has only one centromere, so there must be
molecular evidence for remnants of the other. In 1982, Yunis and Prakash studied the putative fusion site of chromosome 2
with a technique known as fluorescence in situ hybridization (FISH) and reported signs of the expected centromere.29 In
1991, another study also reported signs of the centromere.30 In 2005, after the complete sequencing of human chromosome
2, we would have expected full proof of the ancestral centromere. However, even after intense scrutiny there are still
only signs of the centromere. If signs of the centromere were already observed in 1982, why can it not be proved in the 2005
sequence analysis? Apparently, the site mutated at such high speed it is no longer recognizable as a centromere: 'During the formation of human chromosome 2, one of the two centromeres became inactivated (2q21, which corresponds to the centromere of chromosome 13) and the centromeric structure quickly deteriorated.'31 Why would it quickly deteriorate? Why
would this region deteriorate faster than neutral? Close scrutiny in 2005 showed that the region interpreted as the ancestral centromere is built from sequences present in 10 additional human chromosomes (1, 7, 9, 10, 13, 14, 15,
18, 21 and 22) as well as a variety of other genetic repeat elements that were already in place before the fusion
occurred.31 The sequences interpreted as the ancient centromere are merely repetitive sequences and may actually qualify as (deregulated) VIGEs. The chimpanzee and human genome projects demonstrated that the fusion did not result in loss of
protein coding genes. Instead, the human locus contains approximately 150,000 additional base pairs not found in
chimpanzee chromosomes 12 and 13 (now also known as 2A and 2B). This is remarkable: why would a fusion result
in more DNA? We would rather have expected the opposite: the fusion would have left the fused product with less DNA,
since loss of DNA sequences is easily explained. The fact that humans have a unique 150 kb intervening sequence
indicates it may have been deliberately planned (or designed) into the human genome. It could also be proposed that the
150 kb DNA sequence demarcating the fusion site may have served as a particular kind of VIGE, an adaptor sequence for
bringing the chromosomes together and facilitating the fusion in humans. Another remarkable observation is that in the fusion region we find an inactivated cobalamin synthetase (CBWD) gene.32 Cobalamin synthetase is a protein that, in its
active form, has the ability to synthesize vitamin B12 (a crucial cofactor in the biosynthesis of nucleotides, the building
blocks of DNA and RNA molecules). Deficiency during pregnancy and/or early childhood results in severe neurological
defects because of impaired development of the brain. The Darwinian assumption is that the cobalamin synthetase gene
was donated by bacteria a long time ago and afterwards it was inactivated. Nowadays, humans must rely on
microorganisms in the colon as well as dietary intake (a substantial part coming from meat and milk products) for their
vitamin B12 supply. It is also noteworthy that humans have several copies of inactivated cobalamin-synthetase-like genes
at a number of locations in the genome, whereas chimpanzees only have one inactivated cobalamin synthetase gene. That
the fusion must have occurred after man and chimp split is evident from the fact that the fusion is unique to humans: 'Because the fused chromosome is unique to humans and is fixed, the fusion must have occurred after the human-chimpanzee split, but before modern humans spread around the world, that is, between 6 and 1 million years ago.'32 The molecular analyses show we are more unique than we ever thought we were, and this is in complete accordance with creation. Apparently, the fusion of two human chromosomes may have been the result of an intricate rearrangement or activation of repetitive genetic elements after the Fall (as part of, or executors of, the curse following the Fall) that inactivated the cobalamin synthetase gene. The inactivation of the gene may have reduced people's longevity in a similar way as the inactivation of the GULO gene, which is crucial to vitamin C synthesis.14 Understanding the molecular properties of human chromosome 2 is no longer problematic if we simply accept that humans, like the great apes, were originally created with 48 chromosomes. Two of them fused to form chromosome 2 when mankind went through a severe bottleneck.33 And, as argued above, the fusion was mediated by VIGEs (see figure 2).
Figure 2. Putative mechanism for how human chromosome 2 formed through the fusion of two ancestral chromosomes, p2 and q2, which are similar to chimpanzee chromosomes 12 and 13. Like the great apes, originally the human baranome may have contained 48 chromosomes. A) Independent transposition events may have led to the integration of a relatively small variation-inducing genetic element (VIGE). B) Extended duplication events of the VIGE may have resulted in rapid expansion of the region in both p2 and q2, preparing it to become an adapter sequence required for fusion. C) The expanded homologous regions align and facilitate the fusion of the chromosomes. The fusion region (2q21) and other parts of the modern human genome still show the remnants of this catastrophic event that occurred only in humans: the cobalamin synthetase gene was inactivated and several inactive copies, which are not found in the chimpanzee, are scattered throughout the genome. Speculative note: Before the great flood, and probably shortly after, a balancing dynamic of both 48 and 46 chromosomes may have been present in the human family. This may explain the two extreme cranial morphologies present in the human fossil record. The Homo erectus/Neandertal humans may have had a karyotype comprising 48 chromosomes (non-fused p2 and q2), whereas the other humans had 46 (fused p2 and q2).
The upside-down world
The p53 protein is a mammalian transcription factor that functions as the main switch controlling whether cells divide or go
into apoptosis (programmed cell death, which is sometimes required for severely damaged cells that may become tumours).
Scientists have long wondered how p53 gained the ability to turn on and off more than 1200 genes related to cell division,
DNA repair and programmed cell death. Without the p53 control system organisms would not function: all life would have died as bulky tumours. Biologists at the University of California now claim that ancient retroviruses helped p53 to become an
important master gene regulator in primates.34 An RNA virus invaded the genome of our common ancestor, jumped into
hundreds of new positions throughout the human genome and spread numerous copies of repetitive DNA sequences that
allowed p53 to regulate many other genes, the team contends. Studies such as these prompted Darwinians to change their
minds about jumping genetic elements. In other words, a randomly hopping ERV provided the human genome with carefully
regulated decision-making machinery. The idea is beyond reasonable belief. Darwinists tend to mix things up. What really
happened in the human genome is a polymerase II read-through of a VIGE that was next to a gene that already contained
a binding site for p53. Or maybe the VIGE was excised improperly, taking a bit of a flanking gene containing the p53 binding
site. Next, the modified VIGE amplified, transposed, amplified and so on. That explains this family of transposons. A similar
story can be told for the syncytin gene, which encodes a protein of the mammalian placenta that helps the fertilized egg to
become embedded in the uterus wall. Since syncytin has also been found on a transposable element,35 mammals are alleged to have obtained the gene from an RNA virus that infected a mammalian ancestor millions of years ago. It is more likely, however, that syncytin was captured by a VIGE. In bacteria it is often observed that genes that convey a specific
advantageous character are transmitted via plasmids. Plasmids often contain genes for alternative metabolic routes or
genes that provide resistance to antibiotics, and they replicate independently from the host's genome. Plasmids easily
shuttle between microorganisms via a DNA uptake-process known as transformation (or horizontal gene transfer). The
uptake of plasmids is regulated and controlled, and is DNA sequence dependent. The result of DNA transformations is rapid
adaptation to, for instance, antibiotics. Likewise, viruses replicate independently from the genomic DNA, leaving many
copies and easily transferring from one organism to another. Viruses are not plasmids, although some viruses may have a
similar function in higher organisms as do plasmids in bacteria: they may be able to aid in rapid adaptations to changing
environments. It has been observed that a virus can indeed transfer an adaptive phenotype. The virus present in the fungus Curvularia protuberata can induce heat resistance in tropical panic grass (Dichanthelium lanuginosum), allowing
both organisms to grow at high soil temperatures in Yellowstone National Park. This shows that viruses still provide
strategies for rapid adaptation. 'Fungal isolates cured of the virus are unable to confer heat tolerance, but heat tolerance is
restored after the virus is reintroduced. The virus-infected fungus confers heat tolerance not only to its native monocot host
but also to a eudicot host, which suggests that the underlying mechanism involves pathways conserved between these two
groups of plants.'36 In fruit flies, wing pigmentation depends on a gene known as yellow. The gene exists in the genome of all
individual fruit flies, but in some it is not active. By analysing the genetic origin of the spots on fruit fly wings, researchers
have discovered a molecular mechanism that explains how new patterns of pigmentation can emerge. The secret appears
to be specific genetic elements that orchestrate where proteins are used in the construction of an insect's body. The
segments do not code for proteins, but rather regulate the nearby gene that specifies the pigmentation. As such, these
regulatory DNA segments qualify as VIGEs. The researchers transferred the regulatory DNA segment from a spotted
species (Drosophila biarmipes) into another species not expressing the spot (D. melanogaster), and attached the regulatory
region to a gene for a fluorescent protein. They found that the fluorescent gene was expressed in the spot-free species in
exactly the same patterns as the yellow gene is expressed in the spotted species. By comparing several spotted and
spot-free species, the scientists established that mutation of a regulatory DNA segment led to the expression of the spotted
trait. They discovered that in the species with spotted wings this regulatory segment has multiple binding sites for a protein
that then activates the yellow gene. Spotless species do not have multiple binding sites.37 The multiplicity of regulatory DNA
segments may argue for an amplification mechanism or targeted integration of the regulatory sequence. That explains why
the same pattern of pigmentation can emerge independently in distantly related species (Darwin's analogous variation). The
observed shuttle function of viruses leads me to pose an intriguing question: Were endogenous retroviruses originally
designed to serve as shuttle-vectors to deliver messages from the soma to the germ-line? If yes, then it would put
Lamarckian evolution in an entirely new perspective.
Discussion
The findings of the new biology demonstrate that mainstream scientists are wrong regarding the idea that transposable
elements are the selfish remnants of ancient invasions by RNA viruses. Instead, RNA viruses originate from transposable
elements that were designed as variation-inducing genetic elements (VIGEs). Created kinds were deliberately frontloaded
with several types of controlled and regulated transposable elements to allow them to rapidly invade and adapt to all corners
and crevices of the earth. Due to the redundant character of VIGEs, their controlled regulation may have readily deteriorated
and some of them may now merely cause havoc. The VIGE hypothesis provides elegant explanations for several biological
observations that may otherwise be difficult to interpret within the creationist framework, including the origin of diseases
(RNA viruses) and chromosome rearrangements. The VIGE hypothesis may be a framework for extended creationist
research programs. Some intriguing questions can already be raised. Were VIGEs intentionally designed to cause diseases? No, they were not. It is conceivable that the transposition and integration of VIGEs is not entirely random. The
transposition of VIGEs may have been originally present in the baranome as controlled and regulated elements and
activated upon intrinsic or external triggers. To induce variation in offspring, triggers for the transposition of VIGEs could be
released during meiosis, when the reproductive cells are being produced. The emergence of RNA viruses from VIGEs may
be a result of the Fall, when we were cut off from the regenerating healing power of the designer. Why are some VIGEs located at the exact same positions in primates and humans? Each original baranome must have had a limited number
of VIGEs, some of which we still find at the same location in distinct species. In distinct baranomes, VIGEs may have been located at the exact same positions (the T-zero location), which then explains why some VIGEs, such as ERVs, can be found in the same location in, for instance, primates and humans. In addition, sequence-dependent integration of VIGEs may also contribute to this observation. How could Bdelloid rotifers, a group of strictly asexually reproducing aquatic invertebrates, rapidly form novel species? Asexual production of progeny, as observed in Bdelloids, is found in over one
half of all eukaryotic phyla and is likely to contribute to adaptive changes, as suggested by recent evidence from both
animals and plants.38 The Bdelloids may have been derived from pluripotent baranomes containing numerous DNA
transposons and retro elements, including active LTR retrotransposons containing gag, pol, and env-like open reading frames.39 These elements are able to reshuffle the genomes and facilitate instant variation and speciation. Do we also observe remnants of DNA viruses in the mammalian genomes? If not, this supports my idea that RNA viruses emerged from VIGEs, and implies DNA viruses have a different origin; probably, as with the Mimivirus,40 they originated from degenerated bacteria. Why was a class of VIGEs designed with information for protein capsids? The capsid may have
been acquired from the hosts genome or it may have been designed to prevent the RNA molecules from attaching
themselves to, or finding, integration sites. A very speculative idea may be that these VIGEs were designed to shuttle
information from the soma to the germ-line. One thing is clear, however: creation researchers have loads of work to do.
INFORMATION THEORY
Refuting Evolution – Chapter 9
A handbook for students, parents, and teachers countering the latest arguments for evolution
by Jonathan Sarfati, Ph.D., F.M.
Is the design explanation legitimate?
First published in Refuting Evolution, Chapter 9
As pointed out in previous chapters, Teaching about Evolution frequently dismisses creation as unscientific and religious.
Creationists frequently point out that creation occurred in the past, so cannot be directly observed by experimental science
and that the same is true of large-scale evolution. But evolution or creation might conceivably have left some effects that
can be observed. This chapter discusses the criteria that are used in everyday life to determine whether something has
been designed, and applies them to the living world. The final section discusses whether design is a legitimate explanation
for life's complexity or whether naturalistic causes should be invoked a priori.
How do we detect design?
People detect intelligent design all the time. For example, if we find arrowheads on a desert island, we can assume they
were made by someone, even if we cannot see the designer.1 There is an obvious difference between writing by an intelligent person, e.g. Shakespeare's plays, and a random letter sequence like WDLMNLTDTJBKWIRZREZLMQCOP.2 There is also an obvious difference between Shakespeare and a repetitive sequence like ABCDABCDABCD. The latter is an example of order, which must be distinguished from Shakespeare, which is an example of specified complexity. We can also tell the difference between messages written in sand and the results of wave and wind action. The carved heads of the U.S. presidents on Mt Rushmore are clearly different from erosional features. Again, this is specified complexity. Erosion produces either irregular shapes or highly ordered shapes like sand dunes, but not presidents' heads or writing. Another example is the SETI program (Search for Extraterrestrial Intelligence). This would be pointless if there were no way of determining whether a certain type of signal from outer space would be proof of an intelligent sender. The criterion is, again, a signal with a high level of specified complexity; this would prove that there was an intelligent sender, even if we had no other idea of the sender's nature. But neither a random nor a repetitive sequence would be proof. Natural processes produce radio noise from outer space, while pulsars produce regular signals. Actually, pulsars were first mistaken for signals by people eager to believe in extraterrestrials, but this is because they mistook order for complexity. So evolutionists (as nearly all SETI proponents are) are prepared to use high specified complexity as proof of intelligence when it suits their ideology. This shows once more how one's biases and assumptions affect one's interpretations of any data.3
Life fits the design criterion
Life is also characterized by high specified complexity. The leading evolutionary origin-of-life researcher, Leslie Orgel,
confirmed this: 'Living things are distinguished by their specified complexity. Crystals such as granite fail to qualify as living because they lack complexity; mixtures of random polymers fail to qualify because they lack specificity.'4 Unfortunately, a materialist like Orgel here refuses to make the connection between specified complexity and design, even though this is the precise criterion of design. To elaborate, a crystal is a repetitive arrangement of atoms, so is ordered. Such ordered
structures usually have the lowest energy, so will form spontaneously at low enough temperatures. And the information of
the crystals is already present in their building blocks; for example, directional forces between atoms. But proteins and DNA,
the most important large molecules of life, are not ordered (in the sense of repetitive), but have high specified complexity.
Without specification external to the system, i.e., the programmed machinery of living things or the intelligent direction of an
organic chemist, there is no natural tendency to form such complex specified arrangements at all. When their building blocks
are combined (and even this requires special conditions5), a random sequence is the result. The difference between a crystal and DNA is like the difference between a book containing nothing but ABCD repeated and a book of Shakespeare. However, this doesn't stop many evolutionists (ignorant of Orgel's distinction) claiming that crystals prove that specified complexity can arise naturally; they merely prove that order can arise naturally, which no creationist contests.6
Information
The design criterion may also be described in terms of information. Specified complexity means high information content. In
formal terms, the information content of any arrangement is the size, in bits, of the shortest algorithm (program) required to
generate that arrangement. A random sequence could be formed by a short program:
1. Print any letter at random.
2. Return to step 1.
A repetitive sequence could be made by the program:
1. Print ABCD.
2. Return to step 1.
But to print the plays of Shakespeare, a program would need to be large enough to print every letter in the right place.7
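One crude way to see the order/complexity distinction numerically is the following sketch of my own (not part of the original chapter), which uses general-purpose compression as a rough stand-in for the shortest-program measure: a compressor finds short descriptions for ordered text but not for random text. The sample strings and the use of zlib are assumptions for illustration only.

import random
import string
import zlib

def compressed_size(text: str) -> int:
    # zlib compressed size in bytes: a crude upper bound on the length of a
    # description that regenerates the text.
    return len(zlib.compress(text.encode("utf-8"), 9))

english = (
    "All the world's a stage, and all the men and women merely players; "
    "they have their exits and their entrances, and one man in his time "
    "plays many parts, his acts being seven ages."
)
n = len(english)
repetitive = ("ABCD" * n)[:n]   # ordered: very short description suffices
random_seq = "".join(random.choice(string.ascii_uppercase + " ") for _ in range(n))  # complex but unspecified

for label, s in [("repetitive", repetitive), ("english", english), ("random", random_seq)]:
    print(f"{label:10s} length={len(s):4d} compressed={compressed_size(s):4d} bytes")

Typically the repetitive string compresses to a few dozen bytes, the random string hardly compresses at all, and ordinary English falls in between (the gap is modest for such short samples). Note that compression only separates order from complexity; it cannot by itself detect specification, since a random string also resists compression, which is the further criterion discussed here.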
The information content of living things is far greater than that of Shakespeare's writings. The atheist Richard Dawkins says: "[T]here is enough information capacity in a single human cell to store the Encyclopaedia Britannica, all 30 volumes of it, three or four times over."8 If it's unreasonable to believe that an encyclopedia could have originated without intelligence, then it's just as unreasonable to believe that life could have originated without intelligence.

Even more amazingly, living things have by far the most compact information storage/retrieval system known. This stands to reason if a microscopic cell stores as much information as several sets of the Encyclopaedia Britannica. To illustrate further, the amount of information that could be stored in a pinhead's volume of DNA is staggering. It is the equivalent information content of a pile of paperback books 500 times as tall as the distance from the earth to the moon, each with a different, yet specific, content.9
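As a rough calculation of our own (not part of the cited sources): each DNA base pair can take one of four values and so carries at most two bits, giving for the human genome

$$3\times10^{9}\ \text{base pairs} \times 2\ \text{bits per base pair} = 6\times10^{9}\ \text{bits} \approx 7.5\times10^{8}\ \text{bytes} \approx 750\ \text{MB},$$

all of it packed into a cell nucleus only a few micrometres across, which is why the storage density of DNA is so often said to dwarf any man-made medium.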
Machinery in living things
On a practical level, information specifies the many parts needed to make machines work. Often, the removal of one part can disrupt the whole machine, so there is a minimum number of parts without which the machine will not work. Biochemist Michael Behe, in his book Darwin's Black Box, calls this minimum number "irreducible complexity".10 He gives the example of a very simple machine: a mousetrap. This would not work without a platform, holding bar, spring, hammer, and catch, all in the right place. If you remove just one part, it won't work at all; you cannot reduce its complexity without destroying its function entirely.

The thrust of Behe's book is that many structures in living organisms show irreducible complexity, far in excess of a mousetrap or indeed any man-made machine. For example, he shows that even the simplest form of vision in any living creature requires a dazzling array of chemicals in the right places, as well as a system to transmit and process the information. The blood-clotting mechanism also has many different chemicals working together, so we won't bleed to death from minor cuts, nor suffer from clotting of the entire system.
A simple cell?
Many people don't realize that even the simplest cell is fantastically complex; even the simplest self-reproducing organism contains encyclopedic quantities of complex, specific information. Mycoplasma genitalium has the smallest known genome of any free-living organism, containing 482 genes comprising 580,000 base pairs11 (compare 3 billion base pairs in humans, as Teaching about Evolution states on page 42). Of course, these genes are functional only in the presence of pre-existing translational and replicating machinery, a cell membrane, etc. But Mycoplasma can only survive by parasitizing other more complex organisms, which provide many of the nutrients it cannot manufacture for itself. So evolutionists must postulate a more complex first living organism with even more genes.

More recently, Eugene Koonin and others tried to calculate the bare minimum requirement for a living cell, and came up with a result of 256 genes. But they were doubtful whether such a hypothetical bug could survive, because such an organism could barely repair DNA damage, could no longer fine-tune the ability of its remaining genes, would lack the ability to digest complex compounds, and would need a comprehensive supply of organic nutrients in its environment.12

Molecular biologist Michael Denton, writing as a non-creationist skeptic of Darwinian
evolution, explains what is involved:

"Perhaps in no other area of modern biology is the challenge posed by the extreme complexity and ingenuity of biological adaptations more apparent than in the fascinating new molecular world of the cell … To grasp the reality of life as it has been revealed by molecular biology, we must magnify a cell a thousand million times until it is twenty kilometers in diameter and resembles a giant airship large enough to cover a great city like London or New York. What we would then see would be an object of unparalleled complexity and adaptive design. On the surface of the cell we would see millions of openings, like the port holes of a vast space ship, opening and closing to allow a continual stream of materials to flow in and out. If we were to enter one of these openings we would find ourselves in a world of supreme technology and bewildering complexity.

"Is it really credible that random processes could have constructed a reality, the smallest element of which (a functional protein or gene) is complex beyond our own creative capacities, a reality which is the very antithesis of chance, which excels in every sense anything produced by the intelligence of man? Alongside the level of ingenuity and complexity exhibited by the molecular machinery of life, even our most advanced artifacts appear clumsy …

"It would be an illusion to think that what we are aware of at present is any more than a fraction of the full extent of biological design. In practically every field of fundamental biological research ever-increasing levels of design and complexity are being revealed at an ever-accelerating rate."13

For natural selection (differential reproduction) to start, there must be at least one self-reproducing entity. But as shown above, the production of even the simplest cell is beyond the reach of undirected chemical reactions. So it's not surprising that Teaching about Evolution omits any discussion of the origin of life, as can easily be seen from the index. However, this is part of the General Theory of Evolution (molecules to man),14 and is often called chemical evolution. Indeed, the origin of the first self-reproducing system is recognized by many scientists as an unsolved problem for evolution, and thus evidence for a designer.15 The chemical hurdles that non-living matter must overcome to form life are insurmountable, as shown by many creationist writers.16
Can mutations generate information?
Even if we grant evolutionists the first cell, the problem of increasing the total information content remains. To go from the first cell to a human means finding a way to generate enormous amounts of information: billions of base pairs' (letters') worth. This includes the recipes to build eyes, nerves, skin, bones, muscles, blood, etc. In the section on variation and evolution, we showed that evolution relies on copying errors and natural selection to generate the required new information. However, the examples of contemporary evolution presented by Teaching about Evolution are all losses of information.

This is confirmed by the biophysicist Dr Lee Spetner, who taught information and communication theory at Johns Hopkins University: "In this chapter I'll bring several examples of evolution, [i.e., instances alleged to be examples of evolution] particularly mutations, and show that information is not increased. But in all the reading I've done in the life-sciences literature, I've never found a mutation that added information. All point mutations that have been studied on the molecular level turn out to reduce the genetic information and not to increase it. The NDT [neo-Darwinian theory] is supposed to explain how the information of life has been built up by evolution. The essential biological difference between a human and a bacterium is in the information they contain. All other biological differences follow from that. The human genome has much more information than does the bacterial genome. Information cannot be built up by mutations that lose it. A business can't make money by losing it a little at a time."17

This is not to say that no mutation is beneficial, that is, it helps the organism to survive. But as pointed out in chapter 2, even increased antibiotic and pesticide resistance is usually the result of loss of information, or sometimes a transfer of information, never the result of new information. Other beneficial mutations include wingless beetles on small desert islands: if beetles lose their wings and so can't fly, the wind is less likely to blow them out to sea.18 Obviously, this has nothing to do with the origins of flight in the first place, which is what evolution is supposed to be about. Insect flight requires complicated movements to generate the patterns of vortices needed for lift; it took a sophisticated robot to simulate the motion.19
Would any evidence convince evolutionists?
The famous British evolutionist (and Communist) J.B.S. Haldane claimed in 1949 that evolution could never produce "various mechanisms, such as the wheel and magnet, which would be useless till fairly perfect".20 Therefore such machines in organisms would, in his opinion, prove evolution false. That is, evolution meets one criterion Teaching about Evolution claims is necessary for science: that there are tests that could conceivably prove it was wrong (the falsifiability criterion of the eminent philosopher of science, Karl Popper).

Recent discoveries have shown that there are indeed "wheels" in living organisms. This includes the rotary motor that drives the flagellum of a bacterium, and the vital enzyme that makes ATP, the energy currency of life.21 These molecular motors have indeed fulfilled one of Haldane's criteria. Also, turtles,22 monarch butterflies,23 and bacteria24 that use magnetic sensors for navigation seem to fulfil Haldane's other criterion.

I wonder whether Haldane would have had a change of heart if he had been alive to see these discoveries. Most evolutionists rule out intelligent design a priori, so the evidence, overwhelming as it is, would probably have no effect.
Other marvels of design
The genetic information in the DNA cannot be translated except with many different enzymes, which are themselves encoded. So the code cannot be translated except via products of translation, a vicious circle that ties evolutionary origin-of-life theories in knots.
These include double-sieve enzymes to make sure the right amino acid is linked to the right tRNA. One sieve rejects amino acids too large, while the other rejects those too small.25
The genetic code that's almost universal to life on earth is about the best possible for protecting against errors.26 [See also DNA: marvellous messages or mostly mess?]
The genetic code also has vital editing machinery that is itself encoded in the DNA. This shows that the system was fully functional from the beginning, another vicious circle for evolutionists. [See also Self-replicating enzymes?]
Yet another vicious circle, and there are many more, is that the enzymes that make the amino acid histidine themselves contain histidine.
The complex compound eyes of some types of trilobites (extinct and supposedly primitive invertebrates) were amazingly designed. They comprised tubes that each pointed to a different spot on the horizon, and had special lenses that focused light from any distance. Some trilobites had a sophisticated lens design comprising a layer of calcite on top of a layer of chitin, materials with precisely the right refractive indices, and a wavy boundary between them of a precise mathematical shape.27 The Designer of these eyes is a Master Physicist, who applied what we now know as the physical laws of Fermat's principle of least time, Snell's law of refraction, Abbe's sine law and birefringent optics.
Lobster eyes are unique in being modeled on a perfect square with precise geometrical relationships of the units. NASA X-ray telescopes copied this design.28
The amazing sonar system of dolphins was discussed in chapter 5. Many bats also have an exquisitely designed sonar system. The echolocation of fishing bats is able to detect a minnow's fin, as fine as a human hair, extending only 2 mm above the water surface. This fine detection is possible because bats can distinguish ultrasound echoes very close together. Man-made sonar can distinguish echoes 12 millionths of a second apart, although with a lot of work this can be cut to 6 to 8 millionths of a second. But bats relatively easily distinguish ultrasound echoes only 2 to 3 millionths of a second apart, according to researcher James Simmons of Brown University. This means they can distinguish objects just 3/10ths of a millimeter apart, about the width of a pen line on paper (a rough check of this figure is sketched after this list).29
The neural system of a leech uses trigonometric calculations to work out which muscles to move and by how much.30
From my own specialist field of vibrational spectroscopy: there is good evidence that our chemical-detecting sense (smell) works on the same quantum-mechanical principles.31
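As a rough cross-check of the bat figures quoted above (our own back-of-envelope arithmetic, assuming sound travels at about 343 m/s in air): an echo-delay difference of two millionths of a second corresponds, after halving for the out-and-back path, to a range difference of

$$\Delta d \approx \frac{v\,\Delta t}{2} = \frac{343\ \text{m/s} \times 2\times10^{-6}\ \text{s}}{2} \approx 0.34\ \text{mm},$$

which agrees with the figure of about three-tenths of a millimetre attributed to Simmons.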
Why should design be unscientific?
The real reason for rejecting the creation explanation is the commitment to naturalism. As shown in chapter 1, evolutionists have turned science into a materialistic game, and creation/design is excluded by their self-serving rules.32 Therefore, although Teaching about Evolution dismisses creation science as unscientific, this appears to be derived more from the rules of the game than from any evidence.

Even some anti-creationist philosophers of science have strongly criticized the evolutionary scientific and legal establishment over these word games. They rightly point out that we should be more interested in whether creation is true or false than whether it meets some self-serving criteria for science.33 Many of these word games are self-contradictory, so one must wonder whether their main purpose is to exclude creation at any cost, rather than for logical reasons. For example, Teaching about Evolution claims on page 55:

"The ideas of creation science derive from the conviction that an intelligent designer created the universe, including humans and other living things, all at once in the relatively recent past. However, scientists from many fields have examined these ideas and have found them to be scientifically insupportable. For example, evidence for a very young earth is incompatible with many different methods of establishing the age of rocks. Furthermore, because the basic proposals of creation science are not subject to test and verification, these ideas do not meet the criteria for science."

The Teaching about Evolution definition of creation science is almost right, although creationists following creationist assumptions would claim that different things were created on different days. However, Teaching about Evolution claims that the ideas of creation science have been examined and found unsupportable, then it claims that the basic proposals of creation science are not subject to test and verification. So how could its proposals have been examined (tested!) if they are not subject to test? Of course, it is not true that science has proved the earth to be billions of years old; see chapter 8.

The historian and philosopher of science Stephen Meyer concluded:

"We have not yet encountered any good in-principle reason to exclude design from science. Design seems just as scientific (or unscientific) as its evolutionary competitors … An openness to empirical arguments for design is therefore a necessary condition of a fully rational historical biology. A rational historical biology must not only address the question, 'Which materialistic or naturalistic evolutionary scenario provides the most adequate explanation of biological complexity?' but also the question 'Does a strictly materialistic evolutionary scenario or one involving intelligent agency or some other theory best explain the origin of biological complexity, given all relevant evidence?' To insist otherwise is to insist that materialism holds a metaphysically privileged position. Since there seems no reason to concede that assumption, I see no reason to concede that origins theories must be strictly naturalistic."34
Scientific laws of information and their implicationspart 1
by Werner Gitt
The grand theory of atheistic evolution posits that matter and energy alone have
given rise to all things, including biological systems. To hold true, this theory must
attribute the existence of all information ultimately to the interaction of matter and
energy without reference to an intelligent or conscious source. All biological
systems depend upon information storage, transfer and interpretation for their
operation. Thus the primary phenomenon that the theory of evolution must account
for is the origin of biological information. In this article it is argued that fundamental
laws of information can be deduced from observations of the nature of information.
These fundamental laws exclude the possibility that information, including biological
information, can arise purely from matter and energy without reference to an
intelligent agent. As such, these laws show that the grand theory of evolution cannot
in principle account for the most fundamental biological phenomenon. In addition,
the laws here presented give positive ground for attributing the origin of biological
information to the conscious, wilful action of a designer. The far-reaching
implications of these laws are discussed.
Figure 1. The five levels of information. To fully characterise the concept of information, five aspects must be considered: statistics, syntax, semantics, pragmatics and apobetics. Information is represented (that is, formulated, transmitted, stored) as a language. From a stipulated alphabet, the individual symbols are assembled into words (code). From these words (each word having been assigned a meaning), sentences are formed according to the firmly defined rules of grammar (syntax). These sentences are the bearers of semantic information. Furthermore, the action intended/carried out (pragmatics) and the desired/achieved goal (apobetics) belong of necessity to the concept of information. All our observations confirm that each of the five levels is always pertinent for the sender as well as the receiver.

In the communication age information has become fundamental to everyday life. However, there is no binding definition of information that is universally agreed upon by practitioners of engineering, information science, biology, linguistics or philosophy. There have been repeated attempts to grapple with the concept of information. The most sweeping formulation was recently put forward by a philosopher: "The entire universe is information."1 Here we will set out in a new direction, by seeking a definition of information with which it is possible to formulate laws of nature. Because information itself is non-material,2 this would be the first time that a law of nature (scientific law) has been formulated for such a mental entity. We will first establish a universal definition for information; then state the laws themselves; and, finally, we will draw eight comprehensive conclusions.
What is a law of nature?
If statements about the observable world can be consistently and repeatedly confirmed to be universally true, we refer to
them as laws of nature. Laws of nature describe events, phenomena and occurrences that consistently and repeatedly take
place. They are thus universally valid laws. They can be formulated for material entities in physics and chemistry (e.g.
energy, momentum, electrical current, chemical reactions). Due to their explanatory power, laws of nature enjoy the highest
level of confidence in science. The following attributes exhibited by laws of nature are especially significant:
Laws of nature know no exceptions. This sentence is perhaps the most important one for our purposes. If we are dealing with a real (not merely supposed) natural law, then it cannot be circumvented or brought down. A law of nature is thus universally valid, and unchanging. Its hallmark is its immutability. A law of nature can, in principle, be refuted: a single contrary example would end its status as a natural law.
Laws of nature are unchanging in time.
Laws of nature can tell us whether a process being contemplated is even possible or not. This is a particularly important
application of the laws of nature.
Laws of nature exist prior to, and independent of, their discovery and formulation. They can be identified through research
and then precisely formulated. Hypotheses, theories or models are fundamentally different. They are invented by people, not
merely formulated by them. In the case of the laws of nature for physical entities, it is often, but not always,3 possible to find
a mathematical formulation in addition to a verbal one. In the case of the laws for non-material entities presented here, the
current state of knowledge permits only verbal formulations. Nevertheless, these can be expressed just as strongly, and are
just as binding, as all others.
Laws of nature can always be successfully applied to unknown situations. Only thus was the journey to the moon, for
example, possible.
When we talk of the laws of nature, we usually mean the laws of physics (e.g. the second law of thermodynamics, the law of gravity, the law of magnetism, the law of nuclear interaction) and the laws of chemistry (e.g. Le Chatelier's Principle of least restraint). All these laws relate exclusively to matter. But to claim that our world can be described solely in terms of material quantities is to fail to acknowledge the limits of one's perception. Unfortunately many scientists follow this philosophy of materialism (e.g. Dawkins, Küppers, Eigen4), remaining within this self-imposed boundary of insight. But our world also includes non-material concepts such as information, will and consciousness. This article (described more comprehensively in ref. 1) attempts, for the first time, also to formulate laws of nature for non-material quantities. The same scientific procedures used for identifying laws of nature are also used for identifying laws governing non-material entities. Additionally, these laws exhibit the same attributes as listed above for the laws of nature. Therefore they fulfil the same conditions as the laws of nature for material quantities and consequently possess a similar power of inference. Alex Williams describes this concept as "a revolutionary new understanding of information".5 In an in-depth personal discussion with Dr Bob Compton (Idaho, U.S.A.), he proposed naming the laws of nature on information the Scientific Laws of Information (SLI), in order to distinguish them from the physical laws. This suggestion is well worth taking seriously, since it takes account of the shortcomings of the materialistic view. I have therefore decided to use the term here.
What is information?
Information is not a property of matter!
The American mathematician Norbert Wiener made the oft-cited statement: "Information is information, neither matter nor energy."6 With this he acknowledged a very significant thing: information is not a material entity. Let me clarify this important property of information with an example. Imagine a sandy stretch of beach. With my finger I write a number of sentences in the sand. The content of the information can be understood. Now I erase the information by smoothing out the sand. Then I write other sentences in the sand. In doing so I am using the same matter as before to display this information. Despite this erasing and rewriting, displaying and destroying varying amounts of information, the mass of the sand did not alter at any time. The information itself is thus massless. A similar thought experiment involving the hard drive of a computer quickly leads to the same conclusion.

Norbert Wiener has told us what information is not; the question of what information really is will be answered in this article. Because information is a non-material entity, its origin is likewise not explicable by material processes. What causes information to come into existence at all? What is the initiating factor? What causes us to write a letter, a postcard, a note of congratulations, a diary entry or a file note? The most important prerequisite for the construction of information is our own will, or that of the person who assigned the task to us. Information always depends upon the will of a sender who issues the information. Information is not constant; it can be deliberately increased and can be distorted or destroyed (e.g. through disturbances in transmission).
In summary: Information arises only through will (intention and purpose).
A definition of universal information
Technical terms used in science are sometimes also used in everyday language (e.g. energy, information). However, if one wants to formulate laws of nature, then the entities to which they apply must be unambiguous and clear cut. So one always needs to define such entities very precisely. In scientific usage, the meaning of a term is in most cases considerably more narrowly stated than its range of meaning in everyday usage (i.e. it is a subset of the everyday meaning). In this way, a definition does more than just assign a meaning; it also acts to contain or restrict that meaning. A good natural-law definition is one that enables us to exclude all those domains (realms) in which laws of nature are not applicable. The more clearly one can establish the domain of definition, the more precise (and furthermore certain) the conclusions which can be drawn.

Example (energy): In everyday language we use the word energy in a wide range of meanings and situations. If someone does something with great diligence, persistence and focused intensity, we might say he "applies his whole energy" to the task. But the same word is used in physics to refer to a natural law, the law of energy. In such a context, it becomes necessary to substantially narrow the range of meaning. Thus physics defines energy as the capacity to do work, which is force × distance.7 An additional degree of precision is added by specifying that the force must be calculated in the direction of the distance. With this, one has come to an unambiguous definition and has simultaneously left behind all other meanings in common usage.
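For reference, the physicist's definition can be written compactly (a standard textbook formula, added here for illustration): the work done by a constant force F acting over a displacement d is

$$W = \vec{F}\cdot\vec{d} = F\,d\cos\theta,$$

where θ is the angle between the force and the displacement; the cosine factor expresses exactly the requirement that the force be reckoned in the direction of the distance.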
The same must now be done for the concept of information. We have to say, very clearly, what information is in our natural-law sense. We need criteria in order to be able unequivocally to determine whether or not an unknown system belongs within the domain of our definition. The following definition permits a secure allocation in all cases: information is always present when all the following five hierarchical levels are observed in a system: statistics, syntax, semantics, pragmatics and apobetics. If this applies to a system in question, then we can be certain that the system falls within the domain of our definition of information. It therefore follows that for this system all four laws of nature about information will apply.
The five levels of universal information (figure 1)
Statistics. In considering a book, a computer program or the genome of a human being, we can ask the following questions: How many letters, numbers and words does the entire text consist of? How many individual letters of the alphabet (e.g. a, b, c … z for the Roman alphabet, or G, C, A and T for the DNA alphabet) are utilized? What is the frequency of occurrence of certain letters and words? To answer such questions it is irrelevant whether the text contains anything meaningful, is pure nonsense, or is just a randomly ordered sequence of symbols or words. Such investigations do not concern themselves with the content; they involve purely statistical aspects. All of this belongs to the first and thus bottom level of information: the level of statistics. The statistics level can be seen as the bridge between the material and the non-material world. (This is the level on which Claude E. Shannon developed his well-known mathematical concept of information.8)
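These purely statistical questions are easily made precise. The short Python sketch below (our own illustration; the sample string is a placeholder) counts symbol frequencies and evaluates Shannon's measure for a sequence without any reference to what, if anything, the sequence means, which is exactly the sense in which the statistics level ignores content.

import math
from collections import Counter

def shannon_bits_per_symbol(text):
    # Frequency of each symbol (works equally for a-z or for G, C, A, T).
    counts = Counter(text)
    total = len(text)
    # Shannon's measure: H = -sum(p_i * log2(p_i)), in bits per symbol.
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

sample = "GATTACAGATTACA"  # placeholder sequence; its meaning, if any, is irrelevant here
print(Counter(sample))                   # raw symbol counts
print(shannon_bits_per_symbol(sample))   # average information per symbol, in bits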
Figure 2. The first five verses of Genesis 1 written in a special code.
Syntax. If we look at a text in any particular language, we see that only certain combinations of letters form permissible words of that particular language. This is determined by a pre-existing, wilful convention. All other conceivable combinations do not belong to that language's vocabulary. Syntax encompasses all of the structural characteristics of the way information is represented. This second level involves only the symbol system itself (the code) and the rules by which symbols and chains of symbols are combined (grammar, vocabulary). This is independent of any particular interpretation of the code.

Semantics. Sequences of symbols and syntactic rules form the necessary pre-conditions for the representation of information. But the critical issue concerning information transmission is not the particular code chosen, nor the size, number or form of the letters, nor even the method of transmission. It is, rather, the semantics (Greek semantikos = significant meaning), i.e. the message it contains: the proposition, the sense, the meaning. Information itself is never the actual object or act, neither is it a relationship (event or idea); the encoded symbols merely represent that which is discussed. Symbols of extremely different nature play a substitutionary role with regard to the reality or a system of thought. Information is always an abstract representation of something quite different. For example, the symbols in today's newspaper represent an event that happened yesterday; this event is not contemporaneous; moreover, it might have happened in another country and is not at all present where and when the information is transmitted. The genetic words in a DNA molecule represent the specific amino acids that will be used at a later stage for synthesis of protein molecules. The symbols of figure 2 represent what happened on day 1.
Pragmatics. Information invites action. In this context it is irrelevant whether the receiver of information acts in the manner desired by the sender of the information, or reacts in the opposite way, or doesn't do anything at all. Every transmission of information is nevertheless associated with the expectation, from the side of the sender, of generating a particular result or effect on the receiver. Even the shortest advertising slogan for a washing powder is intended to result in the receiver carrying out the action of purchasing this particular brand in preference to others. We have thus reached a completely new level at which information operates, which we call pragmatics (Greek pragma = action, doing). The sender is also involved in action to further his desired outcome (more sales/profit), e.g. designing the best message (semantics) and transmitting it as widely as possible in newspapers, TV, etc.

Apobetics. We have already recognized that for any given information the sender is pursuing a goal. We have now reached the last and highest level at which information operates: namely, apobetics (the aspect of information concerned with the goal, the result itself). In linguistic analogy to the previous descriptions the author has here introduced the term apobetics (from the Greek apobeinon = result, consequence). The outcome on the receiver's side is predicated upon the goal demanded/desired by the sender, that is, the plan or conception. The apobetics aspect of information is the most important of the five levels because it concerns the question of the outcome intended by the sender.

In his outstanding articles "Inheritance of biological information",5 Alex Williams has explained this five-level concept by applying it to biological information. Using the last four of the five levels, we developed an unambiguous definition of information: namely, an encoded, symbolically represented message conveying expected action and intended purpose. We term any entity meeting the requirements of this definition "universal information" (UI).
Scientific laws of information (SLI)
In the following we will describe the four most important laws of nature about information.9
SLI-1
A material entity cannot generate a non-material entity10
In our common experience we observe that an apple tree bears apples, a pear tree yields pears, and a thistle brings forth
thistle seeds. Similarly, horses give birth to foals, cows to calves and women to human babies. Likewise, we can observe
that something which is itself solely material never creates anything non-material. The universally observable finding of SLI-1 can now be couched in somewhat more specialized form, arriving at SLI-2.
SLI-2
Universal information is a non-material fundamental entity
The materialistic worldview has widely infiltrated the natural sciences such that it has become the ruling paradigm. However,
this is an unjustified dogma. The reality in which we live is divisible into two fundamentally distinguishable realms: namely,
the material and the non-material. Matter involves mass, which is weighable in a gravitational field. In contrast, all non-material entities (e.g. information, consciousness, intelligence and will) are massless and thus have zero weight. Information
is always based on an idea; it is thus also massless and does not arise from physical or chemical processes. Information is
also not correlated with matter in the same way as energy, momentum or electricity is. However, information is stored,
transmitted and expressed through matter and energy.
The distinction between material and non-material entities
Necessary Condition (NC): That a non-material entity must be massless (NC: m = 0) is indeed a necessary condition, but it is not sufficient to assign it as non-material. To be precise, the sufficient condition must also be met.
Sufficient Condition (SC): An observed entity can be judged to be non-material if it has no physical or chemical correlation with matter. This is
always the case if the following four conditions are met:
SC1: The entity has no physical or chemical interaction with matter.
SC2: The entity is not a property of matter.
SC3: The entity does not originate in pure matter.
SC4: The entity is not correlated with matter.
Photons are massless particles, and they provide a good contrast to the sufficient conditions because they do interact with matter and can originate from, and be correlated with, matter. Information always depends on an idea; it is massless and does not originate from a physical or chemical process.11 The necessary condition (NC: m = 0) and all four sufficient conditions (SC1 to SC4) are fulfilled, and therefore universal information is a non-material entity. The fact that it requires matter for storage and transportation does not turn it into matter. Thus we can state: Universal Information is a non-material entity because it fulfils both the necessary and the sufficient conditions:
it is massless; and,
it is neither physically nor chemically correlated with matter.
Occasionally it is claimed that information is a physical (and thereby a material) entity. But as presented above, information is clearly a non-material entity.

There is another very powerful
justification for stating that information cannot be a physical quantity. The SI System of units has seven base units: mass,
length, electric current, temperature, amount of substance, luminous intensity and time. All physical quantities can be
expressed in terms of one of these base units (e.g. area = length × length) or by a combination (by multiplication or division) of several base units (e.g. momentum = mass × length / time). This is not possible in the case of information, and therefore
information is not a physical magnitude!
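The contrast can be shown with ordinary dimensional analysis (standard physics, added here for clarity). Derived quantities reduce to products of powers of the base units, for example

$$[\text{area}] = \text{m}^2, \qquad [\text{momentum}] = \text{kg}\,\text{m}\,\text{s}^{-1}, \qquad [\text{energy}] = \text{kg}\,\text{m}^2\,\text{s}^{-2},$$

but no combination of kilograms, metres, seconds, amperes, kelvins, moles and candelas expresses the meaning carried by a message.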
SLI-3
Universal information cannot be created by statistical processes
The grand theory of evolution would gain some empirical support if it could be demonstrated, in a real experiment, that
information could arise from matter left to itself without the addition of intelligence. Despite the most intensive worldwide
efforts this has never been observed. To date, evolutionary theoreticians have only been able to offer computer simulations
that depend upon principles of design and the operation of pre-determined information. These simulations do not
correspond to reality because the theoreticians smuggle their own information into the simulations.
SLI-4
Universal information can only be produced by an intelligent sender
The question here is: What is an intelligent sender? Several attributes are required to define an intelligent sender.
Definition D1: An intelligent sender as mentioned in SLI-4
is conscious
has a will of its own12
is creative
thinks autonomously
acts purposefully
SLI-4 is a very general law from which several more specific laws may be derived. We know the Maxwell equations from
physics. They describe, in a brilliant generalization, the relationship between changing electric and magnetic fields. But for
most practical applications these equations are far too complex and cumbersome and for this reason we use more specific
formulations, such as Ohm's Law, Coulomb's Law or the induction law. Similarly, in the following section we will present four
more specific formulations of SLI-4 (SLI-4a to 4d) that are easier to use for our practical conclusions.
SLI-4a
Every code is based upon a mutual agreement between sender and receiver
The essential characteristic of a code symbol
(character) is that it was at one point in time freely defined. The set of symbols so created represents all allowed symbols
(by definition). They are structured in such a way as to fulfil, as well as possible, their designated purpose (e.g. a script for
the blind such as Braille must be sufficiently palpable; musical symbols must be able to describe the duration and pitch of
the notes; chemical symbols must be able to designate all the elements). An observed signal may give the impression that it
is composed of symbols, but if it can be shown that the signal is a physical or chemical property of the system then the
fundamental free mutual agreement attribute is missing and the signal is not a symbol according to our definition.13
SLI-4b
There is no new universal information without an intelligent sender
The process of the formation of new information (as opposed to simply copied information) always depends upon intelligence and free will. A sequence of characters is selected from an available, freely defined set of symbols such that the resulting string of characters represents (all five levels of) information. Since this cannot be achieved by a random process, there must always be an intelligent sender. One important aspect of this is the application of will, so that we may also say: information cannot be created without a will.
SLI-4c
Every information transmission chain can be traced back to an intelligent sender14
It is useful to distinguish here between the original and the intermediate sender. We mean by the original sender the author of the information, and he must always be an individual equipped with intelligence and a will. If, after the original sender, there follows a machine-aided chain consisting of several links, the last link in the chain might be mistaken for the originator of the message. Since this link is only apparently the sender, we call this the intermediate sender (but it is not the original one!).

The original sender is often not visible: in many cases the author of the information is not, or is no longer, visible. It is not in contradiction to the requirement of observability when the author of historical documents is no longer visible; in such a case he was, however, observable at one time. Sometimes the information received has been carried via several intermediate links. Here, too, there must have been an intelligent author at the beginning of the chain. Take the example of a car radio: we receive audible information from the loudspeakers, but these are not the actual source; neither is the transmission tower that also belongs to the transmission chain. An author (an intelligent originator) who created the information is at the head of the chain. In general we can say that there is an intelligent author at the beginning of every information transmission chain.

The actual (intermediate) sender may not be an individual: we could gain the impression that, in systems with machine-aided intermediate links, the last observed member is the sender:
The user of a car auto-wash can only trace the wash program back to the computer, but the computer is only the intermediate sender; the original sender (the programmer) is nowhere to be seen.
The internet-surfer sees all kinds of information on his screen, but his home computer is not the original sender; rather, someone, perhaps at the other end of the world, thought out the information and put it on the internet.
It is by no means different in the case of the DNA molecule. The genetic information is read off a material substrate,
but this substrate is not the original sender; rather, it is only the intermediate sender.

It may seem obvious that the last member of the chain is the sender because it seems to be the only discernible possibility. But in a system with machine-aided intermediate links it is never the case that the last member is the original sender (the author of the information); it is an intermediate sender. This intermediate sender may not be an individual, but rather only part of a machine that was created by an intelligence. Individuals can pass on information they have received and in so doing act as intermediate senders. However, they are in actuality only intermediate senders if they do not modify the information. If an intermediary changes the information, he may then be considered the original sender of a new piece of information.

Even in the special case where the information was not transmitted via intermediaries, the author may remain invisible. We find in Egyptian tombs or on obelisks numerous hieroglyphic texts, but the authors are nowhere to be found. No one would conclude that there had been no author.
SLI-4d
Attributing meaning to a set of symbols is an intellectual process requiring intelligence
We have now defined the five levels (statistics, syntax, semantics, pragmatics and apobetics) at which universal information operates. Using SLI-4d we can make the following general observation: these five aspects are relevant for both the sender and the receiver.

Origin of information: SLI-4d describes our experience of how any information comes into being. Firstly, we draw on a set of symbols (characters) that have been defined according to SLI-4a. Then we use one symbol after another from the set to create units of information (e.g. words, sentences). This is not a random process, but requires the application of intelligence. The sender has knowledge of the language he is using and he knows which symbols he needs in order to create his intended meaning. Furthermore, the connection between any given symbol and its meaning is not originally determined by laws of physics or energy. For example, there is nothing physical about the three letters d, o, g that necessarily caused them to be associated with man's much-loved pet. The fact that there are other words for "dog" in other languages demonstrates that the association between a word and its meaning is mental rather than physical/energetic. In other words, the original generation of information is an intellectual process.

Finally, we make three remarks that have fundamental
significance:
Remark R1: Technical and biological machines can store, transmit, decode and translate information without understanding the meaning and purpose.
Remark R2: Information is the non-material basis for all technological systems and for all biological systems.
There are numerous systems that do not possess their own intelligence but nevertheless can transfer or store information or steer processes. Some such systems are inanimate (e.g. networked computers, process controls in a chemical factory, automatic production lines, car auto-wash, robots); others are animate (e.g. cell processes controlled by information, the bee waggle dance).

It is important to recognize that biological information differs from humanly generated information in three essential ways:
In living systems we find the highest known information density.15
The programs in living systems obviously exhibit an extremely high degree of sophistication. No scientist can explain the program that produces an insect that looks like a withered leaf. No biologist understands the secret of an orchid blossom that is formed and coloured like a female wasp and smells like one, too. We are able to think, feel, desire, believe and hope. We can handle a complex thing such as language, but we are aeons away from understanding the information-controlled processes that develop the brain in the embryo. Biological information displays a sophistication that is unparalleled in human information.
No matter how ingenious human inventions and programs may be, it is always possible for others to understand the underlying ideas. For example, during World War II, the English succeeded, after considerable effort, in understanding completely the German Enigma coding machine which had fallen into their hands. From then on it was possible to decode German radio messages. However, most of the ingenious ideas and programs we find in living organisms are hardly, or at best only partly, understood by us at all. To make an exact replica is impossible.

Remark R3: The storage and transmission of information requires a material medium.
Imagine a piece of information written on a blackboard. Now wipe the board with a duster. The information has vanished, even though all the particles of chalk are still present. The chalk in this case was the necessary material medium, but the information was represented by the particular arrangement of the particles. And this arrangement did not come about by chance; it had a mental origin. The same information could have been stored/transmitted in smoke signals through the arrangement of puffs of smoke, or in a computer's memory through magnetized domains. One could even line up an array of massive rocks into a Morse code pattern. So, clearly, the amount or type of matter upon which the information resides is not the issue. Even though information requires a material substrate for storage/transmission, information is not a property of matter. In the same way, the information in living things resides on the DNA molecule. But it is no more an inherent property of the physics and chemistry of DNA than the blackboard's message was an intrinsic property of chalk.
Conclusion
All these four laws of nature about information have arisen from observations in the real world. None of them has been
falsified by way of an observable process or experiment.

The grand theory of atheistic evolution must attribute the origin of all
information ultimately to the interaction of matter and energy, without reference to an intelligent or conscious source. A
central claim of atheistic evolution must therefore be that the macro-evolutionary processes that generate biological
information are fundamentally different from all other known information-generating processes. However, the natural laws
described here apply equally in animate and inanimate systems and demonstrate this claim to be both false and absurd.

Implications of the scientific laws of informationpart 2

by Werner Gitt
In the past there were so-called perpetual motion experts. These were inventors and tinkerers who wanted to build a
machine that would run continuously without the supply of energy. The discovery of the law of conservation of energy (a law
of nature) brought all efforts to solve this challenge to a halt, because a perpetuum mobile is an impossible machine. Such a
machine will never be built, as the laws of nature make it impossible. Evolution could only occur if the possibility existed that
information could arise by itself out of matter. Those who believe that evolution is a plausible concept believe in a
perpetuum mobile of information. If there were laws of nature that preclude a perpetuum mobile of this kind, the theory of
evolution would be disproved. Such laws of nature actually exist, and I have presented these at many universities
throughout the world. The concept of this theory of information is explained in the first article (part 1) in this issue. There I
enumerated four scientific laws of information arising from observations in the real world. None of them has been falsified by
way of an observable process or experiment. In this article, eight far-reaching conclusions will be drawn.
Eight comprehensive conclusions
Having firmly established the domain of our definition of information in part 1, and familiarized ourselves with the laws of
nature about information derived from experience, known as scientific laws of information (SLI; see figure 1), we can now
zero in on effectively applying them. Hereafter the term information will be used when referring to universal information.
There are eight very far-reaching conclusions that answer fundamental questions. All scientific thought and practice reaches
a limit beyond which science is inherently unable to take us. This situation is no exception. But some of our questions
involve matters beyond this limiting boundary and so to successfully transcend it we need a higher source of knowledge. We
will proceed in the following sequential manner:
Set out the (briefly formulated) conclusion itself.
SLI-1: A material entity cannot generate a non-material entity.
SLI-2: Universal information is a non-material fundamental entity.
SLI-3: Universal information cannot be created by statistical processes.
SLI-4: Universal information can only be produced by an intelligent sender.
4a: Every code is based upon a mutual agreement between sender and receiver.
4b: There is no new universal information without an intelligent sender.
4c: Every information transmission chain can be traced back to an intelligent sender.
4d: Attributing meaning to a set of symbols is an intellectual process requiring intelligence.
Figure 1. The four most important laws of nature about information, known as scientific laws of information (SLI).
1. A Designer exists; refutation of atheism
Because it can be established that all forms of life contain a code (DNA, RNA), as well as all of the other levels of
information, we are within the domain of our definition of information.
We can therefore conclude that:
There must be an intelligent sender! [Applying SLI-4]
Basis for this conclusion
Because there has never been a process in the material world, demonstrable through observation or experiment, in which information has arisen without prior intelligence, this must also be valid for all the information present in living things. Furthermore, what we do observe about information, namely that it intrinsically depends upon an original act of intelligence to construct it, as defined by SLI-4d, excludes the possibility of information coming from non-intelligence. Thus SLI-4b requires here, too, an intelligent author who wrote the programs. Conclusion 1 is therefore also a refutation of atheism.
The top of figure 2 outlines the realm that is, in principle, inaccessible to natural science; namely: Who is the message sender? To answer that the sender cannot exist because the methods of human science (the scientific boundary) cannot perceive him both misapplies science and is untenable according to the laws of information. The requirement that there must be a personal sender exercising his own free will cannot be relinquished.
2. There is only one designer, who is all-knowing and eternal
The information encoded in DNA far exceeds all our current technologies. Hence, no human being could possibly qualify as
the sender, who must therefore be sought outside
of our visible world.
We can conclude that:
There is only one sender, who must not only be exceptionally intelligent but must possess an infinitely large amount of information and intelligence, i.e. he must be omniscient (all-knowing), and beyond that must also be eternal. [Applying SLI-1, SLI-2, SLI-4b]
Basis for this conclusion
Figure 2. The origin of life. If one considers living things as unknown systems that can be analysed with the help of natural laws, then one finds all five levels of the definition of information: statistics (here left off for simplicity), syntax, semantics, pragmatics and apobetics. In accordance with the natural laws of information, the origin of any information requires a sender equipped with intelligence and will. The fact that the sender in this case is not observable is not in contradiction to these laws. In a huge library with thousands of volumes, the authors are also not visible; but no one would maintain that there was no author for all this information. According to SLI-4b, at the beginning of every chain of information there is an intelligent sender. When one applies this to biological information, then here, too, there must be an intelligent author of the information.

In DNA molecules we find the highest density of information known to us.1 Because of SLI-1, no conceivable processes in the material realm qualify as the source of this information. Humans, who can, of course, generate information (e.g. letters, books), are also obviously excluded as the source of this biological information. This leaves only a sender who operated outside of our normal physical
world. After a lecture at a university about biological information and the necessary sender, a young lady student said to me: "I can tell where you were heading when you spoke of an intelligent sender; you meant a designer. I can accept that as far as it goes; without a sender, that is, without a designer, it wouldn't work. But who informed him so that He could program the DNA molecules?" Two explanations spring to mind:

Explanation a): Imagine that this designer was considerably more intelligent than we are, but nevertheless limited. Let's assume furthermore that he had so much intelligence (thus information) at his disposal that he was able to program all biological systems. The obvious question then is: who gave him this information and who taught him? This would require a higher information-giver I1, that is, a super-designer, who knew more than the designer. If I1 knew more than the designer, but was also limited, then he would in turn require an information-giver I2, i.e. a super-super-designer. So this line of reasoning leads to an extension of this series: I3, I4, and so on to I-infinity. One would require an infinite number of designers, such that in this long chain every (n+1)th deity always knew more than the nth. Only once one reached the I-infinity (super-super-super-…) designer could we say such a designer would be unlimited and all-knowing. However, traversing an infinite series is impossible (whether it is a temporal, spatial or, as in this example, an ontological infinity) and so this explanation is unsatisfactory.
Explanation b): It is simpler and more satisfying to assume only a single sender: a prime mover, an ultimate designer. But then one would need to also assume that such a designer is infinitely intelligent and in command of an infinite amount of information. So he must be all-knowing (omniscient).

Which of the explanations a) and b) is correct? Both are logically possible, so we must make a decision that is not derived from the SLI, based on the following considerations. In reality, there is no such thing as an actual infinite number of anything. The number of atoms in the universe is unimaginably vast, but nevertheless finite, and thus in principle able to be counted. The total number of people, ants, or grains of wheat that have ever existed is also vast, but finite. Although infinity is a useful mathematical abstraction, the fact is that in reality there can be no such thing as an infinite number of anything that can be reached by counting for long enough. Thus explanation a) fails the test of plausibility, leaving only explanation b). That means there is only one sender. But this one sender must therefore be all-knowing. This conclusion is a consequence of consistently applying the laws of nature about information.

What does it mean that the designer (the author of biological information) is infinite? It means that for Him there is no question that He cannot answer, and He knows all things: not merely about the present and the past; even the future is not hidden from Him. But if He knows all things, even beyond all restrictions of time, then He Himself must be eternal.
3. The designer is immensely powerful
Because the sender:
ingeniously encoded the information into the DNA molecules,
must have designed the complex bio-machinery that decodes the information and carries out all the processes of biosynthesis,
and created all the details of the original construction and reproductive capacities of all living things,
we can conclude that:
The sender accomplished his purpose and, therefore, he must be powerful.
Basis for this conclusion
In conclusion 2, we determined on the basis of laws of nature that the sender must be all-knowing and eternal. Now we consider the question of the extent of His power. Power encompasses all that would be described under headings such as strength, creativity, capability and might. Power of this sort is absolutely necessary in order to have created all living things.

Because of His infinite knowledge, the sender knows, for example, how DNA molecules can be programmed. But this knowledge is not sufficient to fashion such molecules in the first place.3 Taking the step from mere knowledge to practical application requires the capacity to build all the necessary biomachinery. Research enables us to observe these hardware systems, but we do not see them come about other than through a coordinated process of cellular replication, which requires the same biomachinery to transmit and carry out the replication programs. Thus they had to be constructed originally by the sender. He had the task of creating the immense variety of all the basic biological types (created kinds), including the construction specifications for their biological machinery. There are no physico-chemical tendencies in raw matter for complex information-bearing molecules to form spontaneously. Without creative power, life would not have been possible.

The obvious question here is the same as in conclusion 2: who gave Him this power? This would require a higher power-giver P1, that is, a super-designer, who had more power than the designer. If we proceed as before, following explanations a) and b), we come to the conclusion that the sender must be all-powerful.
4. The designer is non-material
Because information is a non-material fundamental entity, it cannot originate from a material one.
We can therefore conclude that:
The sender must have a non-material component (spirit) to his nature.
[Applying SLI-1, SLI-2]
Basis for this conclusion
Unaided matter has never been observed to generate information in the natural-law sense (i.e. with all five levels: statistics, syntax, semantics, pragmatics, apobetics). Information is a non-material entity and therefore requires a non-material source for its origin. We have already reasoned our way to some characteristics of the sender. Now we have a further one: he must be of a non-material nature, or at least must possess a non-material component to his nature.
5. No human being without a soul: refutation of materialism
Because people have the ability to create information, and information cannot originate from our material portion (the body),
We can therefore conclude that:
Each person must have a non-material component (spirit, soul).
[Applying SLI-1, SLI-2]
Basis for this conclusion
Evolutionary biology is locked into an exclusively materialistic paradigm. Reductionism (in which explanations are limited
exclusively to the realm of the material) has been elevated to a fundamental principle within the evolutionary paradigm. With
the aid of the laws of information, materialism may be refuted as follows: We all have the capacity to create new information.
We can put our thoughts down in letters, essays and books, or carry on creative conversations and give lectures. 5 In the
process, we are producing a non-material entity, namely information. (The fact that we need a material substrate to store
and transfer information has no bearing on the nature of information itself.) From this we can draw a very important
conclusion: namely that besides our material body we must have a non-material component. The philosophy of materialism,
which found its strongest expression in Marxism-Leninism and communism, can now be scientifically refuted with the help of
the scientific laws about information.
6. Big bang is impossible
Since information is a non-material entity, the assertion that the universe arose solely from matter and energy (scientific materialism) is demonstrably false.6
[Applying SLI-2]
Basis for this conclusion
It is widely asserted today that the universe owes its origin to a primeval explosion in which only matter and energy were available. Everything that we experience, observe and measure in our world is, according to this view, solely the result of these two physical entities. Energy is clearly a material entity, since it is correlated with matter through Einstein's mass/energy equivalence relationship E = mc². Is this big bang theory just as refutable as a perpetual motion machine? Answer: yes, with the help of the scientific laws about information. In our world we find an abundance of information, such as in the cells of all living things. According to SLI-1, information is a non-material entity and therefore cannot possibly have arisen from unaided matter and energy. Thus the common big bang worldview is false.
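For reference, the equivalence relation just cited quantifies how much energy corresponds to a given mass. A small worked example (the one-gram figure is chosen arbitrarily for illustration):

\[
E = mc^{2}:\quad m = 1\,\text{g} = 10^{-3}\,\text{kg},\ \ c \approx 3\times10^{8}\,\text{m/s}
\;\Rightarrow\; E \approx 10^{-3}\times(3\times10^{8})^{2} = 9\times10^{13}\,\text{J}.
\]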
7. No evolution
Since
biological information (the fundamental component of all life) originates only from an intelligent sender, and
all theories of chemical and biological evolution require that information must have originated solely from matter and energy (no sender),
we conclude that:
All theories or concepts of chemical and biological evolution (macroevolution) are false.
[Applying SLI-1, SLI-2, SLI-4b, SLI-4d]
Basis for this conclusion
Judging by its worldwide following, evolution has become probably the most widespread teaching of our time. In accordance with its basic precepts, we see an ongoing attempt to explain all life on a purely physical/chemical plane (reductionism). The reductionists prefer to think of a seamless transition from the non-living to the living.7 With the help of the laws of information we can reach a comprehensive and fundamental conclusion: the idea of macroevolution, i.e. the journey from chemicals to primordial cell to man, is false. Information is a fundamental and absolutely necessary factor for all living things. But all information, and living systems are not excluded, must necessarily have a non-material source. The evolutionary model, in the light of the laws of information, shows itself to be an intellectual perpetual motion machine.

Now the question arises: where do we find the sender of the information stored within the DNA molecules? We don't observe him, so did this information somehow come about in a molecular-biological fashion? The answer is the same as in the following cases:

Consider the wealth of information preserved in Egypt in hieroglyphics. Not a single stone allows us to see any part of the sender. We only find these footprints of his or her existence chiselled into stone. But no one would claim that this information arose without a sender and without a mental concept.

In the case of two connected computers exchanging information and setting off certain processes, there is also no trace of a sender. However, all the information concerned also arose at some point from the thought processes of one (or more) programmers.8

The information in DNA molecules is transferred to RNA molecules; this occurs in a fashion analogous to one computer transferring information to another. In the cell, an exceptionally complex system of biomachinery is at work which translates the programmed commands in an ingenious fashion. But we see nothing of the sender. However, to ignore him would be a scientifically untenable reductionism.
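As a concrete illustration of the DNA-to-RNA transfer just mentioned, here is a minimal sketch of transcription under the common coding-strand convention (the messenger RNA carries the same sequence of bases with uracil, U, in place of thymine, T). The sequence used is a short made-up example, not a real gene.

# Minimal sketch of transcription: a DNA coding-strand sequence is copied
# into messenger RNA, with U standing in for T.
# The example sequence below is invented purely for illustration.

def transcribe(dna_coding_strand: str) -> str:
    """Return the mRNA corresponding to a DNA coding strand."""
    return dna_coding_strand.upper().replace("T", "U")

if __name__ == "__main__":
    dna = "ATGGCTTCAGGT"            # arbitrary illustrative sequence
    print(transcribe(dna))          # prints AUGGCUUCAGGU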
We shouldn't be surprised to find that the programs devised by the sender of biological information are much more ingenious than all of our human programs. After all, we are here dealing with (as already explained in conclusion 2) a sender of infinite intelligence. The designer's program is so ingeniously conceived that it even permits a wide range of adaptations to new circumstances. In biology, such processes are referred to as 'microevolution'. However, they have nothing to do with an actual evolutionary process in the way this word is normally used, but are properly understood as parameter optimizations within the same kind.
In brief: The laws of information exclude a macro-evolution of the sort envisaged by the general theory of evolution.
By contrast, microevolutionary processes (= programmed genetic variation), with their frequently wide-ranging adaptive
processes within a kind, are explicable with the help of ingenious programs instituted by the designer.
8. No life from pure matter
Because the distinguishing characteristic of life is a non-material entity (namely information), matter cannot have given rise to it.
From this we conclude that:
There is no process inherent within matter alone that leads from non-living chemicals to life. No purely material processes, whether on the earth or elsewhere in the universe, can give rise to life.
[Applying SLI-1]
Basis for this conclusion
Proponents of evolutionary theory assert that life is a purely material phenomenon, which will arise whenever the right conditions are present. However, the most universal and distinguishing characteristic of life, information, is of a non-material nature. Thus we can apply scientific law SLI-1, which says: a purely material entity cannot generate a non-material entity.

Figure 3 shows an ant with a microchip. Microchips are the storage elements of present-day computers and they represent matter plus information. The ant contains one material part (matter) and two non-material parts (information and life).

We repeatedly hear of the discovery of water somewhere in our planetary system (e.g. on Jupiter's moon Europa), or that carbon-containing substances have been found somewhere in our galaxy. These announcements are promptly followed by speculations that life could have developed there. This repeatedly reinforces the impression that so long as the necessary chemical elements or molecules are present on some astronomical body, and certain astronomical/physical conditions are fulfilled, one can more or less count on life being there. But as we have shown with the help of two laws, this is impossible. Even under the very best chemical conditions, accompanied by optimal physical conditions, there would still be no hope of life developing.

Figure 3. Ant carrying a microchip. Both the ant and the microchip contain information, a non-material entity that cannot be generated by a material entity and which points to intelligent, creative input. The ant, moreover, contains two non-material parts: information and life. (From: Werkbild Philips, with the kind permission of Valvo Unternehmensbereich Bauelemente of Philips GmbH, Hamburg.)

Since the phenomenon of life ultimately requires something non-material, every kind of living thing required a mind as its ultimate initiator. The four Australian scientists Don Batten, Ken Ham, Jonathan Sarfati and Carl Wieland thus correctly state: 'Without intelligent, creative input, lifeless chemicals cannot form themselves into living things. The idea that they can is the theory of spontaneous generation, disproved by the great creationist founder of microbiology, Louis Pasteur.'9 With this new type of approach, applying the laws of information, conclusions 7 and 8 have both shown us that we can exclude the spontaneous origin of life in matter.
Conclusion
No one has ever observed water flowing uphill. Why are there no exceptions to this? Because there is a law of nature that
universally excludes this process from happening. Many plausible arguments have been raised against the teachings of
atheism, materialism, evolution and the big bang worldview. But if it is possible to find scientific laws that contradict these
ideas, then, since scientific laws have the highest degree of scientific credibility possible, we will have scientifically falsified
them. We will have done so just as effectively as the way in which perpetual motion machines (those which supposedly run
forever without any energy from outside) have been shown to be impossible through the application of scientific laws.
This is precisely what we have demonstrated in this paper. We have presented four scientific laws about information.10 From these we can generate comprehensive conclusions about the designer, the origin of life, and humanity. With the help of the laws of information we have been able to refute all of the following:
The purely materialistic approach in the natural sciences.
All current notions of evolution (chemical, biological).
Materialism (e.g. man as purely matter plus energy).
The big bang as the cause of this universe.
Atheism.
Variation, information and the created kind
by Dr Carl Wieland
Summary
All observed biological changes involve only conservation or decay of the underlying genetic information. Thus we do not
observe any sort of evolution in the sense in which the word is generally understood. For reasons of logic, practicality and
strategy, it is suggested that we:
Avoid the use of the term microevolution.
Rethink our use of the whole concept of variation within kind.
Avoid taxonomic definitions of the created kind in favour of one which is overtly axiomatic.
Most popular literature on evolution more or less implies that since we see small changes going on today in successive generations of living things, we only have to extend this in time and we will see the types of changes which have caused single-cell-to-man evolution. Creationists are thus seen as drawing some sort of imaginary Maginot line and saying, in effect, 'this much variation we will allow but no more; call it microevolution or variation within kind'. When a creationist says that, after all, mosquitoes are not seen turning into elephants or moths, this is regarded as a simplistic retreat. Such a criticism is not without some justification, because the neo-Darwinist can rightly say that he would not expect to see that sort of change in his lifetime either. The post-neo-Darwinist may say that our sample of geologic time is too small to be sure of seeing a 'hopeful monster' or any sort of significant saltational change.

Another reason why the creationist position often appears as one of weakness is that we are perceived as admitting variation only because we are forced to do so by observation, then simply escaping the implications of variation by saying it does not go far enough. And we appear to redraw our Maginot line depending on how much variation is demonstrated. It will be shown shortly, though, that this is a caricature of the creationist position, and that the limits to variation arise from basic informational considerations at the genetic level.
The created kinds
Observed variation does appear to have limits. It is tempting to use this fact to show that there are created kinds, and that variation is only within the limits of such kinds. However, the argument is circular and thus vulnerable. Since creationists by definition regard all variation as within the limits of the created kind (see for example the statement of belief of the Creation Research Society of the USA), how can we then use observations to prove that variation is within the limits of the kind? To put it another way: of course we have never observed variation across the kind, since whenever two varieties descend from a common source, they are regarded as the same kind. It is no wonder that evolutionists are keen to press us for an exact definition of the created kind, since only then does our claim of 'variation only within the kind' become non-tautologous and scientifically falsifiable.

Circular reasoning does not invalidate the concept of created kinds, however. In the same way, natural selection is also only capable of a circular definition (those who survive are the fittest, and the fittest are the ones who survive), but it is nevertheless a logical, easily observable concept. All we are saying is that arguments which are inherently circular cannot be invoked as independent proof of the kinds.

When I claim that such independent proof may not be possible by the very nature of things, this statement is in no way a cop-out. For instance, let us say we happened upon the remnants of an island which had exploded, leaving behind the debris of rocks, trees, sand, etc. It may be impossible in principle to reconstruct the original positions of the pieces in relation to each other before the explosion. This does not, however, mean that it is not possible to deduce with a great degree of confidence that the current state of the debris is consistent with that sort of explosion, recorded for us by eyewitness testimony, rather than arising by some other mechanism.

In like manner, we can show that the observations of the living world are highly consistent with the concept of original created kinds as described, and inconsistent with the idea of evolution. This is best done by focusing on the underlying genetic/informational basis of all biological change. This is more realistic and more revealing than focusing on the degree or extent of morphological change.

The issue is qualitative, not quantitative. It is not that the train has had insufficient time to go far enough; it is heading in the wrong direction. The limits to variation, observed or unobserved, will come about inevitably because gene pools run out of functionally efficient genetic information (or teleonomic information). A full understanding of this eliminates the image of the desperately backpedalling creationist, redrawing his line of last resistance depending on what new observations are made on the appearance of new varieties.

It also defuses the whole issue of 'micro' and 'macro' evolution. I believe it is better for creationists to avoid these confusing and misleading terms altogether.
The word 'evolution' generally conveys the meaning of the sort of change which will ultimately be able to convert a protozoon into a man or a reptile into a bird, and so on. I hope to show that in terms of that sort of meaning, we do not see any evolution at all. By saying we accept micro- but not macroevolution we risk reinforcing the perception that the issue is about the amount of change, which it is not. It is about the type of change.

This is not merely petty semantics, but of real psychological and tactical significance. Of course one can say that microevolution occurs when this word is defined in a certain fashion, but the impact of the word, the meaning it conveys, is such as to make it unwise to persevere with this unnecessary concessional statement. Microevolution, that is, a change, no matter how small, which is unequivocally the right sort of change to ultimately cause real, informationally uphill change, has never been observed.

In any case, leading biologists are themselves now coming to the conclusion that macroevolution is not just microevolution [using their terminology] extended over time. In November 1980 a conference of some of the world's leading evolutionary biologists, billed as historic, was held at the Chicago Field Museum of Natural History on the topic of macroevolution. Reporting on the conference in the journal Science, Roger Lewin wrote: 'The central question of the Chicago conference was whether the mechanisms underlying microevolution can be extrapolated to explain the phenomena of macroevolution. At the risk of doing violence to the positions of some of the people at the meeting, the answer can be given as a clear, No.'1 Francisco Ayala (Associate Professor of Genetics, University of California) was quoted as saying: '… but I am now convinced from what the paleontologists say that small changes do not accumulate.'2

The fact that this article reaches essentially the same conclusion in the following pages can thus hardly cause it to be regarded as radical. Nevertheless, the vast majority of even well-educated people still persist in ignorance of this. That is, they believe that Big Change = Small Change × Millions of Years.
The concept of information
The letters on this [printed] page, that is, the matter making up the ink and paper, all obey the laws of physics and chemistry, but these laws are not responsible for the information they carry. Information may depend on matter for its storage, transmission and retrieval, but it is not a property of matter. The ideas expressed in this article, for instance, originated in mind and were imposed on the matter. Living things also carry tremendous volumes of information on their biological molecules; again, this information is not a property of their chemistry, not a part of matter and the physical laws per se. It results from the order, from the way in which the letters of the cell's genetic alphabet are arranged. This order has to be imposed on these molecules from outside their own properties. Living things pass this information on from generation to generation. The base sequences of the DNA molecule effectively spell out a genetic blueprint which determines the ultimate properties of the organism. In the final analysis, inherited biological variations are expressions of variations in this information. Genes can be regarded as 'sentences' of hereditary information written in the DNA 'language'.

Imagine now the first population of living things on the evolutionist's primitive earth. This so-called simple cell would, of course, have a lot of genetic information, but vastly less than the information in only one of its present-day descendant gene pools, e.g. man. The evolutionist proposes that this 'telegram' has given rise to 'encyclopedias' of meaningful, useful genetic sentences. (See later for discussion of meaning and usefulness in a biological sense.) Thus he must account for the origin, with time, of these new and meaningful sentences. His only ultimate source for these is mutation.3

Going back to the analogy of the printed page, the information in a living creature's genes is copied during reproduction, analogous to the way in which an automatic typewriter reproduces information over and over. A mutation is an accident, a mistake, a typing error. Although most such changes are acknowledged to be harmful or meaningless, evolutionists propose that occasionally one is useful in a particular environmental context and hence its possessor has a better chance of survival/reproduction. By looking now at the informational basis for other mechanisms of biological variation, it will be seen why these are not the source of new sentences, and therefore why the evolutionist generally relies on mutation of one sort or another in his scheme of things.
1. Mendelian variation
This is the mechanism responsible for most of the new varieties which we see from breeding experiments and from reasonable inferences in nature. Sexual reproduction allows packets of information to be combined in many different ways, but it will not produce any new packets or sentences. For example, when the many varieties of dog were bred from a mongrel stock, this was achieved by selecting desired traits in successive generations, such that the genes or 'sentences' for these traits became isolated into certain lines. Although some of these sentences may have been hidden from view in the original stock, they were already present in that population. (We are disregarding mutation for the moment, since such new varieties may arise independently of any new mutations in the gene pool. Some dogs undoubtedly have mutant characteristics.)

This sort of variation can only occur if there is a storehouse of such sentences available to choose from. Natural (or artificial) selection can explain the survival of the fittest but not the arrival of the fittest, which is the real question. These Mendelian variations tell us nothing about how the genetic information in the present stock arose. Hence, it is not the sort of change required to demonstrate upward evolution: there has been no addition of new and useful sentences. And this is in spite of the fact that it is possible to observe many new varieties in this way, even new species. If you define a species as a freely interbreeding natural unit, it is easy to see how new species could arise without any 'uphill' change, that is, without the addition of any new information coding for any new functional complexity. For example, mutation could introduce a defect which served as a genetic barrier, or simple physical differences, such as the sizes of the Great Dane and the Chihuahua, could make interbreeding impossible in nature.

It is a little surprising to still see the occasional piece of creationist literature clinging to the concept that no new species have ever been observed. Even if this were true, and there is some suggestion that speciation has actually been observed, there are instances of clines in field observations which make it virtually certain that two now reproductively isolated species have arisen from the same ancestral gene pool. Yet the very same creationists who seem reluctant to make that sort of admission would be quite happy to agree with the rest of us that the various species within what may be regarded as the dog kind, including perhaps wolves, foxes, jackals, coyotes and the domestic dog, have arisen from a single ancestral kind. So why may this no longer be permitted to be happening under present-day observation? It is not only scientifically unnecessary, but it sets up a straw man in the sense that any definite observation of a new species arising is used as a further lever with which to criticize creationists.

What we see in the process of artificial selection or breeding giving rise to new varieties is a thinning-out of the information in the parent stock, a reduction in the genetic potential for further variation. If you try to breed a Chihuahua from a Great Dane population, or vice versa, you will find that your population lacks the necessary sentences. This is because, as each variety was selected out, the genes it carried were not representative of the entire gene pool.

What appeared to be a dramatic example of change, with the appearance of apparently new traits, thus turns out, when its genetic basis is understood, to be an overall downward movement in informational terms. The number of sentences carried by each subgroup is reduced, making it less likely to survive future environmental changes. Extrapolating that sort of process forward in time does not lead to upward evolution, but ultimately to extinction, with the appearance of ever more informationally depleted populations.
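The 'reshuffling without addition' point can be put in the form of a toy model. The sketch below is only an illustration (the loci, allele names and the 1,000-pup figure are invented, not real dog genetics): offspring receive only recombinations of alleles already present in the parents, and selecting a line for one trait discards the alternative alleles rather than creating anything new.

import random

# Toy model of Mendelian variation: three gene loci, each with alleles that
# already exist in the parent population. Locus and allele names are
# hypothetical, purely for illustration.
parent1 = {"coat": ("Long", "Short"), "size": ("Big", "Small"), "ear": ("Floppy", "Erect")}
parent2 = {"coat": ("Long", "Short"), "size": ("Big", "Small"), "ear": ("Floppy", "Erect")}

def offspring(p1, p2):
    """Each pup inherits one allele per locus from each parent, at random."""
    return {locus: (random.choice(p1[locus]), random.choice(p2[locus]))
            for locus in p1}

pups = [offspring(parent1, parent2) for _ in range(1000)]

# Every allele seen in the pups was already present in the parents.
alleles_in_pups = {a for pup in pups for pair in pup.values() for a in pair}
alleles_in_parents = {a for pair in list(parent1.values()) + list(parent2.values()) for a in pair}
assert alleles_in_pups <= alleles_in_parents   # no new alleles, only new combinations

# 'Breeding for small size' keeps only pups homozygous for Small;
# the Big allele is simply absent from that selected line.
small_line = [pup for pup in pups if pup["size"] == ("Small", "Small")]
print(len(small_line), "pups bred true for Small; the Big allele is lost from this line.")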
2. Polyploidy
Again, no sentences appear which did not previously exist. This is the multiplication (photocopying) of information already
present.
3. Hybridization
Again, no new sentences. This is the mingling of two sets of information already present.
4. Mutation
Since mutations are basically accidents, it is not surprising that they are observed to be largely harmful, lethal or meaningless to the function or survival of an organism. Random changes in a highly ordered code introduce noise and chaos, not meaning, function and complexity, which tend to be lost. However, it is conceivable that in a complex world a destructive change will occasionally have a limited usefulness. For example, if we knock out a sentence such that there is a decrease in leg length in sheep (and there is such a mutation), this is useful to stop them jumping over the farmer's fence. A beetle on a lonely, wind-swept island may have a mutation which causes it to lose or corrupt the information coding for wing manufacture; hence its wingless successors will not be so easily blown out to sea and will thus have a selective advantage. Eyeless fish in caves, some cases of antibiotic resistance, the handful of cases of mutations which are quite beneficial: these do not involve the sort of increase in functional complexity which evolutionary theory demands. Nor would one expect this to be possible from a random change.

At this point some will argue that the terms 'useful', 'meaningful', 'functional', etc. are misused. They claim that if some change gives survival value then by definition it has biological meaning and usefulness. But this assumes that living systems do nothing but survive, when in fact they and their subsystems carry out projects and have specific functions. That is, they carry teleonomic information. This is one of the essential differences between living objects and non-living ones (apart from machines). These projects do not always give rise to survival/reproductive advantages; in fact, they may have very little to do with survival, but are carried out very efficiently. The Darwinian assumption is always made, of course, that at some time in the organism's evolutionary history the project had survival/reproductive value. (For example, the archer-fish with its highly skilled 'hobby' of shooting down bugs which it does not require for survival at the present time.) However, since these are non-testable assumptions, it is legitimate to talk about genetic information in a teleonomic sense, in isolation from any possible survival value.

The gene pools of today carry vast quantities of information coding for the performance of projects and functions which do not exist in the theoretical primeval cell. Hence, in order to support protozoon-to-man evolution, one must be able to point to instances where mutation has added a new sentence or gene coding for a new project or function. This is so regardless of one's assumptions about the survival value of any project or function. We do not know of a single mutation giving such an increase in functional complexity. Probabilistic considerations would seem to preclude this in any case, or at least make it an exceedingly rare event, far too rare to salvage evolution even over the assumed multi-billion-year time span.
exceedingly rare event, far too rare to salvage evolution even over the assumed multibillion year time span.To illustrate
furtherthe molecule haemoglobin in man carries out its project of transporting and delivering oxygen in red cells in a
functionally efficient manner. A gene or sentence exists which codes for the production of haemoglobin. There is a known
mutation (actually three separate ones, giving the same result) in which only one letter in the sentence has been
accidentally replaced by another. If you inherit this change from both parents, you will be seriously ill with a disease called
sickle cell anaemia and will not survive for very long. Yet evolutionists frequently use this as an example of a beneficial
mutation. This is because if you inherit it from only one parent, your red cells will be affected, but not seriously enough to
affect your survivaljust enough to prevent the malaria parasite from using them as an effective host. Hence, you will be
more immune to malaria and better able to survive in malaria-infested areas. This shows us how a functionally
efficienthaemoglobin molecule became a functionally crippled haemoglobin molecule. The mutation-caused gene for this
disease is maintained at high levels in malaria-endemic regions by this incidental phenomenon of heterozygote superiority.
Its damaging effect in a proportion of offspring is balanced by the protection it gives against malaria. It is decidedly not an
upward change. We have not seen a new, efficient oxygen transport mechanism or its beginnings evolve. We have not
seen the haemoglobin transport mechanism improved.One more loose but possibly useful analogy. Let us say an
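The 'one letter' change described above is commonly illustrated with the sixth codon of the beta-globin gene, where GAG (glutamic acid) becomes GTG (valine). The sketch below is a toy illustration of just that single-codon swap; the two-entry lookup table covers only the codons needed for the example and is not a full genetic code or gene.

# Simplified illustration of the sickle-cell point mutation: a single base
# change in one codon of the beta-globin gene (GAG -> GTG) swaps one amino
# acid (glutamic acid -> valine). Only the codons needed for this example
# are included in the toy lookup table.
CODON_TABLE = {
    "GAG": "Glu (glutamic acid)",
    "GTG": "Val (valine)",
}

normal_codon = "GAG"    # codon 6 of the normal beta-globin coding sequence
sickle_codon = "GTG"    # the same codon after the A -> T substitution

changed = [i for i, (a, b) in enumerate(zip(normal_codon, sickle_codon)) if a != b]
print("Bases changed:", len(changed))            # 1
print("Normal:", CODON_TABLE[normal_codon])      # Glu
print("Sickle:", CODON_TABLE[sickle_codon])      # Val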
One more loose but possibly useful analogy: let us say an undercover agent is engaged in sending a daily reassuring telegram from enemy territory. The text says 'the enemy is not attacking today'. One day an accident occurs in transmission and the word 'not' is lost. This is very likely going to be a harmful change, perhaps even triggering a nuclear war by mistake. But perhaps, in a freak situation, it could turn out to be useful (for example, by testing the fail-safe mechanisms involved). But this does not mean that it is the sort of change required to begin to convert the telegram into an encyclopedia.

The very small number of beneficial mutations actually observed are simply the wrong kind of change for evolution: we do not see the addition of new sentences which carry meaning and information. Again surprisingly, one often reads creationist works which insist that there is no such thing as a beneficial mutation. If benefit is defined purely in survival terms, then we would not expect this to be true in all instances, and in fact it is not; that is, there are indeed beneficial mutations in that sense only.

Information depends on order, and since all of our observations and our understanding of entropy tell us that in a natural, spontaneous, unguided and unprogrammed process order will decrease, the same will be true of information. The physicist and the communications engineer should not be surprised at the realisation that biological processes involve no increases in useful or functional (teleonomic) information and complexity. In fact, the net result of any biological process involving transmission of information (i.e. all hereditary variation) is conservation or loss of that genetic information.
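For reference, the quantity a communications engineer would attach to a message source is Shannon's entropy, which corresponds only to the 'statistics' level in the five-level scheme mentioned earlier; the 'teleonomic information' discussed here is a broader notion with no single agreed formula:

\[
H \;=\; -\sum_{i} p_{i}\log_{2} p_{i} \quad \text{bits per symbol},
\]

where p_i is the probability of the i-th symbol of the source alphabet.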
This conservation or loss points back directly to the creation of the information, supernaturally, in the beginning. It is completely in harmony with the young-age concept of a world made 'very good' as a balanced, functioning whole, with decay only subsequent to the Fall. This is the reason why there are inevitable limits to variation, and why the creationist does not have to worry about how many new species the future may bring: there is a limit to the amount of functionally efficient genetic information present, and natural processes such as mutation cannot add to this original storehouse.

Notice that since organisms were created to migrate out from a central point at least once and fill empty ecological niches, as well as having to cope with a decaying and changing environment, they would require considerable variation potential. Without this built-in genetic flexibility, most populations would not be present today. Hence the concept of biological change is in a sense predicted by the young-age model, not something forced upon it only because such change has occurred.
The created kind
The originally created information was not in the form of one 'super species' from which all of today's populations have split off by this thinning-out process, but was created as a number of distinct gene pools. Each group of sexually reproducing organisms had at least two members. Thus:

Each original group began with a built-in amount of genetic information which is the raw material for virtually all subsequent useful variation.

Each original group was presumably genetically and reproductively isolated from other such groups, yet was able to interbreed within its own group. Hence the original kinds would truly have earned the modern biological definition of species.4

We saw in our dog example that such species can split into two or more distinct subgroups which can then diverge (without adding anything new) and can end up with the characteristics of species themselves, that is, reproductively isolated from each other but freely interbreeding among themselves. The more variability in the original gene pool, the more easily such new groups can arise. However, each splitting reduces the potential for further change, and hence even this is limited. All the descendants of such an original kind, which was once a species, may then end up being classified together in a much higher taxonomic category, e.g. a family.
Take a hypothetical created kind A, truly a biological species, with perhaps a tremendous genetic potential (see Figure 1). Note that A may even continue as an unchanged group, as may any of the subgroups. Splitting off of daughter populations does not necessarily mean extinction of the parent population. In the case of man, the original group has not diverged sufficiently to produce new species. Hence, D1, D2, D3, E1, E2, E3, P1, P2, Q1, Q2, Q3 and Q4 are all different species, reproductively isolated. But all the functionally efficient genetic information they contain was present in A. (They presumably carry some mutational defects as well.)

Let us assume that the original kind A has become extinct, and also the populations X, B, C, D, E, P and Q (but not D1, D2, etc.). If X carried some of the original information in A which is not represented in B or C, then that information is lost forever. Hence, in spite of the fact that there are many new species which were not originally present, we would have witnessed conservation of most of the information, loss of some, and nothing new added apart from mutations (harmful defects or just meaningless noise in the genetic information). All of which is the wrong sort of informational change if one is trying to demonstrate protozoon-to-man evolution.
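The splitting scheme of Figure 1 can be made concrete with a small sketch. Gene pools are modelled as plain sets of allele labels; the labels, the particular subsets and the choice of which populations go extinct are all invented for illustration. Each daughter set is a subset of the original, and once every line carrying a particular allele has died out, that allele is gone for good.

# Toy model of the Figure 1 idea: each population's gene pool is a set of
# allele labels. The labels, subsets and extinctions below are invented;
# only the principle (subsets, and irreversible loss) reflects the figure.
A = {"a1", "a2", "a3", "a4", "a5", "a6"}     # original created kind

X = {"a1", "a2"}                             # daughter populations carry
B = {"a3", "a4", "a5"}                       # subsets of A's information
C = {"a4", "a5", "a6"}

D1, D2, Q1, Q2 = {"a3", "a4"}, {"a4", "a5"}, {"a4", "a6"}, {"a5", "a6"}

surviving = [D1, D2, Q1, Q2]                 # suppose A, X, B and C are extinct
remaining = set().union(*surviving)

print("Originally present:", sorted(A))
print("Still represented: ", sorted(remaining))
print("Lost forever:      ", sorted(A - remaining))   # whatever only X carried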
Classifications above species are more or less arbitrary groupings of convenience, based generally on similarities and differences of structure. It is conceivable that today D1, D2 and D3 could be classified as species belonging to one genus, and E1, E2 and E3 as species in another genus, for example. It could also be that the groups B and C were sufficiently different that their descendants would today be in different families. We begin to see some of the problems facing a creationist who tries to delineate today's representatives of the created kinds.

Figure 1. The splitting off of daughter populations from an original created kind.

Creatures may be classified in the same family, for example, on the basis of similarities due to common design, while in fact they belong to two totally different created kinds. This should sound a note of caution against using morphology alone, as well as pointing out the potential folly of saying 'in this case, the baramin is the family; in this case, it is the genus', etc. ('Baramin' is an accepted creationist term for created kind.)
There is no easy solution as yet to the problem of establishing each of these genetic relationships; in fact, we will probably never be able to know them all with certainty. Interbreeding, in vitro fertilization experiments, etc. may suggest membership of the same baramin, but lack of such genetic compatibility does not prove that two groups are not in the same kind. (See the earlier discussion: genetic barriers could arise via mutational deterioration.) However, newer insights, enabling us to make direct comparisons between species via DNA sequencing, open up an entirely new research horizon. (Although the question of where the funding for such extensive research will come from in an evolution-dominated society remains enigmatic.)

What then do we say to an evolutionist who understandably presses us for a definition of a created kind, or an identification of one today? I suggest the following for consideration: groups of living organisms belong in the same created kind if they have descended from the same ancestral gene pool. To talk of fixity of kinds in relation to any present-day variants thus also becomes redundant; no new kinds can appear, by definition. Besides being a simple and obvious definition, it is axiomatic. Thus it is as unashamedly circular as a rolled-up armadillo and just as impregnable, deflecting attention, quite properly, to the real issue of genetic change.

The question is not 'what is a baramin; is it a species, a family or a genus?' Rather, the question is 'which of today's populations are related to each other by this form of common descent, and are thus of the same created kind?' Notice that this is vastly removed from the evolutionist's notion of common descent. As the creationist looks back in time along a line of descent, he sees an expansion of the gene pool. As the evolutionist does likewise, he sees a contraction.
As with all taxonomic questions, common sense will probably continue to play the greatest part. For instance, it is
conceivable (though not necessarily so) that crocodiles and alligators both descended from the same ancestral gene pool
which contained all their functionally efficient genes, but not really conceivable that crocodiles, alligators and ostriches had a
common ancestral pool which carried the genes for all three!
ARE THERE EVOLUTIONARY PROCESSES THAT LEAD TO INFORMATION INCREASE?
Bears across the world
Bears are some of the most amazing creatures!
by Paula Weston and Carl Wieland
From the thick stomach lining of the panda and the partially webbed paws of the polar bear, to the insect-sucking muzzle of the sloth bear, bears provide a fascinating example of the variety of specialized characteristics existing within one family.

The bear family (Ursidae) consists of eight species, four of which are contained in the Ursus group: the brown bear, American black bear, Asiatic black bear and polar bear. Even within this group (known as a genus) the variation is wide.

The brown and American black bears are mainly vegetarian, with appropriate dental features for crushing plant material. However, the first has claws suited to digging, while the other has claws more suitable for climbing. The Asiatic black bear, which also has claws for climbing, is an opportunistic omnivorous feeder (eating meat and plants as available).1

The polar bear, however, has some amazing features which allow it to function perfectly in its cold, wet environment. Much heavier than the above bears, it has two distinct hair types, one long and one short, which effectively gives it two coats. By increasing buoyancy, this helps it to swim, as do its long neck and the partial webbing between its toes. Its fur-covered foot pads provide better traction on the ice. Almost exclusively a meat eater (with teeth to suit such a diet), the polar bear also has a large stomach capacity for sporadic (opportunistic) feeding.

The sun bears and sloth bears (also included in the Ursus group by many scientists) also have as many differences as similarities. The sun bear is omnivorous, with sharp, sickle-like claws suited for tree climbing, while the sloth bear (possessing claws for both digging and tree climbing) has an unusual head and dental structure perfect for eating its main food source, termites. The sloth bear's long muzzle has protrusible lips and nostrils which it can close; these two features allow it to create a vacuum tube to suck up the termites.
The giant panda, like the polar bear, has very specialized features necessary for survival, including powerful jaws and special molars for crushing plants, and an oesophagus (gullet) with a tough, horny lining to protect the bear from splinters when it eats bamboo, its primary source of food. The panda's stomach also has a thick, muscular lining to protect it from bamboo fragments.

While both evolutionists and creationists consider these specialized characteristics to be adaptations to the environment through natural selection, the two camps are poles apart as to how most of this variation came about in the first place.

Evolutionists believe that the genetic (hereditary) information (which supplies the recipe to construct such specialized features in the developing embryo) all arose by an accumulation of copying errors (mutations). Any 'good' errors which helped the creature to survive were passed on. In this way, they believe that these design features are all the result of these copying mistakes, accumulated by selection over millions of years.

Creationists, however, while accepting that all of today's bears probably descended from a single bear kind,2 do not believe that the information in the recipes for all these design features arose by chance. No one has ever observed any biological process adding information! A better explanation is that virtually all the necessary information was already there in the genetic makeup of the first bears, a population created with vast genetic potential for variation.

This doesn't mean that all of the features of today's bears would have been on obvious display back then. A simple example would be the way in which mongrel dogs obviously had the potential to develop all the different breeds we see today. Thus, there was no actual poodle to be seen among mongrel dogs hundreds of years ago, but by looking closely at many of them, one would have seen at least some of the individual features found in today's poodles popping up here and there. Similarly, it is unlikely that there were polar bears before the Flood; however, since much of the information for their specialized features was already there, some of these features, in lesser form, would also have been apparent in a few individuals from time to time.

It takes selection (natural or artificial) to concentrate and enhance these features; however, this does not create anything really new, no new design information. If there were no genetic potential in the bear family to grow really thick fur, then no bears would ever have inhabited the Arctic.

However, it is likely that not all the features of today's bears would have been coded for directly in the genes of the original bear kind. Mutations, genetic copying mistakes which cause defects, may on rare occasions be helpful, even though they are still defects, corruptions or losses of information. Thus, the polar bear's partly webbed feet may have come from a mutation which prevented the toes from dividing properly during its embryonic development. This defect would give it an advantage in swimming, which would make it easier to survive as a hunter of seals among ice floes. Thus, bears carrying this defect would be more likely to pass it on to their offspring, but only in that environment. However, since mutations are always informationally downhill, there is a limit to the ability of this mechanism to cause adaptive features to arise. It will never turn fur into feathers, for example.3

After the Flood, when dramatic climate and environment changes occurred, there was suddenly a large number of empty niches, and as the first pair multiplied, groups of their descendants found new habitats. Only those whose predominant characteristics were suitable for that environment thrived and bred.4 In this way, it would not need millions of years for a new variety (even a new species) to arise. For example, of the first bears forced to exist on bamboo, only those exhibiting the genetic information for a stronger oesophagus and stomach lining would have survived in each generation. Animals without these features would not have lived to produce offspring, thus reducing the gene pool as only the surviving animals interbred. Thus these characteristics became more prominent in that group. This is more reasonable than assuming that this group had to wait for the right mutations to come along, over thousands or millions of years, to provide those vital features.
Notice how such new species will:
be more specialised;
be better adapted to a particular habitat; and
have less genetic information than the original group.
(See the box below for a simple example of how information is lost as creatures adapt.)
It makes a great deal of sense for the original kinds of creatures to have been created as very robust groups, possessing the ability to vary and adapt to changing environments.
Summary
Creationists accept that the design features we see in modern animals are largely the result of original created design,
expressed and fine-tuned to fit the environment by subsequent adaptation, through natural selection in a fallen world of
death and struggle. If, as seems probable from fossil evidence, there were no ice-caps before the Flood, there would have
been no polar bears at that time. The wisdom of the designer is revealed in providing the original organisms with the
potential to adapt so as to be fit for a wide range of habitats and lifestyles.
The bear family, with its incredible variation, provides clear evidence of an intelligent designer.
How information is lost when creatures adapt to their environment
In the example on the right (simplified for illustration), a single gene pair is shown under each bear as coming in two possible forms. One form of the gene (L) carries instructions for long fur, the other (S) for short fur.

In row 1, we start with medium-furred animals (LS) interbreeding. Each of the offspring of these bears can get either gene from each parent to make up their own two genes.

In row 2, we see that the resultant offspring can have either short (SS), medium (LS) or long (LL) fur. Now imagine the climate cooling drastically (as in the post-Flood ice age). Only those with long fur survive to give rise to the next generation (row 3). So from then on, all the bears will be a new, long-furred variety. Note that:
They are now adapted to their environment.
They are now more specialized than their ancestors in row 1.
This has occurred through natural selection.
There have been no new genes added.
In fact, genes have been lost from the population, i.e. there has been a loss of genetic information, the opposite of what microbe-to-man evolution needs in order to be credible.
Now the population is less able to adapt to future environmental changes: were the climate to become hot, there is no genetic information for short fur, so the bears would probably overheat.
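A minimal sketch of the box's example in code (one gene, two alleles; the 1:2:1 offspring ratio is the standard Mendelian expectation for an LS x LS cross, and the 'only LL survives' step is the box's assumed cold snap):

from itertools import product
from collections import Counter

# One gene with two alleles: L (long fur) and S (short fur).
parent1 = ("L", "S")   # medium-furred (LS)
parent2 = ("L", "S")   # medium-furred (LS)

# Row 2: all equally likely offspring combinations from an LS x LS cross.
offspring = [tuple(sorted(pair)) for pair in product(parent1, parent2)]
print(Counter(offspring))   # {('L','S'): 2, ('L','L'): 1, ('S','S'): 1}

# Row 3: drastic cooling; only long-furred (LL) animals survive to breed.
survivors = [genotype for genotype in offspring if genotype == ("L", "L")]
surviving_alleles = {allele for genotype in survivors for allele in genotype}
print("Alleles left in the population:", surviving_alleles)   # only 'L'; 'S' is lost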
Polar bears: correcting past blunders
In 1979, this magazine, then called Ex Nihilo, reported (2(2):18) that the hairs of polar bears were transparent and, like fibre-optic cables, piped light energy down to the bear's skin to keep it warm. The information was from a secular source, and of course we had no polar bear hairs to test. Now a recent author who has tested their hair points out that this idea, which has been repeated over and over in secular science journals and reports, is actually a myth.6

The polar bear's hairs are not some unique fibre-optic substance (which, come to think of it, would actually have been tough to explain if all bears came recently from one kind, as most creationists currently think), but are made of ordinary keratin, just like the hair of all other mammals. This emphasizes the fact that all scientific claims are tentative and fallible, no matter who makes them.

Another erroneous statement about polar bears, which has appeared in some anti-Darwinian literature, is that natural selection can have nothing to do with the polar bear's white coat, since the bear has no predators. However, this is not the case, as it is obvious that, of the first bears to reach the snowbound regions, those with lighter coats would have had an advantage. By being camouflaged against the snow, they would have had more chance of being able to sneak up on their prey undetected. Thus, especially where food was scarce, whiter bears would have been more likely to survive and pass on their genes.
Was Dawkins Stumped?
Frog to a Prince critics refuted again
Published: 12 April 2008 (GMT+10)
This week we feature a critical feedback from JW, whose complaint relates to CMI's video clip of Richard Dawkins being stumped by a question about genetic information, which features on our DVD From a Frog to a Prince (see raw footage with subtitles, right). We like to publish critical feedbacks regularly, and our policy is to choose the best-articulated and most well-reasoned critical feedback available. But regrettably, few of the multitudinous critical emails we receive are cogent and logical, with most tending toward the 'mangled diatribe' end of the literary spectrum. JW's email below is mildly representative in this regard. Although JW's contribution breaks our feedback rules against unsubstantiated allegations, etc., her comments give us the opportunity to confront the prolific accusations made against us in relation to our Richard Dawkins interview, and also to address some common misthinking regarding religion. We have answered previous critics in Skeptics choke on Frog: Was Dawkins caught on the hop?, but we still receive a lot of abuse and slander in relation to our Dawkins interview, much of it revolving around incorrect accounts of the sequence of events that occurred during the interview. For a precise analysis and timeline of exactly what took place, see our Dawkins Interview Timeline (below).
J.W. writes:
I've seen the raw footage of the so called stumping Richard Hawkins [sic] fiasco. You are liars! You didn't stump him. He gave a brilliant and true answer!!! Must you really lie to try to satisfy your followers???
Andrew Lamb responds:
I've seen the raw footage of the so called stumping Richard Hawkins fiasco.
An excerpt of CMI's raw footage of our Richard Dawkins interview was posted online in April 2007 on a secular website. The excerpt is that in which Richard Dawkins responds to the question, 'Can you give an example of a genetic mutation or an evolutionary process which can be seen to increase the information in the genome?', a question that he was asked on two separate occasions on the day. Encouragingly, in the year that has passed since then there have been almost half a million viewings of this video by people from around the world. That's several hundred thousand people who have witnessed for themselves the utter inability of evolution's leading apologist to account for genetic information. A shortened version of this appears in our popular Frog to a Prince DVD, which incidentally now has subtitles in ten languages.
You are liars! You didn't stump him. He gave a brilliant and true answer!!!
We did not lie. The 'Richard Dawkins Stumped' title given to our raw footage clip on that secular website is accurate. Dawkins was stumped, as shown by the fact that he tried to think of an answer, but eventually responded with comments that did not address the question. A few of the things Dawkins said were true, e.g. that fish are modern animals. But even then, they don't qualify as true answers since they had nothing to do with the question asked. Also, much of what he said was not true, e.g. his comment that they [fish] 'are descended from ancestors which were descended from …'. From the true eyewitness account of history we know that humans have always been humans, and did not descend from some other kind of creature, and there are no facts of science to demonstrate that they have, only the fanciful story-telling of evolution theorists.
Must you really lie to try to satisfy your followers???
Your comment here implies that you think there is something wrong with lying. But if evolution were true, it would not be possible to show logically that lying is bad. Rather, 'good' and 'bad' would just be matters of opinion, not matters of objective reality. Evolutionary beliefs provide no objective basis to justify traits like honesty.
Dawkins Interview Timeline
The above timeline is of the Richard Dawkins interview that formed the basis of CMI's video From a Frog to a Prince.
From a Frog to a Prince recording timeline resolves questions
This timeline, based on the main camera sound track of the interview, reconciles the three accounts of the interview, i.e. the published accounts of Richard Dawkins and Gillian Brown, and the unpublished account of Philip Hohnen, given in personal correspondence with CMI during 2001 to 2003. There seemed to be discrepancies between the three accounts, but our timeline is consistent with all three accounts and with the audio tape. The key to resolving the apparent inconsistencies is the realization that:

Dawkins was questioned about information twice, first by Hohnen (A on the timeline), after which the interview was interrupted, with Dawkins upset, and later by Brown (K), from behind the camera, when Dawkins had no ready answer. Dawkins' anger erupted on the first occasion, when he suspected he might be speaking to creationists. This is what Dawkins recalled and gave as an excuse for his silence following the question on the video, which was asked some time later when Dawkins was already aware that he was speaking with creationists. In his recollection, Professor Dawkins conflated these two events.

After Philip Hohnen had been on a tour of the house with Mrs Dawkins (Lalla Ward) (section D on the timeline), and had then negotiated with Richard Dawkins (E), the latter agreed to make a statement for recording. In his statement (G, J) Dawkins candidly admitted that evolution had to explain the information in living things, and he claimed that mutations, aided by natural selection, created all the information. These very pro-evolution statements are on the video, just as Dawkins had wanted. After these confident assertions, Gillian Brown, from her position behind the camera, slipped in the question asking for an actual example of an evolutionary process that can be observed to increase the information in the genome (K). It would have been churlish of Dawkins not to try to answer this, in the light of the confident spiel he had just given. His look (on the video) of puzzlement, even consternation, had nothing to do with discovering the nature of the interview (this discovery happened much earlier). The fact that he failed to answer the question, even given time to think, should have been sufficient for any fair-minded observer to see that the silence (L) following the asking of the question revealed a lack of an answer, not a rising tide of anger, etc., as claimed by Dawkins.

There was a period (D to E on the timeline) which was perceived differently by the three participants, in part because they were actually doing different things at the time (e.g. Philip Hohnen was being given a guided tour of the house by Mrs Dawkins). When Hohnen returned from the tour, he did not see any evidence of a rapprochement between Dawkins and Brown. Hohnen then negotiated with Dawkins for a continuation of the videoing, with Dawkins agreeing to give a statement.
This timeline harmonizes the recollections of all three persons and shows that the video producer did not manufacture Dawkins' silence, nor was Dawkins' silence due to a rising tide of anger over discovering that he was being interviewed by creationists (this had happened earlier). Hohnen recalls that they parted in good humour. The segment where Dawkins fails to answer the information question is fair (in fact, the period of silent puzzlement was considerably shortened in the From a Frog to a Prince video).
It may be argued that Brown pushed the boundaries by asking the question at all when she had agreed that Dawkins would simply make a statement. However, it was a question begging to be asked after Dawkins' confident speech about the adequacy of natural processes in creating new information.
Philip Hohnen has checked the timeline, and vouches for its accuracy.
Explanatory notes to timeline
Creation Ministries International have an audio tape (we may also have the actual video recording, but it is not currently locatable) of the latter part of the interview, starting from the point at which Dawkins expressed his suspicions (B). This audio tape comprises the sound track from the main video camera. (Another video camera was also running during much of the interview.) A copy of this same audio tape was sent to the anticreationist Glenn Morton, who had previously been sceptical of our account. After hearing this copy, Morton declared: 'I will state categorically that the audio tape of the interview 100% supports Gillian Brown's contention that Dawkins couldn't answer the question.'
The green (lightly shaded) segments of the timeline above represent periods covered on the audio tape. The red (darkly shaded) segments represent periods not covered on the audio tape. The two occurrences of double slashes (//) in the timeline's text boxes represent breaks in recording.
Dawkins' oft-discussed 11-second pause is represented in this timeline by segments L and M, and is referred to on this chart by the term 'silence', rather than the usual term 'pause', in order to differentiate between this and the recording pauses. It is 11 seconds from the end of GB's question until RD's audible intake of breath, and 19 seconds in total from the end of GB's question until the pause in recording (O). That is, L, M and N together comprise 19 seconds. There is approximately seven seconds of silence between RD's audible intake of breath and his request to stop.
In period O on this chart, i.e. after GB asked the information question and RD requested a stop, there was no speaking by anybody until GB said 'Now recording' and RD began speaking again with 'OK. There's a popular misunderstanding …'.
The adaptation of bacteria to feeding on nylon waste
by Don Batten
In 1975, Japanese scientists discovered bacteria that could live on the waste products of nylon manufacture as their only source of carbon and nitrogen.1 Two species, Flavobacterium sp. K172 and Pseudomonas sp. NK87, were identified that degrade nylon compounds. Much research has flowed from this discovery to elucidate the mechanism for the apparently novel ability of these bacteria.2 Three enzymes are involved in Flavobacterium K172: F-EI, F-EII and F-EIII, and two in Pseudomonas NK87: P-EI and P-EII. None of these has been found to have any catalytic activity towards naturally occurring amide compounds, suggesting that the enzymes are completely new, not just modified existing enzymes. Indeed, no homology has been found with known enzymes. The genes for these enzymes are located on plasmids:3 plasmid pOAD2 in Flavobacterium, and two plasmids, pNAD2 and pNAD6, in Pseudomonas. Apologists for materialism latched onto these findings as an example of the evolution of new information by random mutations and natural selection, for example, Thwaites in 1985.4 Thwaites' claims have been repeated by many since, without updating or critical evaluation.
Is the evidence consistent with random mutations generating the new genes?
Thwaites claimed that the new enzyme arose through a frame shift mutation. He based this on a research paper published the previous year where this was suggested.5 If this were the case, the production of an enzyme would indeed be a fortuitous result, attributable to pure chance. However, there are good reasons to doubt the claim that this is an example of random mutations and natural selection generating new enzymes, quite aside from the extreme improbability of such coming about by chance.6
Evidence against the evolutionary explanation includes:
1. There are five transposable elements on the pOAD2 plasmid. When activated, transposase enzymes coded therein cause genetic recombination. Externally imposed stress such as high temperature, exposure to a poison, or starvation can activate transposases. The presence of the transposases in such numbers on the plasmid suggests that the plasmid is designed to adapt when the bacterium is under stress.
2. All five transposable elements are identical, with 764 base pairs (bp) each. This comprises over eight percent of the plasmid. How could random mutations produce three new catalytic/degradative genes (coding for EI, EII and EIII) without at least some changes being made to the transposable elements? Negoro speculated that the transposable elements must have been a late addition to the plasmids, which would explain why they have not changed. But there is no evidence for this, other than the circular reasoning that supposedly random mutations generated the three enzymes and so would have changed the transposase genes if they had been in the plasmid all along. Furthermore, the adaptation to nylon digestion does not take very long (see point 5 below), so the addition of the transposable elements afterwards cannot be seriously entertained.
3. All three types of nylon-degrading genes appear on plasmids and only on plasmids. None appear on the main bacterial chromosomes of either Flavobacterium or Pseudomonas. This does not look like some random origin of these genes—the chance of this happening is low. If the genome of Flavobacterium is about two million bp,7 and the pOAD2 plasmid comprises 45,519 bp, and if there were, say, 5 pOAD2 plasmids per cell (~10% of the total chromosomal DNA), then the chance of getting all three of the genes on the pOAD2 plasmid would be about 0.0015 (see the short calculation sketched at the end of this list). If we add the probability of the nylon-degrading genes of Pseudomonas also only being on plasmids, the probability falls to 2.3 × 10⁻⁶. If the enzymes developed in the independent laboratory-controlled adaptation experiments (see point 5 below) also turn out to reside on plasmids (almost certain, but not yet determined), then attributing the development of the adaptive enzymes purely to chance mutations becomes even more implausible.
4. The antisense DNA strand of the four nylon genes investigated in Flavobacterium and Pseudomonas lacks any stop codons.8 This is most remarkable in a total of 1,535 bases. The probability of this happening by chance in all four antisense sequences is about 1 in 10¹². Furthermore, the EII gene in Pseudomonas is clearly not phylogenetically related to the EII genes of Flavobacterium, so the lack of stop codons in the antisense strands of all the genes cannot be due to any commonality in the genes themselves (or in their ancestry). Also, the wild-type pOAD2 plasmid is not necessary for the normal growth of Flavobacterium, so functionality in the wild-type parent DNA sequences would appear not to be a factor in keeping the reading frames open in the genes themselves, let alone the antisense strands.
Some statements by Yomo et al. express their consternation:
'These results imply that there may be some unknown mechanism behind the evolution of these genes for nylon oligomer-degrading enzymes.'
'The presence of a long NSF (non-stop frame) in the antisense strand seems to be a rare case, but it may be due to the unusual characteristics of the genes or plasmids for nylon oligomer degradation.'
'Accordingly, the actual existence of these NSFs leads us to speculate that some special mechanism exists in the regions of these genes.'
It looks like recombination of codons (base pair triplets), not single base pairs, has occurred between the start and stop codons for each sequence. This would be about the simplest way that the antisense strand could be protected from stop codon generation. The mechanism for such a recombination is unknown, but it is highly likely that the transposase genes are involved. Interestingly, Yomo et al. also show that it is highly unlikely that any of these genes arose through a frame shift mutation, because such mutations (forward or reverse) would have generated lots of stop codons. This nullifies the claim of Thwaites that a functional gene arose from a purely random process (an accident).
5. The Japanese researchers demonstrated that nylon-degrading ability can be obtained de novo in laboratory cultures of Pseudomonas aeruginosa [strain] POA, which initially had no enzymes capable of degrading nylon oligomers.9 This was achieved in a mere nine days! The rapidity of this adaptation suggests a special mechanism for such adaptation, not something as haphazard as random mutations and selection.
6. The researchers have not been able to ascertain any putative ancestral gene to the nylon-degrading genes. They represent a new gene family. This seems to rule out gene duplications as a source of the raw material for the new genes.8
7. P. aeruginosa is renowned for its ability to adapt to unusual food sources—such as toluene, naphthalene, camphor, salicylates and alkanes. These abilities reside on plasmids known as TOL, NAH, CAM, SAL and OCT respectively.2 Significantly, they do not reside on the chromosome (many examples of antibiotic resistance also reside on plasmids).
The chromosome of P. aeruginosa has 6.3 million base pairs, which makes it one of the largest bacterial genomes sequenced. Being a large genome means that only a relatively low mutation rate can be tolerated within the actual chromosome, otherwise error catastrophe would result. There is no way that normal mutations in the chromosome could generate a new enzyme in nine days, and hypermutation of the chromosome itself would result in non-viable bacteria. Plasmids seem to be adaptive elements designed to make bacteria capable of adaptation to new situations while maintaining the integrity of the main chromosome.
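The two probability figures quoted in points 3 and 4 above can be roughly reproduced from the stated numbers. The following is only a back-of-envelope sketch: it assumes genes land uniformly at random anywhere in the cell's DNA and that all codons are equally likely, which are simplifications not taken from the cited papers (the published 1-in-10¹² figure presumably uses the actual base composition, so this estimate is only indicative).

```python
# Back-of-envelope check of the figures in points 3 and 4 (simplified assumptions).

# Point 3: chance that all three Flavobacterium nylon genes lie on pOAD2 plasmids,
# if genes were placed uniformly at random in the cell's DNA.
chromosome_bp = 2_000_000          # assumed genome size (~2 million bp)
plasmid_bp = 45_519                # pOAD2 plasmid size
copies = 5                         # assumed pOAD2 copies per cell
plasmid_fraction = copies * plasmid_bp / chromosome_bp   # ~0.11 of the chromosomal DNA
p_three_on_plasmid = plasmid_fraction ** 3               # ~0.0015

# Point 4: chance that ~1,535 bases (~511 codons) of antisense sequence contain
# no stop codon, if codons were random (61 of the 64 codons are not stops).
codons = 1535 // 3
p_no_stop = (61 / 64) ** codons                          # ~2 x 10^-11

print(f"P(all 3 genes on pOAD2)  ~ {p_three_on_plasmid:.4f}")
print(f"P(no stop codons at all) ~ {p_no_stop:.1e}")
```

The first number matches the 0.0015 quoted above; the second is within a couple of orders of magnitude of the 'about 1 in 10¹²' figure, which is close enough to show that an open antisense reading frame of that length is extraordinarily unlikely to arise by chance.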
Stasis in bacteria
P. aeruginosa was first named by Schroeter in 1872.10 It still has the same features that identify it as such. So, despite being so ubiquitous, so prolific and so rapidly adaptable, this bacterium has not evolved into a different type of bacterium. Note that the number of bacterial generations possible in over 130 years is huge, equivalent to tens of millions of years of human generations, encompassing the origin of the putative common ancestor of ape and man, according to the evolutionary story, indeed perhaps even of all primates. And yet the bacterium shows no evidence of directional change; stasis rules, not progressive evolution. This alone should cast doubt on the evolutionary paradigm. Flavobacterium was first named in 1889, and it likewise still has the same characteristics as originally described.
It seems clear that plasmids are designed features of bacteria that enable adaptation to new food sources or the degradation of toxins. The details of just how they do this remain to be elucidated. The results so far clearly suggest that these adaptations did not come about by chance mutations, but by some designed mechanism. This mechanism might be analogous to the way that vertebrates rapidly generate novel, effective antibodies by hypermutation in B-cell maturation, which does not lend credibility to the grand scheme of neo-Darwinian evolution.11 Further research will, I expect, show that there is a sophisticated, irreducibly complex molecular system involved in plasmid-based adaptation; the evidence strongly suggests that such a system exists. This system will once again, as the black box becomes illuminated, speak of intelligent creation, not chance. Understanding this adaptation system could well lead to a breakthrough in disease control, because specific inhibitors of the adaptation machinery could protect antibiotics from the development of plasmid-based resistance in the target pathogenic microbes.
New plant colours – is this new information?
CMI scientist answers a skeptic
11 July 2000
One skeptic believes that he has found an example of new information arising by mutations and natural selection. Could he
be correct?
Question/statements from skeptic
Since I have some background in genetics and plant breeding, I can tell you that the entire field of plant breeding is based on new information arising from random mutations. New traits do appear at the molecular and morphological level: new proteins, new pigments, etc. These are novelties.
Two parents with blue eyes will generally produce children with blue eyes, and likewise two plants with white flowers will generally produce new plants with white flowers, but sometimes a seedling with red or purple flowers turns up, not because a recessive allele has been revealed, but because a mutation has altered an existing pigment or biochemical pathway to produce something entirely new, that has never existed before. This is NEW INFORMATION.
As an example, there is nothing like an ear of corn in any other species of grass. It seems to be entirely unique in the plant kingdom. And yet there are three or four species of grass, very similar to corn in their overall growth, but with typical grass-like reproductive organs. The funny thing is, they will breed with corn to produce fully fertile offspring. It is clear that a combination of mutation and selection has produced in corn an unusual and entirely novel structure from a very typical grass; in other words, NEW INFORMATION.
Response by Don Batten, Ph.D.
The question comes from someone who does not understand the concept of information. The appearance of a new trait
does not have to involve the addition of information via the DNA coding. In fact, as bioinformatics expert Dr Lee Spetner has
demonstrated (in his book, Not by Chance, Judaica Press), such is so unlikely that it could never be the basis for the
increased information needed for molecules-to-man evolution. Information content is measured not by the number of traits,
but by what is called the specified complexity of a base sequence or protein amino acid sequence. A mutation, being a
random change in highly specified information contained in the nucleic acid base sequence, could almost never do anything
but scramble the information; that is, reduce the information. Now sometimes such a loss of information results in a new trait: for example, purple or red flowers where there were only blue ones before. This would have to be studied at the DNA base sequence level (or the amino acid sequence of the enzyme producing the pigment, or the pigment itself) to show this. For example, a blue pigment could be changed into a red or purple pigment by loss of a side-chain from the basic pigment molecule. Such a change would involve a loss of specified complexity and therefore a loss of information. Even an informationally neutral change could be responsible; this is not to be confused with Kimura's 'neutral' mutation, which has nothing to do with the concept of information, only the effect on survival. Even a change of one amino acid in a protein, not altering information content, can alter energy levels in such a way as to change the visible absorption spectrum, e.g. by reducing the number of consecutive conjugated bonds. And a small change in pH can have a large effect on colour (this effect was overlooked by a group of molecular biologists who managed to get the gene for the blue pigment in hydrangeas into a rose: the rose was not blue, although the pigment was manufactured, because the cell pH was not the same as a hydrangea's!).
Of the many hundreds of antibiotic, herbicide and insecticide resistance mechanisms studied at a biochemical
level, none involve addition of specified complexity in the DNA. Although some are new traits due to mutations, all involve
loss of information. An example is the loss of control over the production of an enzyme that happens to break down penicillin
in Staphylococcus aureus, resulting in the production of greatly increased amounts of the enzyme and thus conferring
resistance to penicillin. Another mode of antibiotic resistance due to mutation is decreased effectiveness of a membrane
transport protein so that the antibiotic is no longer taken up by the cell (but the normal function of the transporter is also

impaired and the bacterium is less fit to survive in the wild). However, much antibiotic resistance seems to be acquired by
the transfer of plasmids from other species of bacteria via conjugation, which of course does not explain the ultimate origin
of the information.
What about the corn story? The questioner is probably correct about the species of grass and the origin of corn. I have no problem with that. Creationists would say that the species that interbreed with corn (maize) are of the same created kind (see Ligers and wholphins? What next?, Q&A: Speciation). However, until the biochemical/genetic basis of the difference between maize and its wild relatives is determined, it cannot be said that the maize inflorescence is due to new information. Loss of information in some base sequences responsible for early steps in inflorescence development could easily account for such seemingly large differences.
It must be noted (again) that creationists do not say that mutations are always harmful, just that they are almost invariably a loss of information (i.e. specified complexity). Sometimes a loss of information can be beneficial, but it is still a loss of information. For example, loss of function of wings in the flightless cormorant of the Galápagos Islands, which can now dive better than its flying cousins, or flightless beetles on a windswept island that are better off because they are less likely to be blown into the sea (see Beetle bloopers).
Evolution needs swags of new information if a microbe really did change into a man over several billion years. The additional new information would take nearly a thousand books of 500 pages each to print the sequence. Random changes cannot account for a page, or even a sentence, of this, let alone all of it. The evolutionist has an incredible faith!
Further reading: In the Beginning Was Information by Dr Werner Gitt (an information scientist in Germany); The Mystery of Life's Origin by Thaxton, Bradley and Olsen (these are thermodynamics experts, and they deal with the origin of information from a thermodynamics point of view, showing the impossibility of natural processes creating the information in living things). See also Q&A: Information Theory.
Is antibiotic resistance really due to increase in information?
22 October 2001; reposted and updated 11 November 2006
In 2001, the responses by Dr Jonathan Sarfati to the PBS Evolution propaganda series induced
mainly favorable responses [and were later incorporated into the book Refuting Evolution 2].
This feedback, from Mikko Ilmari N. of Finland, criticises the responses to the PBS series on
ostensibly scientific grounds. He accuses CMI of bias, falsehood, and misinformation, but fails
to back up his points. The only issue he does attempt to back up is a claim of an information increase that caused increased resistance to antibiotics. But this once again fails to understand the key relationship between information and specified complexity. Once again, supposed evidence for evolution turns out to be better explained by the Creation/Fall model. His letter is printed with point-by-point responses by Dr Jonathan Sarfati (the author of the PBS responses) interspersed, as per normal email fashion. MIN's letter includes quotes from the PBS rebuttal, which are double-indented. Ellipses (…) at the end of one of the paragraphs signal that a mid-sentence comment follows, not an omission.
Just noticed that your ministries have, by assistance of Australian creationist Jonathan Sarfati, responded to the PBS-TV series Evolution.
The Australian creationist Jonathan Sarfati was not just offering assistance; it was part of his (my) job, since I'm part of Creation Ministries International.
The responses have got multiple omissions and scientific errors,
Really? Let's see if that claim stands up to scrutiny.
No special problems. I just noticed that [CMI] has provided misinformation in its PBS-rebuttals, and I suspect that reason for doing so is [CMI's] fundamentalist-Christian bias, strictly requiring separately created species, young earth and many other features with which, [CMI's] personnel sure is also familiar with.
As covered above, CMI staff are not the only ones with biases. It's just a convenient excuse to avoid having to actually refute the scientific evidence for a young Earth. Note also, as shown in Q&A: Speciation, we do not believe that every one of today's species was separately created. Rather, we predict rapid speciation within a created kind, not requiring any new genetic information but instead recombinations of already existing information and information-losing mutations.
I'll take Sarfati's writings on poison newts as an example (from [this page], actually):
Poison newt
The program moves to Oregon, where there were mysterious deaths of campers, with newts found in their sleeping bags. It turns out that these Rough-skinned Newts (Taricha granulosa) secrete a deadly toxin from their skin glands, so powerful that even a pinhead-sized amount can kill an adult human. They are the deadliest salamanders on Earth. So scientists investigated why this newt should have such a deadly toxin.
Up to this point, still OK.
They theorized that a predator was driving this evolution, and they found that the Common Garter Snake (Thamnophis
sirtalis) was the newts only predator. Most snakes will be killed, but the Common Garter Snake just loses muscle control for
a few hours, which could of course have serious consequences.
Here the Evolution-program makes a good point how scientific research of evolution can be done, with satisfying results,
indeed.
And as I pointed out, the assumption of goo-to-you evolution was unnecessary and in fact is irrelevant; this is perfectly well explained by the Creation/Fall model. Note that the creationist Edward Blyth talked about natural selection 25 years before Darwin wrote Origin.
But the newts were also driving the evolution of the snakes – they also had various degrees of resistance to the newt toxin.
Are their conclusions correct? Yes, they are probably correct that the predators and prey are driving each other's changes, and that they are the result of mutations and natural selection. Although this might surprise the ill-informed anti-creationist, this shouldn't be so surprising to anyone who understands the young age model.
Why the involvement of mutations and natural selection should surprise so-called ill-informed anti-creationists?
Because they present a caricature of creationism that pretends that we believe in fixity of species.
So is this proof of particles-to-people evolution? Not at all. There is no proof that the changes increase genetic information.
In fact, the reverse seems to be true.
[CMI's] text slips into obvious falsehoods. The main point of the Evolution-program here is that (a) other species form a large
part of the environment of one species,
Since when did we deny this? The problem is, this has nothing to do with particles-to-people evolution. So where is the
obvious falsehood?

(b) mutations, recombinations and natural selection is the clue how the species absorbs information from it's environment
thru generations following each other.
This is gobbledygook. There is no information from the environment to absorb! One wonders what meaning you assign to
the term information. I have a fair idea where you picked up this nonsense, and the source of this misinformation is refuted
in detail by the article The Problem of Information for the Theory of Evolution <www.trueorigin.org/dawkinfo.asp>.
(As a side point, it is very important to notice that this increase of information in a species does not conflict the Second Law
of Thermodynamics, as life on planet Earth is energetically open system.)
As a Ph.D. physical chemist, needing no instruction in thermodynamics, I'm always amused by anticreationists, mainly biologists and geologists, who think they know something about this topic when they obviously don't. As I point out in The Second Law of Thermodynamics: Answers to Critics, an open system is necessary but not sufficient for an increase in information content.
Since the PBS episode provides no explanation of the poison's activity, it's fair to propose a possible scenario (it would be hypocritical to object, since evolutionists often produce far more hypothetical just-so stories): suppose the poison normally reacts with a particular neurotransmitter to produce something that halts all nerve impulses, resulting in death. But if the snake had a mutation reducing production of this neurotransmitter, the poison would have fewer targets to act upon. Another possibility is a mutation altering its precise structure so that its shape no longer matches the protein.
Either way, the poison would be less effective. But either reduced production of the neurotransmitter or a less precise shape would slow nerve impulses, meaning that muscle movement is slower.
Rather than producing these just-so stories and trying to dig an excuse to do so from evolutionists,
This was perfectly legitimate given the available information. It is far more legitimate than the evolutionary just-so stories that
you evidently tolerate, because my explanation of an information-losing mutation is based on the observed fact that the
more resistant snakes suffer from a disability.
could [CMI] PLEASE make a note that protein structures can be examined to see if these more-effective poisons actually
show loss of information or less-complicated structures than less-effective poisons.
Indeed they can be, and in every case they have shown a reduced specificity, which may be beneficial. So please provide actual evidence; patronising assertions are unimpressive.
Indeed, I have come across the similar kind of (creationist) claim than yours about antibiotic resistence earlier. Finnish
creationist Dr. Pekka Reinikainen claimed that bacteria with better antibiotic resistence always show having less information
or simpler structure.
I haven't heard of Dr Reinikainen, but from the limited amount of information you provide, he seems to know what he's talking about.
An article from a popular science magazine, concerning this antibiotic-resistence showed that Reinikainen had it wrong.
Increase of information and new structural complexity has been observed in not just some, but in fact, many cases.
As will be shown, you have failed to demonstrate this in even one case! You would benefit by reading Dr Spetner's book (Not by Chance!) and his more detailed explanations of information in terms of specified complexity (Part 1 <www.trueorigin.org/spetner1.asp> & Part 2 <www.trueorigin.org/spetner2.asp>) on the True Origins site, also hyperlinked on Q&A: Information.
The original magazine (which is not at hand now) was a Finnish popular scientific magazine Tiede 2000 i.e. Science 2000. It
had some non-technical examples of antibiotic resistance which however showed clearly that in many cases we cannot
honestly call the evolution of antibiotic resistance, "a loss of information". Instead, I have put (as an attachment) an article by
Petrosino, Cantu and Palzkoll, titled 'β-Lactamases: protein evolution in real time'
This was Trends in Microbiology 6(8):323–327, August 1998. Some bacteria produce β-lactamases to destroy β-lactam antibiotics, which include penicillin.
You may judge it and check if its always about loss of information as frequently claimed by some creationists. (Or maybe
you accept increased information by evolution in this case without any further problems your original article was about
poisonous newts, indeed.)
Right, I read this paper as you requested. But despite its title, it does not support your points, but ours! For example, one mechanism featured in the article was acquisition of genes from other bacteria. I.e. the genes already existed; hopefully it should be obvious that this is irrelevant to the origin of these genes in the first place, which is what goo-to-you evolution is supposed to explain! The other clue is the statement 'many of the mutations located around the active site pocket result in increased catalytic activity for hydrolysis of extended-spectrum substrates. Mutations far from the active site also increase extended spectrum catalysis.' This provided an advantage to the bacteria containing these mutations, because they could destroy more types of antibiotics. But here was yet another example of an information loss conferring an advantage.
To understand this properly, it's necessary to realize that enzymes are usually tuned very precisely to only one type of molecule (the substrate), and this fine-tuning is necessary for living cells to function. Mutations reduce specificity and hence would reduce the effectiveness of the enzyme's primary function, but would enable it to degrade other substrates too. But this loss of specificity means loss of information content. Dr Spetner analyzes this with rigorous mathematics using standard definitions of information. He presents the two extremes: an enzyme that has activity for only one substrate out of n possible ones and zero for the others (here the information gain is log₂n), and one with no discrimination between any of the substrates (here the information gain is zero). Real enzymes are somewhere in between, and Dr Spetner shows how to calculate their information. As explained above, living organisms require enzymes to do a specific job, so their information content is very close to the maximum in case 1. Quite close to the other extreme are ordinary acids or alkalis, which hydrolyse many compounds. These have wonderful extended-spectrum catalytic activity, but are not specific, so have low information content, and so would be useless for the precise control required for biological reactions.
Comparison of ribitol, xylitol and arabitol activities of wild and mutant ribitol dehydrogenase (from Lee Spetner, True Origins website).
All observed mutations reduce the specificity and trend towards the second extreme case. The trend described in the β-lactamases is just the same as that described in ribitol dehydrogenase, the enzyme some bacteria use to metabolize ribitol, a derivative of a type of sugar (see the figure). That is, the mutant acquired the new ability to metabolize xylitol, so it was thought to be an example of new information arising, and that it could trend towards a highly specific xylitol dehydrogenase. But on further inspection, it turned out not only to reduce its ability to perform its original specific function of metabolizing ribitol, but also to increase the ability to synthesize lots of other things, including arabitol. The trend is towards loss of specificity and producing an ordinary broad-spectrum catalyst, i.e. from case 1 to case 2. A graph of wild v. mutant β-lactamase activity on various antibiotics would be essentially the same as this graph of wild v. mutant ribitol dehydrogenase activity on the different types of sugars.
In conclusion, there is nothing to support any information gain at all. But evolution posits that the information content of the simplest living organisms, the mycoplasma with 580,000 letters (482 genes), was increased to, say, the 3 billion letters equivalent in man. If this were so, we should be able to observe plenty of examples of information gain without intelligent input. But we have yet to observe even one, including the example you cited.
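Dr Spetner's two extremes can be illustrated numerically. The interpolation below (log₂n minus the entropy of the normalised activity profile) is a simple illustration of the idea, not necessarily his exact formula, and the activity numbers are made up for the example:

```python
import math

def specificity_information(activities):
    """Bits of specificity for an enzyme over n possible substrates:
    log2(n) minus the entropy of its normalised activity profile.
    A strict one-substrate specialist scores log2(n); an indiscriminate
    catalyst (equal activity on everything) scores 0."""
    total = sum(activities)
    probs = [a / total for a in activities if a > 0]
    entropy = -sum(p * math.log2(p) for p in probs)
    return math.log2(len(activities)) - entropy

wild_type = [100, 1, 1]   # hypothetical: highly specific for its own substrate
mutant = [40, 30, 30]     # hypothetical: broader spectrum, less specific

print(round(specificity_information(wild_type), 2))  # ~1.43 bits (maximum is log2(3) ~ 1.58)
print(round(specificity_information(mutant), 2))     # ~0.01 bits, close to zero
```

On this kind of measure, a mutation that broadens an enzyme's substrate range while blunting its original activity moves it towards the zero-information extreme, which is the pattern described above for both ribitol dehydrogenase and the β-lactamase mutants.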
WHAT IS THE DIFFERENCE BETWEEN ORDER AND COMPLEXITY?
The treasures of the snow
Do pretty crystals prove that organization can arise spontaneously?
by Martin Tampier
Snow crystals are some of the most beautiful shapes
that nature has to offer, and no two flakes are alike.
Many evolutionists have tried to claim that the order of a crystal, which forms due to its atomic structure, is proof that something can come out of nothing, due simply to natural laws. But closer examination of this argument shows it does not hold up to scientific scrutiny.
Modern snowflake research
Several scientists are trying to grow their own crystals
to understand and direct their development.
Applications of this research reach way beyond
meteorology, with the aim of controlling the growth of other crystals, such as silicon structures, for the semiconductor
industry.
So why do snow crystals form this shape? Does it require special design? No, their shape is due to the properties of their building blocks, the water molecules (H₂O). These are bent and polar (i.e. with positively and negatively charged ends). When they come together in solid form, they tend to form the lowest-energy structure they can,1 which is crystals with hexagonal (six-fold) symmetry.2 By contrast, carbon dioxide (CO₂), a linear and more symmetrical molecule, forms cubic crystals in its solid form (dry ice).
We now know that not only temperature, but also humidity influences crystal formation and shape. The beautiful six-legged star-like crystals grow in air warmer than −3°C. Between −3°C and −10°C, snow falls as little prisms. Between −10°C and −22°C, it is little stars again, and below that, prisms once more.
Nevertheless, scientists still cannot tell exactly why snow crystal shapes change so much with temperature. These shapes
depend on how water vapour molecules are incorporated into the growing ice crystal, and the physical processes governing
crystal growth are complex and not well understood yet.3
Snowflakes – proof of evolution?4
Photo by Martin Tampier
Sometimes evolutionists claim that snowflakes show that order can
arise from disorder, and more complex structures from simple ones,
based purely on the inherent physical properties of matter. Therefore,
the reasoning goes, life could have arisen from simple molecules that
organize themselves in a way that ultimately leads to more complex
structures, and eventually the first living cell.5
But crystals are nothing like a living cell. Formed by the withdrawal of heat from water, they are dead structures that contain no more information than is in their component parts, the water molecules. Life forms, on the other hand, came into existence, evolutionists believe, through the addition of heat energy to some postulated primordial soup. Not only are these processes very different, but life requires the emergence of new information (a code) in order to take over the functions of organization and reproduction of a cell. There is therefore no analogy between snow crystals and the far, far greater complexity of living organisms.
Fun stuff
An excellent snowflake website is www.snowcrystals.com. You can download and use many snowflake photos to create your own calendar, greeting card or other present. Apart from beautiful photos, the site will tell you just about everything you ever wanted to know about snowflakes.
More importantly, the organization in proteins and DNA is not caused by the properties of the constituent amino acids and nucleotides themselves, any more than forces between ink molecules make them join up into letters and words. Michael Polanyi (1891–1976), a former chairman of physical chemistry at the University of Manchester (UK) who turned to philosophy, confirmed this:
'As the arrangement of a printed page is extraneous to the chemistry of the printed page, so is the base sequence in a DNA molecule extraneous to the chemical forces at work in the DNA molecule. It is this physical indeterminacy of the sequence that produces the improbability of occurrence of any particular sequence and thereby enables it to have a meaning – a meaning that has a mathematically determinate information content.'6
Snow crystals are not direct evidence for creation, either. Nevertheless, the philosophical argument can be made that a universe without a designer cannot logically be expected to create such order out of disorder.7 So when we observe order and design in the universe, as exemplified by the six-cornered snowflake, doesn't this demand a designer who supplies this order and design?8 Of course, the physical properties of water are known to be necessary preconditions for life to exist on Earth, which testifies to a designer who conceived the universe and its physical laws as conducive to life.9 For example, snow forms an insulating layer on the ground that protects plants and animals below it from the much harsher temperatures above. But whereas this could have been achieved with very simple shapes, such as round or square disks, the lavish beauty and variety in snow crystals shows the designer's loving creativity in making snow not only very useful, but also wonderful to look at! As even evolutionists admit, 'One could almost convince oneself that snowflakes constitute a demonstration of supernatural power.'5
No two alike?
Actually, smaller snowflakes that take the shape of hexagonal prisms look pretty much the same. On the other hand, larger, star-shaped crystals are all different. To understand why, think of how many different ways 15 books can be arranged on a bookshelf. You have 15 choices for the first book, 14 for the second, 13 for the third, etc. The total number of possibilities is thus 15 × 14 × 13 × … (i.e. 15!), or over a trillion ways to arrange those books. Crystals can easily have 100 or more features that can be recombined in different ways, leading to at least a staggering 10¹⁵⁸ different possibilities. This is 10⁷⁰ times the number of atoms in the entire universe!1
Adapted from www.its.caltech.edu/~atomic/snowcrystals/alike/alike.htm.
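For anyone who wants to check the arithmetic in the box above: 15! is a little over a trillion, and if the '100 or more features' are treated as 100 interchangeable parts, their possible orderings come to about 10¹⁵⁸ (the assumption that the published figure was obtained this way is mine).

```python
import math

books = math.factorial(15)        # orderings of 15 books on a shelf
features = math.factorial(100)    # orderings of 100 crystal features

print(f"{books:,}")                               # 1,307,674,368,000 (over a trillion)
print(f"about 10^{round(math.log10(features))}")  # about 10^158
```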

The Snowflake Man from Vermont


Astronomer Johannes Kepler seems to have been the first scientist to examine snow crystals. He wrote a booklet on the
subject in 1611.1 But the real Snowflake Man was Wilson Alwyn Bentley, born 1865 in Vermont, USA. Bentley was the first to
photograph snowflakes.2 He published more than 5,000 photographs, and wrote numerous articles on snow, rain, dew and
other natural phenomena related to water and precipitation.
WA Bentley was the first to photograph snowflakes. He dedicated his life to studying snow, dew and rain, and although he was a farmer without formal scientific training, he was years ahead of his time with his meteorological hypotheses.
Bentley relates that it was his mother who instilled the love of scientific investigation into him: he was home schooled until he was 14 years old, and in his quest for learning he even read an encyclopedia! 'It was my mother that made it possible for me, at fifteen, to begin the work to which I have devoted my life. She had a small microscope, which she had used in her school teaching. When the other boys of my age were playing with popguns and sling-shots, I was absorbed in studying things under this microscope: drops of water, tiny fragments of stone, a feather dropped from a bird's wing, a delicately veined petal from some flower. But always, from the very beginning, it was snowflakes that fascinated me most.'
Bentley knew nothing about photography and for the longest time could not manage to take pictures of snowflakes. But through persistence and learning by trial and error he learned how to work rapidly before the ice crystal changed shape, how to use transmitted light by pointing the camera to the sky, and how to get sharpness of detail on the crystal by using a large f-stop. Finally, during a January snowstorm in 1885, he obtained the first photomicrographs ever taken of an ice crystal.
He kept detailed meteorological records, and pondered over the meaning of the shapes and sizes of the crystals and why they often varied from one storm to the next. Starting in 1898, he published his findings in scientific journals. Bentley greatly contributed to what is today common knowledge, i.e. that temperature changes and movements in the storm clouds affect the form and type of the crystals formed. With his research, he was years ahead of the meteorological thinking of his time.
Bentley loved people, but was misunderstood by them, and the scientific world appreciated (or caught up with) the value of his work only much later. When he convened a meeting in his hometown to present his work, only six people attended.
One of his National Geographic (Jan. 1923) articles, 'The magic beauty of snow and dew', is accompanied by over 100 photomicrographs of ice crystals, frost patterns, and dew.3 Although his photos were sold for jewellery and other purposes, Bentley did not become rich through his work. But he said that he would not change places with Ford or Rockefeller: he felt he was serving the Great Designer, capturing the evanescent loveliness which, but for him, would be unappreciated, even unseen, by most of his fellow men. And with that role he was content. When he died of pneumonia in 1931, his obituary read, 'Truly, greatness blooms in quiet corners and flourishes under strange circumstances. For Wilson Bentley was a greater man than many a millionaire who lives in luxury of which the Snowflake Man never dreamed.'
Creation question: Snowflakes
Editor's note: As Creation magazine has been continuously published since 1978, we are publishing some of the articles from the archives for historical interest, such as this. For teaching and sharing purposes, readers are advised to supplement these historic articles with more up-to-date ones available by searching creation.com.
Chilling Facts
Snow covers about 23 per cent of the Earth's surface, permanently or temporarily.
The lowest air temperature ever recorded was at Vostok II in Antarctica, 3,420 metres (11,218 feet) above sea level. The temperature dropped to −88.3 degrees Celsius (−127 degrees Fahrenheit).
The size and shape of snow crystals depend mainly on the temperature of their formation and the amount of water vapour available at deposition. At temperatures between 0 and −3 degrees Celsius, thin hexagonal plates form. Between −3 and −5 degrees, needles form. At −25 to −30 degrees, the crystal shape is a hollow prism.
Q: Snowflakes show beautiful design patterns, which appear highly ordered, and which arise by themselves under simple freezing conditions. Since this shows order arising from disorder, doesn't this mean that the ordered patterns of complex life could arise from simpler chemicals?
A: In fact, there is no parallel between the two issues at all. To put it simply, water forming snowflakes is 'doing what comes naturally', given the properties of the system. There is no need for any external information or programming to be added to the system; the existing properties of the water molecule and the atmospheric conditions are enough to give rise inevitably to snowflake-type patterns.
However, there is no tendency for simple organic molecules in themselves to form into the precise sequences needed to form the long-chain, information-bearing molecules found in living systems. That is because the properties of the 'finished product' are not programmed in the components of the system. It takes the addition of some extra information, either by an intelligent mind at work or a programmed machine. What would be analogous is if you saw a doily
crocheted into the pattern of a snowflake. There is no natural, spontaneous tendency for the components of the system (for example, wool or cotton fibres) to assume that shape. The pattern has to be imposed by external information, either by the operation of a mind or a programmed machine. So whenever you see a snowflake doily, you instinctively recognize this fact and see it as the result of creation, as you should when you contemplate a section of a chromosome: the raw ingredients are not sufficient without a source of information. In living things, that information has come from the parent organism (a programmed mechanism) which arose from its parent which arose…. You might find that the doily has been crocheted by a programmed machine in a factory, which might itself have been built by another machine, but eventually that information had to arise in a mind. A snowflake pattern as water freezes may appear beautiful, but it is not the same thing at all, because no external programming or information has to be applied.
A similar issue (sometimes raised by evolutionists who should know better) is that of salt crystal formation as a warm
1. ABCABCABCABCABCABCABC
2. A CAT SAT ON THE MAT
Both are 'ordered', but only type 2 resembles the ordering in, say, a protein molecule. Chop the first sequence in half, and the two halves are essentially the same. Break a crystal of salt in two, and you see the same effect. Chop a protein (for example, haemoglobin) molecule in half and you no longer have haemoglobin; the two halves don't resemble one another. That is because the ordering is like that in the type 2 example above; chop that sentence in half and it loses all its meaning. To put it another way, as a salt crystal grows and grows, it is like continuing the type 1 sequence above. The sequence gets longer, the crystal gets bigger (simply more of the same), but not more complex. For simple organisms to become more complex (or simple chemicals to become a living thing) would be like the type 2 sentence becoming a whole story about cats, for example.
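The difference between the two kinds of 'order' can be made concrete. A minimal sketch (the two strings are simply stand-ins for a repetitive crystal lattice and a specified sequence):

```python
crystal = "ABCABC" * 4                  # type 1: repetitive order, like a growing crystal
message = "A CAT SAT ON THE MAT"        # type 2: a specified, aperiodic sequence

print(crystal[:12] == crystal[12:])     # True: chop it in half and each half is the same pattern
print(message[:10], "/", message[10:])  # 'A CAT SAT / ON THE MAT': neither half is the original message
```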
CONCLUSION
To compare snowflake or salt crystal formation to any assumed evolutionary growth in complexity is like comparing chalk with cheese. Examining the two simply highlights the need for external information before biological order will arise, which is a strong argument for creation.
HOW DOES INFORMATION THEORY SUPPORT CREATION?
Information, science and biology
by Werner Gitt
Summary
Energy and matter are considered to be basic universal quantities. However, the concept of information has become just as
fundamental and far-reaching, justifying its categorisation as the third fundamental quantity. One of the intrinsic
characteristics of life is information. A rigorous analysis of the characteristics of information demonstrates that living things
intrinsically reflect both the mind and will of their designer.
Information confronts us at every turn both in technological and in natural systems: in data processing, in communications
engineering, in control engineering, in the natural languages, in biological communications systems, and in information
processes in living cells. Thus, information has rightly become known as the third fundamental, universal quantity. Hand in
hand with the rapid developments in computer technology, a new field of studythat of information sciencehas attained a
significance that could hardly have been foreseen only two or three decades ago. In addition, information has become an
interdisciplinary concept of undisputed central importance to fields such as technology, biology and linguistics. The concept
of information therefore requires a thorough discussion, particularly with regard to its definition, with understanding of its
basic characteristic features and the establishment of empirical principles. This paper is intended to make a contribution to
such a discussion.
Information: a statistical study
With his 1948 paper entitled 'A Mathematical Theory of Communication', Claude E. Shannon was the first to devise a mathematical definition of the concept of information. His measure of information, which is given in bits (binary digits), possessed the advantage of allowing quantitative statements to be made about relationships that had previously defied precise mathematical description. This method has an evident drawback, however: information according to Shannon does not relate to the qualitative nature of the data, but confines itself to one particular aspect that is of special significance for its technological transmission and storage. Shannon completely ignores whether a text is meaningful, comprehensible, correct, incorrect or meaningless. Equally excluded are the important questions as to where the information comes from (transmitter) and for whom it is intended (receiver). As far as Shannon's concept of information is concerned, it is entirely irrelevant whether a series of letters represents an exceptionally significant and meaningful text or whether it has come about by throwing dice. Yes, paradoxical though it may sound, considered from the point of view of information theory, a random sequence of letters possesses the maximum information content, whereas a text of equal length, although linguistically meaningful, is assigned a lower value.
The definition of information according to Shannon is limited to just one aspect of information, namely its property of expressing something new: information content is defined in terms of newness. This does not mean a new idea, a new thought or a new item of information (that would involve a semantic aspect), but relates merely to the greater surprise effect that is caused by a less common symbol. Information thus becomes a measure of the improbability of an event. A very improbable symbol is therefore assigned a correspondingly high information content.
Before a source of symbols (not a source of information!) generates a symbol, uncertainty exists as to which particular symbol will emerge from the available supply of symbols (for example, an alphabet). Only after the symbol has been generated is the uncertainty eliminated. According to Shannon, therefore, the following applies: information is the uncertainty that is eliminated by the appearance of the symbol in question. Since Shannon is interested only in the probability of occurrence of the symbols, he addresses himself merely to the statistical dimension of information. His concept of information is thus confined to a non-semantic aspect. According to Shannon, information content is defined such that three conditions must be fulfilled:
Summation condition: The information contents of mutually independent symbols (or chains of symbols) should be capable of addition. The summation condition views information as something quantitative.
Probability condition: The information content to be ascribed to a symbol (or to a chain of symbols) should rise as the level of surprise increases. The surprise effect of the less common 'z' (low probability) is greater than that of the more frequent 'e' (high probability). It follows from this that the information content of a symbol should increase as its probability decreases.
The bit as a unit of information: In the simplest case, when the supply of symbols consists of just two symbols, which, moreover, occur with equal frequency, the information content of one of these symbols should be assigned a unit of precisely 1 bit. The following empirical principle can be derived from this:
Theorem 1: The statistical information content of a chain of symbols is a quantitative concept. It is given in bits (binary digits).
According to Shannon's definition, the information content of a single item of information (an item of information in this context merely means a symbol, character, syllable, or word) is a measure of the uncertainty existing prior to its reception. Since the probability of its occurrence may only assume values between 0 and 1, the numerical value of the information content is always positive. The information content of a plurality of items of information (for example, characters) results (according to the summation condition) from the summation of the values of the individual items of information. This yields an important characteristic of information according to Shannon:
Theorem 2: According to Shannon's theory, a disturbed signal generally contains more information than an undisturbed signal, because, in comparison with the undisturbed transmission, it originates from a larger quantity of possible alternatives.
Shannon's theory also states that information content increases directly with the number of symbols. How inappropriately such a relationship describes actual information content becomes apparent from the following situation: if someone uses many words to say virtually nothing, then, according to Shannon, in accordance with the large number of letters, this utterance is assigned a very high information content, whereas the utterance of another person, who is skilled in expressing succinctly that which is essential, is ascribed only a very low information content.
Furthermore, in its equation of information content, Shannon's theory uses the factor of entropy to take account of the different frequency distributions of the letters. Entropy thus represents a generalised but specific feature of the language used. Given an equal number of symbols (for example, languages that use the Latin alphabet), one language will have a higher entropy value than another language if its frequency distribution is closer to a uniform distribution. Entropy assumes its maximum value in the extreme case of uniform distribution.
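In modern notation, the definitions described above correspond to the standard Shannon formulas (stated here in textbook form rather than quoted from this article): for a symbol x_i occurring with probability p_i,

```latex
I(x_i) = \log_2 \frac{1}{p_i} = -\log_2 p_i \quad \text{[bits]}, \qquad
H = \sum_{i=1}^{n} p_i \log_2 \frac{1}{p_i} \quad \text{[bits/symbol]}.
```

A rare symbol (small p_i) carries more bits, which is the probability condition; the information of independent symbols adds, which is the summation condition; and H reaches its maximum, log₂n, when all n symbols are equally probable, which is the uniform-distribution maximum just mentioned.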
Symbols: a look at their average information content
If the individual symbols of a long sequence of symbols are not equally probable (for example, text), what is of interest is the
average information content for each symbol in this sequence as well as the average value over the entire language. When
this theory is applied to the various code systems, the average information content for one symbol results as follows:
In the German language: I = 4.113 bits/letter
In the English language: I = 4.046 bits/letter
In the dual system: I = 1 bit/digit
In the decimal system: I = 3.32 bits/digit
In the DNA molecule: I = 2 bits/nucleotide
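The equal-probability entries in this list are simply log₂ of the alphabet size; the German and English figures additionally require the measured letter frequencies. A quick check of the former (only a sketch, using the alphabet sizes given above):

```python
import math

for name, symbols in [("dual (binary) system", 2),
                      ("DNA alphabet", 4),
                      ("decimal system", 10)]:
    # With equally probable symbols, the average information is log2(alphabet size)
    print(f"{name}: {math.log2(symbols):.2f} bits/symbol")
# -> 1.00, 2.00 and 3.32 bits/symbol, matching the values quoted above.
# The German (4.113) and English (4.046) figures come from the actual
# letter-frequency distributions, i.e. H = sum(p_i * log2(1/p_i)).
```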
The highest information density
The highest information density known to us is that of the DNA (deoxyribonucleic acid) molecules of living cells. This chemical storage medium is 2 nm in diameter and has a 3.4 nm helix pitch (see Figure 1). This results in a volume of 10.68 × 10⁻²¹ cm³ per spiral. Each spiral contains ten chemical letters (nucleotides), resulting in a volumetric information density of 0.94 × 10²¹ letters/cm³. In the genetic alphabet, the DNA molecules contain only the four nucleotide bases, that is, adenine, thymine, guanine and cytosine. The information content of such a letter is 2 bits/nucleotide. Thus, the statistical information density is 1.88 × 10²¹ bits/cm³.
Proteins are the basic substances that compose living organisms and include, inter alia, such important compounds as enzymes, antibodies, haemoglobins and hormones. These important substances are both organ- and species-specific. In the human body alone, there are at least 50,000 different proteins performing important functions.
Their structures must be coded just as effectively as the chemical processes in the cells, in which synthesis must take place
with the required dosage in accordance with an optimised technology. It is known that all the proteins occurring in living
organisms are composed of a total of just 20 different chemical building blocks (amino acids). The precise sequence of
these individual building blocks is of exceptional significance for life and must therefore be carefully defined. This is done
with the aid of the genetic code. Shannon's information theory makes it possible to determine the smallest number of letters that must be combined to form a word in order to allow unambiguous identification of all amino acids. With 20 amino acids, the average information content is 4.32 bits/amino acid. If words are made up of two letters (doublets), with 4 bits/word, these contain too little information. Quartets would have 8 bits/word and would be too complex. According to information theory, words of three letters (triplets), having 6 bits/word, are sufficient and are therefore the most economical method of coding. Binary coding with two chemical letters is also, in principle, conceivable. This, however, would require a quintet to represent each amino acid and would be 67 per cent less economical than the use of triplets.
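The density and coding figures in this section can be re-derived from the stated dimensions. A short sketch (treating one helical turn as a cylinder, which is what the quoted volume implies; the input numbers are those given in the text):

```python
import math

# Volumetric information density of DNA
radius_cm = 1.0e-7                  # 2 nm diameter -> 1 nm radius
pitch_cm = 3.4e-7                   # 3.4 nm helix pitch
turn_volume = math.pi * radius_cm**2 * pitch_cm   # ~1.07e-20 cm^3 per turn
letters_per_cm3 = 10 / turn_volume                # ten nucleotides per turn -> ~0.94e21
bits_per_cm3 = 2 * letters_per_cm3                # 2 bits/nucleotide -> ~1.88e21

# Codon length needed to specify 20 amino acids
bits_per_amino_acid = math.log2(20)               # ~4.32 bits
print(f"{turn_volume:.2e} cm^3, {bits_per_cm3:.2e} bits/cm^3, {bits_per_amino_acid:.2f} bits")
print(math.log2(4**2), math.log2(4**3), math.log2(2**5))  # doublet 4, triplet 6, binary quintet 5 bits
```

A doublet of the four DNA letters carries only 4 bits (too few for 20 amino acids at 4.32 bits each), a triplet carries 6 bits, and a binary quintet carries enough bits but needs five symbols instead of three, which is the '67 per cent less economical' comparison above.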

Figure 1. The DNA moleculethe universal storage medium of natural systems. A short
section of a strand of the double helix with sugar-phosphate chain reveals its chemical
structure (left). The schematic representation of the double helix (right) shows the base pairs
coupled by hydrogen bridges (in a plane perpendicular to the helical axis).
Computer chips and natural storage media
Figures 1, 2 and 3 show three different storage technologies: the DNA molecule, the core memory, and the microchip. Let's take a look at these.
Core memory: Earlier core memories were capable of storing 4,096 bits in an area of 6,400 mm² (see Figure 2). This corresponds to an area storage density of 0.64 bits/mm². With a core depth of 1.24 mm (storage volume 7,936 mm³), a volumetric storage density of 0.52 bits/mm³ is obtained.

Figure 2. Detail of the TR440 computer's core-memory matrix (Manufacturer: Computer Gesellschaft Konstanz).
1-Mbit DRAM: The innovative leap from the core memory to the semiconductor memory is expressed in striking figures in
terms of storage density; present-day 1-Mbit DRAMs (see Figure 3) permit the storage of 1,048,576 bits in an area of
approximately 50 mm², corresponding to an area storage density of 21,000 bits/mm². With a thickness of approximately 0.5 mm, we thus obtain a volumetric storage density of 42,000 bits/mm³. The megachip surpasses the core memory in terms of
area storage density by a factor of 32,800 and in terms of volumetric storage density by a factor of 81,000.
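The two ratios quoted for the man-made technologies can be checked directly from the figures above; the Python sketch below is purely illustrative.

# Core memory: 4,096 bits on 6,400 mm^2, storage volume 7,936 mm^3
core_area = 4096 / 6400            # 0.64 bits/mm^2
core_vol = 4096 / 7936             # ~0.52 bits/mm^3

# 1-Mbit DRAM: 1,048,576 bits on ~50 mm^2, ~0.5 mm thick
dram_area = 1_048_576 / 50         # ~21,000 bits/mm^2
dram_vol = dram_area / 0.5         # ~42,000 bits/mm^3

print(round(dram_area / core_area))   # ~32,800 (area storage density factor)
print(round(dram_vol / core_vol))     # ~81,000 (volumetric storage density factor)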

Figure 3. The 1-Mbit DRAM: a dynamic random-access memory for 1,048,576 bits.
DNA molecule: The carriers of genetic information, which perform their biological functions throughout an entire life, are
nucleic acids. All cellular organisms and many viruses employ DNAs that are twisted in an identical manner to form double
helices; the remaining viruses employ single-stranded ribonucleic acids (RNA). The figures obtained from a comparison with
man-made storage devices are nothing short of astronomical if one includes the DNA molecule (see Figure 1). In this super
storage device, the storage density is exploited to the physico-chemical limit: its value for the DNA molecule is 45 × 10¹² times that of the megachip. What is the explanation for this immense difference of 45 trillion between VLSI technology and natural systems? There are three decisive reasons:
1. The DNA molecule uses genuine volumetric storage technology, whereas storage in computer devices is area-oriented. Even though the structures of the chips comprise several layers, their storage elements only have a two-dimensional orientation.
2. Theoretically, one single molecule is sufficient to represent an information unit. This most economical of technologies has been implemented in the design of the DNA molecule. In spite of all research efforts on miniaturisation, industrial technology is still within the macroscopic range.
3. Only two circuit states are possible in chips; this leads to exclusively binary codes. In the DNA molecule, there are four chemical symbols (see Figure 1); this permits a quaternary code in which one state already represents 2 bits.
The knowledge currently stored in the libraries of the world is estimated at 10¹⁸ bits. If it were possible for this information to be stored in DNA molecules, 1 per cent of the volume of a pinhead would be sufficient for this purpose. If, on the other hand, this information were to be stored with the aid of megachips, we would need a pile higher than the distance between the earth and the moon.
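The factor of 45 × 10¹² and the pile-to-the-moon comparison follow from the same numbers. A minimal Python sketch, assuming the 0.5 mm chip thickness and 1-Mbit capacity given above, and taking the mean Earth-Moon distance as roughly 384,000 km:

dna_bits_per_mm3 = 1.88e21 / 1000       # 1.88e21 bits/cm^3 -> ~1.9e18 bits/mm^3
dram_bits_per_mm3 = 42_000
print(dna_bits_per_mm3 / dram_bits_per_mm3)   # ~4.5e13, i.e. about 45 x 10^12

# Storing the ~1e18 bits of the world's libraries on 1-Mbit chips
chips = 1e18 / 1_048_576                # ~9.5e11 chips
pile_km = chips * 0.5e-6                # 0.5 mm per chip -> ~477,000 km
print(pile_km)                          # exceeds the ~384,000 km to the moon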
The five levels of information
Shannon's concept of information is adequate to deal with the storage and transmission of data, but it fails when trying to understand the qualitative nature of information.
Theorem 3: Since Shannon's definition of information relates exclusively to the statistical relationship of chains of symbols and completely ignores their semantic aspect, this concept of information is wholly unsuitable for the evaluation of chains of symbols conveying a meaning.
In order to be able adequately to evaluate information and its processing in different systems, both animate and inanimate, we need to widen the concept of information considerably beyond the bounds of Shannon's theory. Figure 4 illustrates how information can be represented, as well as the five levels that are necessary for understanding its qualitative nature.
Level 1: statistics
Shannon's information theory is well suited to an understanding of the statistical aspect of information. This theory makes it
possible to give a quantitative description of those characteristics of languages that are based intrinsically on frequencies.
However, whether a chain of symbols has a meaning is not taken into consideration. Also, the question of grammatical
correctness is completely excluded at this level.
Level 2: syntax
In chains of symbols conveying information, the stringing-together of symbols to form words as well as the joining of words
to form sentences are subject to specific rules, which, for each language, are based on consciously established
conventions. At the syntactical level, we require a supply of symbols (code system) in order to represent the information.
Most written languages employ letters; however, an extremely wide range of conventions is in use for various purposes:
Morse code, hieroglyphics, semaphore, musical notes, computer codes, genetic codes, figures in the dance of foraging
bees, odour symbols in the pheromone languages of insects, and hand movements in sign language.
The field of syntax
involves the following questions:
Which symbol combinations are defined characters of the language (code)?
Which symbol combinations are defined words of the particular language (lexicon, spelling)?
How should the words be positioned with respect to one another (sentence formation, word order, style)? How should they
be joined together? And how can they be altered within the structure of a sentence (grammar)?

Figure 4. The five mandatory levels of information (middle) begin with statistics (at the lowest
level). At the highest level is apobetics (purpose).
The syntax of a language, therefore, comprises all the rules by which individual elements of language can or must be
combined. The syntax of natural languages is of a much more complex structure than that of formalised or artificial
languages. Syntactical rules in formalised languages must be complete and unambiguous, since, for example, a compiler
has no way of referring back to the programmer's semantic considerations. At the syntactical level of information, we can
formulate several theorems to express empirical principles:
Theorem 4: A code is an absolutely necessary condition for the representation of information.
Theorem 5: The assignment of the symbol set is based on convention and constitutes a mental process.
Theorem 6: Once the code has been freely defined by convention, this definition must be strictly observed thereafter.
Theorem 7: The code used must be known both to the transmitter and receiver if the information is to be understood.
Theorem 8: Only those structures that are based on a code can represent information (because of Theorem 4). This is a
necessary, but still inadequate, condition for the existence of information.
These theorems already allow fundamental statements to be made at the level of the code. If, for example, a basic code is
found in any system, it can be concluded that the system originates from a mental concept.
Level 3: semantics
Chains of symbols and syntactical rules form the necessary precondition for the representation of information. The decisive
aspect of a transmitted item of information, however, is not the selected code, the size, number or form of the letters, or the
method of transmission (script, optical, acoustic, electrical, tactile or olfactory signals), but the message it contains, what it
says and what it means (semantics). This central aspect of information plays no part in its storage and transmission. The
price of a telegram depends not on the importance of its contents but merely on the number of words. What is of prime
interest to both sender and recipient, however, is the meaning; indeed, it is the meaning that turns a chain of symbols into
an item of information. It is in the nature of every item of information that it is emitted by someone and directed at someone.
Wherever information occurs, there is always a transmitter and a receiver. Since no information can exist without semantics,
we can state:
Theorem 9: Only that which contains semantics is information.
According to a much-quoted statement by Norbert Wiener, the founder of cybernetics and information theory, information cannot be of a physical nature: 'Information is information, neither matter nor energy. No materialism that fails to take account of this can survive the present day.'
The Dortmund information scientist Werner Strombach emphasises the non-material nature of information when he defines it as 'an appearance of order at the level of reflective consciousness'.
Semantic information, therefore, defies a mechanistic approach. Accordingly, a computer is only a syntactical device
(Zemanek) which knows no semantic categories. Consequently, we must distinguish between data and knowledge, between
algorithmically conditioned branches in a programme and deliberate decisions, between comparative extraction and

association, between determination of values and understanding of meanings, between formal processes in a decision tree
and individual selection, between consequences of operations in a computer and creative thought processes, between
accumulation of data and learning processes. A computer can do the former; this is where its strengths, its application areas,
but also its limits lie. Meanings always represent mental concepts; we can therefore further state:
Theorem 10: Each item of information needs, if it is traced back to the beginning of the transmission chain, a mental source (transmitter).
Theorems 9 and 10 basically link information to a transmitter (intelligent information source). Whether the information is understood by a receiver or not does nothing to change its existence. Even before they were deciphered, the inscriptions on Egyptian obelisks were clearly regarded as information, since they obviously did not originate from a random process. Before the discovery of the Rosetta Stone (1799), the semantics of these hieroglyphics was beyond the comprehension of any contemporary person (receiver); nevertheless, these symbols still represented information.
All suitable
formant devices (linguistic configurations) that are capable of expressing meanings (mental substrates, thoughts, contents of
consciousness) are termed languages. It is only by means of language that information may be transmitted and stored on
physical carriers. The information itself is entirely invariant, both with regard to change of transmission system (acoustic,
optical, electrical) and also of storage system (brain, book, computer system, magnetic tape). The reason for this invariance
lies in its non-material nature. We distinguish between different kinds of languages:
Natural languages: at present, there are approximately 5,100 living languages on earth.
Artificial or sign languages: Esperanto, sign language, semaphore, traffic signs.
Artificial (formal) languages: logical and mathematical calculations, chemical symbols, shorthand, algorithmic languages, programming languages.
Specialist languages in engineering: building plans, design plans, block diagrams, bonding diagrams, circuit diagrams in electrical engineering, hydraulics, pneumatics.
Special languages in the living world: genetic language, the foraging-bee dance, pheromone languages, hormone language, the signal system in a spider's web, dolphin language, instincts (for example, flight of birds, migration of salmon).
Common to all languages is that these formant devices use defined systems of symbols whose individual elements operate with fixed, uniquely agreed rules and semantic correspondences. Every language has units (for example, morphemes, lexemes, phrases and whole sentences in natural languages) that act as semantic elements (formatives). Meanings are correspondences between the formatives, within a language, and imply a unique semantic assignment between transmitter and receiver.
Any communication process between
transmitter and receiver consists of the formulation and comprehension of the sememes (sema = sign) in one and the same
language. In the formulation process, the thoughts of the transmitter generate the transmissible information by means of a
formant device (language). In the comprehension process, the combination of symbols is analysed and imaged as
corresponding thoughts in the receiver.
Level 4: pragmatics
Up to the level of semantics, the question of the objective pursued by the transmitter in sending information is not relevant.
Every transfer of information is, however, performed with the intention of producing a particular result in the receiver. To
achieve the intended result, the transmitter considers how the receiver can be made to satisfy his planned objective. This
intentional aspect is expressed by the term pragmatics. In language, sentences are not simply strung together; rather, they
represent a formulation of requests, complaints, questions, inquiries, instructions, exhortations, threats and commands,
which are intended to trigger a specific action in the receiver. Strombach defines information as a structure that produces a
change in a receiving system. By this, he stresses the important aspect of action. In order to cover the wide variety of types of action, we may differentiate between:
Modes of action without any degree of freedom (rigid, indispensable, unambiguous, program-controlled), such as program runs in computers, machine translation of natural languages, mechanised manufacturing operations, the development of biological cells, the functions of organs;
Modes of action with a limited degree of freedom, such as the translation of natural languages by humans and instinctive actions (patterns of behaviour in the animal kingdom);
Modes of action with the maximum degree of freedom (flexible, creative, original; only in humans), for example, acquired behaviour (social deportment, activities involving manual skills), reasoned actions, intuitive actions and intelligent actions based on free will.
All these modes of action on the part of the receiver are invariably based on information that has been previously designed by the transmitter for the intended purpose.
Level 5: apobetics
The final and highest level of information is purpose. The concept of apobetics has been introduced for this reason by
linguistic analogy with the previous definitions. The result at the receiving end is based, at the transmitting end, on the
purpose, the objective, the plan, or the design. The apobetic aspect of information is the most important one, because it
inquires into the objective pursued by the transmitter. The following question can be asked with regard to all items of
information: Why is the transmitter transmitting this information at all? What result does he/she/it wish to achieve in the
receiver? The following examples are intended to deal somewhat more fully with this aspect:
Computer programmes are target-oriented in their design (for example, the solving of a system of equations, the inversion of matrices, system tools).
With its song, the male bird would like to gain the attention of the female or to lay claim to a particular territory.
With the advertising slogan for a detergent, the manufacturer would like to persuade the receiver to decide in favour of its product.
Humans are endowed with the gift of natural language; they can thus enter into communication and can formulate
objectives.
We can now formulate some further theorems:
Theorem 11: The apobetic aspect of information is the most important, because it embraces the objective of the transmitter.
The entire effort involved in the four lower levels is necessary only as a means to an end in order to achieve this objective.
Theorem 12: The five aspects of information apply both at the transmitter and receiver ends. They always involve an
interaction between transmitter and receiver (see Figure 4).
Theorem 13: The individual aspects of information are linked to one another in such a manner that the lower levels are
always a prerequisite for the realisation of higher levels.
Theorem 14: The apobetic aspect may sometimes largely coincide with the pragmatic aspect. It is, however, possible in
principle to separate the two.
Having completed these considerations, we are in a position to formulate conditions that allow us to distinguish between
information and non-information. Two necessary conditions (NCs; to be satisfied simultaneously) must be met if information
is to exist:
NC1: A code system must exist.
NC2: The chain of symbols must contain semantics.
Sufficient conditions (SCs) for the existence of information are:
SC1: It must be possible to discern the ulterior intention at the semantic, pragmatic and apobetic levels (example: Karl v.
Frisch analysed the dance of foraging bees and, in conformance with our model, ascertained the levels of semantics,
pragmatics and apobetics. In this case, information is unambiguously present).

SC2: A sequence of symbols does not represent information if it is based on randomness. According to G.J. Chaitin, an
American informatics expert, randomness cannot, in principle, be proven; in this case, therefore, communication about the
originating cause is necessary.
The above information theorems not only play a role in technological applications, they also
embrace all otherwise occurring information (for example, computer technology, linguistics, living organisms).
Information in living organisms
Life confronts us in an exceptional variety of forms; for all its simplicity, even a monocellular organism is more complex and
purposeful in its design than any product of human invention. Although matter and energy are necessary fundamental
properties of life, they do not in themselves imply any basic differentiation between animate and inanimate systems. One of
the prime characteristics of all living organisms, however, is the information they contain for all operational processes
(performance of all life functions, genetic information for reproduction). Braitenberg, a German cyberneticist, has submitted evidence that information is an intrinsic part of the essential nature of life. The transmission of information plays a fundamental role in everything that lives. When insects transmit pollen from flower blossoms, (genetic) information is essentially transmitted; the matter involved in this process is insignificant. Although this in no way provides a complete description of life as yet, it touches upon an extremely crucial factor.
Without a doubt, the most complex information-processing system in existence is the human body. If we take all human information processes together, that is, conscious ones (language) as well as unconscious ones (information-controlled functions of the organs, hormone system), this involves the processing of 10²⁴ bits daily. This astronomically high figure is higher by a factor of 1,000,000 than the total human knowledge of 10¹⁸ bits stored in all the world's libraries.
The concept of information
On the basis of Shannon's information theory, which can now be regarded as being mathematically complete, we have extended the concept of information as far as the fifth level. The most important empirical principles relating to the concept of information have been defined in the form of theorems. Here is a brief summary of them:¹
No information can exist without a code.
No code can exist without a free and deliberate convention.
No information can exist without the five hierarchical levels: statistics, syntax, semantics, pragmatics and apobetics.
No information can exist in purely statistical processes.
No information can exist without a transmitter.
No information chain can exist without a mental origin.
No information can exist without an initial mental source; that is, information is, by its nature, a mental and not a material
quantity.
No information can exist without a will.
The marvellous message molecule
by Carl Wieland
When someone sends a message, something rather fascinating and mysterious gets passed along. Let's say Alphonse in
Alsace wants to send the message, 'Ned, the war is over. Al'. He dictates it to a friend; the message has begun as patterns
of air compression (spoken words). His friend puts it down as ink on paper and mails it to another, who puts it in a fax
machine. The machine transfers the message into a coded pattern of electrical impulses, which are sent down a phone line
and received at a remote Indian outpost where it is printed out in letters once again. Here the person who reads the fax
lights a campfire and sends the same message as a pattern of smoke signals. Old Ned in Nevada, miles away, looks up and
gets the exact message that was meant for him. Nothing physical has been transmitted; not a single atom or molecule of
any substance travelled from Alsace to Nevada, yet it is obvious that something travelled all the way. This elusive something
is called information. It is obviously not a material thing, since no matter has been transmitted. Yet it seems to need matter
on which to 'ride' during its journey. This is true whether the message is in Turkish, Tamil or Tagalog. The matter on which
information travels can change, without the information having to change. Air molecules being compressed in sound waves;
ink and paper; electrons travelling down phone wires; semaphore signals: whatever the carrier, all involve material media used to transmit information, but the medium is not the information.
This fascinating thing called information is the key to understanding what makes life different from dead matter. It is the Achilles' heel of all materialist explanations of life, which say that life is nothing more than matter obeying the laws of physics and chemistry. Life is more than just physics and chemistry; living things carry vast amounts of information.
Some might argue that a sheet of paper carrying a written message is nothing more than ink and paper obeying the laws of physics and chemistry. But ink and paper unaided do not write messages; minds do. The alphabetical letters in a Scrabble kit do not constitute information until someone puts them into a special sequence; mind is needed to get information. You can program a machine to arrange Scrabble letters into a message, but a mind had to write the program for the machine.
How is the information for life carried? How is the
message which spells out the recipe that makes a frog, rather than a frangipani tree, sent from one generation to the next?
How is it stored? What matter does it 'ride' on? The answer is the marvellous 'message molecule' called DNA. This molecule
is like a long rope or string of beads, which is tightly coiled up inside the centre of every cell of your body. This is the
molecule that carries the programs of life, the information which is transmitted from each generation to the next. Some people think that DNA is alive; this is wrong. DNA is a dead molecule. It can't copy itself; you need the machinery of a living cell to make copies of a DNA molecule. It may seem as if DNA is the information in your body. Not so; the DNA is simply the carrier of the message, the 'medium' on which the message is written. In the same way, Scrabble letters are not information until the message is 'imposed' on them from the 'outside'. Think of DNA as a chain of such alphabet letters linked together, with a large variety of different ways in which this can happen. Unless they are joined in the right sequence, no usable message will result, even though it is still DNA.
Now to read the message, you need a pre-existing language code
or convention, as well as machinery to translate it. All of that machinery exists in the cell. Like man-made machinery, it does
not arise by itself from the properties of the raw materials. If you just throw the basic raw ingredients for a living cell together,
without information nothing will happen. Machines and programs do not come from the laws of physics and chemistry by
themselves. Why? Because they reflect information, and information has never been observed to come about by unaided,
raw matter plus time and chance. Information is the very opposite of chance: if you want to arrange letters into a sequence to spell a message, a particular order has to be imposed on the matter.
When living things reproduce, they transmit information from one generation to the next. This information, travelling on the DNA from mother and from father, is the 'instruction manual' which enables the machinery in a fertilized egg cell to construct, from raw materials, the new living organism: a fantastic feat. This information is in a new combination, so that children are not exactly like their parents, although the information itself, which is expressed in the make-up of those children, was there all along in both parents. That is, the deck was reshuffled, but no new cards were added.
Just how much space does DNA need to store its information? The
technological achievements of humankind in storing information seem sensational. Imagine how much information is stored
on a videotape of a movie, for example; you can hold it all in one hand. Yet compared to this, the feat of information miniaturization performed by DNA is nothing short of mind-blowing. For a given amount of information, the room needed to store it on DNA is about a trillionth of that for information on videotape, i.e. it is a million million times more efficient at storing information.¹
How much information is contained in the DNA code which describes you? Estimates vary widely. Using simple analogies, based upon the storage space in DNA, they range from 500 large library books of small-type information to more than 100 complete 30-volume encyclopaedia sets. When you think about it, even that is probably not enough to specify the intricate construction of even the human brain, with its trillions of precise connections. There are probably higher-level information storage and multiplication systems in the body that we have not even dreamed of yet; there are many more marvellous mysteries waiting to be discovered about the designer's handiwork.
Not only is the way in which DNA is encoded highly efficient; even more space is saved by the way in which it is tightly coiled up. According to genetics expert Professor Jérôme Lejeune, all the information required to specify the exact make-up of every unique human being on Earth could be stored in a volume of DNA no bigger than a couple of aspirin tablets!² If you took the DNA from one single cell in your body (an amount of matter so small you would need a microscope to see it) and unravelled it, it would stretch to two metres!
This becomes truly sensational when you consider that there are 75 to 100 trillion cells in the body. Taking the lower figure, it means that if we stretched out all of the DNA in one human body³ and joined it end to end, it would stretch to a distance of 150 billion kilometres (around 94 billion miles). How long is this? It would stretch right around the Earth's equator three-and-a-half million times! It is a thousand times as far as from the Earth to the sun. If you were to shine a torch along that distance, it would take the light, travelling at 300,000 kilometres (186,000 miles) every second, five-and-a-half days to get there.
But the really sensational thing is the way in which the information carried on DNA in all living things points directly
to intelligent, supernatural creation, by straightforward, scientific logic, as follows:
Observations
1. The coded information used in the construction of living things is transferred from pre-existing messages (programs),
which are themselves transmitted from pre-existing messages.
2. During this transfer, the fate of the information follows the dictates of message/information theory and common sense.
That is, it either stays the same, or decreases (mutational loss, genetic drift, species extinction) but seldom, probably never,
is it seen to increase in any informationally meaningful sense.
Deduction from observation No. 2
3. Were we to look back in time along the line of any living population, e.g. humans (the information in their genetic
programs) we would see an overall pattern of gradual increase the further back we go.
Axiom
4. No population can be infinitely old, nor contain infinite information. Therefore:
Deduction from points 3 and 4
5. There had to be a point in time in which the first program arose without a pre-existing program, i.e. the first of that type
had no parents.
Further observation
6. Information and messages only ever originate in mind or in pre-existing messages. Never, ever are they seen to arise
from spontaneous, unguided natural law and natural processes.
Conclusion
The programs in those first representatives of each type of organism must have originated not in natural law, but in mind.
This is totally consistent with the model, which teaches us that the programs for each of the original 'kind' populations, with
all of their vast variation potential, arose from the mind of a designer at a point in time, by creation. These messages, written
in intricate coded language, could not have written themselves, as far as real, observational science can tell us.
Once the first messages were written, they also contained instructions to make machinery with which to transmit those
messages 'on down the line'. DNA, this marvellous 'message molecule', carries that special, non-material something called
information, down through many generations, from its origin in the mind of a designer.
More or less information? / Has a recent experiment proved creation?
Published: 17 February 2007 (GMT+10)
Photo by José A. Warletta, from www.sxc.hu
One of the most important creationist arguments concerns information. Understanding this issue deflects many anti-creationist equivocations that call any change 'evolution'. That is, no creationist denies that things change, and even speciate, but nearly all the cited changes do not involve the increase in information content required for microbes-to-man evolution, but go in the wrong direction. See one illustration: How information is lost when creatures adapt to their environment.
This week's feedback comes from Casey P, who picked up from a website a vexatious question about how to define information. The evolutionist who first posed that question erred by presupposing a simplistic definition, while Andrew Lamb's reply shows that there are more levels of information needed to understand its role in biology.
The second feedback addresses questions on a recent article about a research scientist whose work supposedly proves creation. However, Jonathan Sarfati had replied to a similar query in 1999 about the same phenomenon, and it is updated below.
How do we define information in biology?
Photo by Dacho Dachovic, from www.sxc.hu

I'm curious to know; perhaps you could fill me in on this. Which one has the most information, and what exactly are these two sequences?
Sequence 1: cag tgt ctt ggg ttc tcg cct gac tac gag acg cgt ttg tct tta cag gtc ctc ggc cag cac ctt aga caa gca ccc ggg acg cac ctt tca gtg ggc act cat aat ggc gga gta cca agg agg cac ggt cca ttg ttt tcg ggc cgg cat tgc tca tct ctt gag att tcc ata ctt
Sequence 2: tgg agt tct aag aca gta caa ctc tgc gac cgt gct ggg gta gcc act tct ggc cta atc tac gtt aca gaa aat ttg agg ttg cgc ggt gtc ctc gtt agg cac aca cgg gtg gaa tgg ggg tct ctt acc aaa ggg ctg ccg tat cag gta cga cgt agg tat tgc cgt gat aga ctg
Thanks for your help here. God bless.
Casey P

Dear Mr P
Thank you for your email of 17 January, submitted via our website.
In response to creationist arguments about genetic information, some evolutionists disingenuously object that since there is no one measure of information content applicable to all situations, therefore genetic information doesn't exist! But even hardened atheists like the eugenicist Richard Dawkins recognize that DNA contains information. In fact there is a burgeoning new field of science called bio-informatics, which is all about genetic information.
With respect to the two sequences you presented, one would need to know their functions before it would be possible to consider making a comparison about which sequence carried more information. If their functions (assuming they were not just gobbledygook) were dissimilar, then it would be fairly meaningless to attempt a comparison of information content. For example if one was a genetic sequence coding for an enzyme, and the other a genetic sequence coding for a structural protein, then to ask which has the most information would be as meaningless as asking, say, which has more information: 60 grams' worth of apple or 60 grams' worth of orange.
If the meaning/function is similar, then an information-content comparison may be possible. Consider the following two
sequences:
She has a yellow vehicle.
She has a yellow car.
Both are English sentences. The first is 25 characters long, and the second is 21 characters long. The first sentence has
more characters, but the second sentence has more information, because it is more specific (cars being just one of scores
of different types of vehicle), and specificity is one measure of information content. Specificity only relates to the purpose of the information, not to the way it is expressed or the size of the message when it is expressed in some particular way/language.
There are five levels of information content (after Information, Science and Biology by Dr Werner Gitt,
information scientist):
statistics (symbols and their frequencies)
syntax (patterns of arrangement of symbols)
semantics (meaning)
pragmatics (function/result/outcome)
apobetics (purpose/plan/design)
Specificity relates to the pragmatics or apobetics level.
Gitt's Theorem 9 states that 'Only that which contains semantics is information.' This is a crucial point. Many evolutionists err by restricting information measurement to the statistical level, or to Shannon information. So-called Shannon information is not a measure of information per se, but merely a measure of the minimum number of characters/units needed to represent a sequence, regardless of whether the sequence is meaningful or not. Gobbledygook can have more Shannon information than a sentence in English.
So, if the two sequences you presented were composed randomly, then it is highly unlikely that either contains any information. However, for argument's sake, I will assume that they may be meaningful, and compare them.
The two sequences both contain the same amount of statistical information, 240 characters' worth, when represented in text.
Both sequences appear the same at the syntactical level, i.e. both consist of 60 spaced triplets composed of the symbols c, a, t, and g.
At the semantic level, I recognize that these letter triplets are the same as ones used to represent
triplets of DNA bases that code for particular amino acids. Since all 64 possible triplets have a meaning in the DNA code,
and since neither sequence contains any of the three stop codes (taa, tga, tag), it follows that both sequences could be
regarded as having the same amount of information at the semantic level, since, if processed by the appropriate genetic
machinery, both sequences could probably produce a segment of protein 60 amino acids in length.
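The statistical and syntactic checks described above are easy to do mechanically. Below is a small illustrative Python sketch; the function name is my own, and only the opening triplets of Sequence 1 are shown (paste either full sequence to reproduce the 60-triplet count and the absence of stop codons).

def analyse(seq):
    # Statistical/syntactic level only: count spaced triplets of c, a, t, g
    # and look for the three stop codons; says nothing about meaning or function.
    triplets = seq.split()
    assert all(len(t) == 3 and set(t) <= set("acgt") for t in triplets)
    stops = {"taa", "tga", "tag"}
    return {
        "triplets": len(triplets),                      # 60 for each full sequence
        "characters_as_text": len(seq),                 # ~240 with spaces for each
        "stop_codons_found": [t for t in triplets if t in stops],  # none in either
    }

print(analyse("cag tgt ctt ggg ttc tcg cct gac tac"))   # opening of Sequence 1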
However, when it comes to the pragmatics level, as far as I can determine (being unable to locate these sequences in a gene library such as NCBI's Entrez Nucleotides), both sequences apparently carry the same amount of meaningful information: zilch.
At the apobetics level, I have no idea what outcomes would result from processing of the two sequences. Conceivably, at one extreme, they could result in production of an enzyme that kills the cell, or even a toxin that kills the organism to which the cell belongs. At the other extreme, they could (for all I know) prevent aging, thus extending the lifespan; I have no idea. Indeed, one of the most intractable problems in molecular biology is computing the final protein configuration from an amino acid sequence (see a current project).
Note also that each creature has its own unique set of cellular machinery, so the outcomes that result from the reading of these genetic sequences could be very different depending on which organism's genetic machinery reads them. For example, the genetic sequence found in the HIV virus is harmless when read by the cellular machinery in apes' cells, but ultimately lethal when read by human cellular machinery: very different outcomes at the apobetics level from the same genetic sequence. Also, there are some organisms with slightly different genetic codes, so the same semantic information would be read differently, resulting in different pragmatic and apobetic information.
The final protein configuration that results from a particular DNA sequence is mainly determined by cellular machines of a type called chaperonins, which influence protein folding. Without chaperonins, an important protein might mis-fold into a deadly prion. This is the likely cause of the fatal brain conditions Creutzfeldt-Jakob disease and bovine spongiform encephalopathy (BSE), aka mad cow disease (Discoveries that undermine the 'one gene, one protein' idea).
I hope this helps. We have many articles on our website on the issue of information in living organisms.
They can be found listed under the topic Information Theory in the Frequently Asked Questions index on this website.
Yours sincerely
Andrew Lamb
Information Officer
How is information content measured?
New plant coloursis this new information?
Is antibiotic resistance really due to increase in information?
The Problem of Information for the Theory of Evolution: Has Dawkins really solved it?
Lifes irreducible structurePart 1: autopoiesis
by Alex Williams
The commonly cited case for intelligent design appeals to: (a) the irreducible complexity of (b) some aspects of life. But
complex arguments invite complex refutations (valid or otherwise), and the claim that only some aspects of life are irreducibly complex implies that others are not, and so the average person remains unconvinced. Here I use another principle, autopoiesis (self-making), to show that all aspects of life lie beyond the reach of naturalistic explanations. Autopoiesis
provides a compelling case for intelligent design in three stages: (i) autopoiesis is universal in all living things, which makes
it a pre-requisite for life, not an end product of natural selection; (ii) the inversely-causal, information-driven, structured
hierarchy of autopoiesis is not reducible to the laws of physics and chemistry; and (iii) there is an unbridgeable abyss
between the dirty, mass-action chemistry of the natural environment and the perfectly-pure, single-molecule precision of
biochemistry. Naturalistic objections to these propositions are considered in Part II of this article.
Snowflake photos by Kenneth G. Libbrecht.
Figure 1. Reducible structure. Snowflakes
(left) occur in hexagonal shapes because
water crystallizes into ice in a hexagonal
pattern (right). Snowflake structure can
therefore be reduced to (explained in terms
of) ice crystal structure. Crystal formation is
spontaneous in a cooling environment. The
energetic vapour molecules are locked into
solid bonds with the release of heat to the
environment, thus increasing overall entropy
in accord with the second law of
thermodynamics.
The commonly cited case for intelligent design (ID) goes as follows: some biological systems are so complex that they can only function when all of their components are present, so that the system could not have evolved from a simpler assemblage that did not contain the full machinery.¹ This definition is what biochemist Michael Behe called 'irreducible complexity' in his popular book Darwin's Black Box,² where he pointed to examples such as the blood-clotting cascade and the proton-driven molecular motor in the bacterial flagellum. However, because Behe appealed to complexity, many equally complex rebuttals have been put forward,³ and because he claimed that only some of the aspects of life were irreducibly complex, he thereby implied that the majority of living structure was open to naturalistic explanation. As a result of these two factors, the concept of intelligent design remains controversial and unproven in popular understanding.
In this article, I shall argue that all aspects of life point to intelligent design, based on what European polymath Professor Michael Polanyi FRS, in his 1968 article in Science, called 'Life's Irreducible Structure'.⁴ Polanyi argued that living organisms have a machine-like structure that cannot be explained by (or reduced to) the physics and chemistry of the molecules of which they consist. This concept is simpler, and broader in its application, than Behe's concept of irreducible complexity, and it applies to all of life, not just to some of it.
The nature and origin of biological design
Biologists universally admire the wonder of the beautiful designs evident in living organisms, and they often recoil in
revulsion at the horrible designs exhibited by parasites and predators in ensuring the survival of themselves and their
species. But to a Darwinist, these are only apparent designs: the end result of millions of years of tinkering by mutation
and fine tuning by natural selection. They do not point to a cosmic Designer, only to a long and blind process of survival of
the fittest.⁵ For a Darwinist, the same must also apply to the origin of life: it must be an emergent property of matter. An
emergent property of a system is some special arrangement that is not usually observed, but may arise through natural
causes under the right environmental conditions. For example, the vortex of a tornado is an emergent property of
atmospheric movements and temperature gradients. Accordingly, evolutionists seek endlessly for those special
environmental conditions that may have launched the first round of carbon-based macromolecules⁶ on their long journey
towards life. Should they ever find those unique environmental conditions, they would then be able to explain life in terms of
physics and chemistry. That is, life could then be reduced to the known laws of physics, chemistry and environmental

conditions.
However, Polanyi argued that the form and function of the various parts of living organisms cannot be reduced to (or explained in terms of) the laws of physics and chemistry, and so life exhibits irreducible structure. He did not speculate on the origin of life, arguing only that scientists should be willing to recognize the impossible when they see it: 'The recognition of certain basic impossibilities has laid the foundations of some major principles of physics and chemistry; similarly, recognition of the impossibility of understanding living things in terms of physics and chemistry, far from setting limits to our understanding of life, will guide it in the right direction.'⁷
Reducible and irreducible structures
To understand Polanyi's concept of irreducible structure, we must first look at reducible structure. The snowflakes in figure 1 illustrate reducible structure.
Meteorologists have recognized about eighty different basic snowflake shapes, and subtle variations on these themes add to the mix to produce a virtually infinite variety of actual shapes. Yet they all arise from just one kind of molecule: water. How is this possible?
Figure 2. Irreducible structure. The silver
coins (left) have properties of flatness,
roundness and impressions on faces and
rims, that cannot be explained in terms of the
crystalline state of silver (close packed cubes)
or its natural occurrence as native silver
(right).
When water freezes, its crystals take the form of a hexagonal prism. Crystals then
grow by joining prism to prism. The elaborate
branching patterns of snowflakes arise from
the statistical fact that a molecule of water
vapour in the air is most likely to join up to its
nearest surface. Any protruding bump will thus tend to grow more quickly than the surrounding crystal area because it will be
the nearest surface to the most vapour molecules.⁸ There are six bumps (corners) on a hexagonal prism, so growth will occur most rapidly from these, producing the observed six-armed pattern.
Snowflakes have a reducible structure because
you can produce them with a little bit of vapour or with a lot. They can be large or small. Any one water molecule is as good
as any other water molecule in forming them. Nothing goes wrong if you add or subtract one or more water molecules from
them. You can build them up one step at a time, using any and every available water molecule. The patterns can thus all be
explained by (reduced to) the physics and chemistry of water and the atmospheric conditions.
Figure 3. Common irreducibly structured
machine components: lever (A), cogwheel (B)
and coiled spring (C). All are made of metal,
but their detailed structure and function cannot
be reduced to (explained by) the properties of
the metal they are made of.
To now understand irreducible structure, consider a silver coin. Silver is found naturally in copper, lead, zinc, nickel and gold ores, and rarely in an almost pure form called native silver.
Figure 2 shows the back and front of two
vintage silver coins, together with a nugget of
the rare native form of silver. The crystal
structure of solid silver consists of closely
packed cubes. The main body of the native
silver nugget has the familiar lustre of the pure
metal, and it has taken on a shape that
reflects the available space when it was
precipitated from groundwater solution. The
black encrustations are very fine crystals of
silver that continued to grow when the rate of
deposition diminished after the main load of silver had been deposited out of solution.
Unlike the case of the beautifully
structured snowflakes, there is no natural process here that could turn the closely packed cubes of solid silver into round,
flat discs with images of men, animals and writing on them. Adding more or less silver cannot produce the roundness,
flatness and image-bearing properties of the coins, and looking for special environmental conditions would be futile because
we recognize that the patterns are man-made. The coin structure is therefore irreducible to the physics and chemistry of
silver, and was clearly imposed upon the silver by some intelligent external agent (in this case, humans).
Whatever the
explanation, however, the irreducibility of the coin structure to the properties of its component silver constitutes what I shall
call a Polanyi impossibility. That is, Polanyi identified this kind of irreducibility as a naturalistic impossibility, and argued that
it should be recognized as such by the scientific community, so I am simply attaching his name to the principle.
Polanyi
pointed to the machine-like structures that exist in living organisms. Figure 3 gives three examples of common machine
components: a lever, a cogwheel and a coiled spring. Just as the structure and function of these common machine
components cannot be explained in terms of the metal they are made of, so the structure and function of the parallel
components in life cannot be reduced to the properties of the carbon, hydrogen, oxygen, nitrogen, phosphorus, sulphur and
trace elements that they are made of. There are endless examples of such irreducible structures in living systems, but they
all work under a unifying principle called autopoiesis.
Autopoiesis defined
Autopoiesis literally means self-making (from the Greek auto for self, and the verb poieō meaning 'I make' or 'I do') and it refers to the unique ability of a living organism to continually repair and maintain itself, ultimately to the point of reproducing itself, using energy and raw materials from its environment. In contrast, an allopoietic system (from the Greek allo for other), such as a car factory, uses energy and raw materials to produce an organized structure (a car) which is something other than itself (a factory).⁹
Autopoiesis is a unique and amazing property of life; there is nothing else like it in
the known universe. It is made up of a hierarchy of irreducibly structured levels. These include: (i) components with perfectly
pure composition, (ii) components with highly specific structure, (iii) components that are functionally integrated, (iv)
comprehensively regulated information-driven processes, and (v) inversely-causal meta-informational strategies for
individual and species survival (these terms will be explained shortly). Each level is built upon, but cannot be explained in
terms of, the level below it. And between the base level (perfectly pure composition) and the natural environment, there is an

unbridgeable abyss. The enormously complex details are still beyond our current knowledge and understanding, but I will
illustrate the main points using an analogy with a vacuum cleaner.
A vacuum cleaner analogy
My mother was excited when my father bought our first electric vacuum cleaner in 1953. It consisted of a motor and
housing, exhaust fan, dust bag, and a flexible hose with various end pieces. Our current machine uses a cyclone filter and
follows me around on two wheels rather than on sliders as did my mother's original one. My next version might be the small
robotic machine that runs around the room all by itself until its battery runs out. If I could afford it, perhaps I might buy the
more expensive version that automatically senses battery run-down and returns to its induction housing for battery recharge.
Notice the hierarchy of control systems here. The original machine required an operator and some physical effort to pull the
machine in the required direction. The transition to two wheels allows the machine to trail behind the operator with little
effort, and the cyclone filter eliminates the messy dust bag. The next transition to on-board robotic control requires no effort
at all by the operator, except to initiate the action to begin with and to take the machine back to the power source for
recharge when it has run down. And the next transition to automatic sensing of power run-down and return-to-base control
mechanism requires no effort at all by the operator once the initial program is set up to tell the machine when to do its work.
If we now continue this analogy to reach the living condition of autopoiesis, the next step would be to install an on-board
power generation system that could use various organic, chemical or light sources from the environment as raw material.
Next, install a sensory and information processing system that could determine the state of both the external and internal
environments (the dirtiness of the floor and the condition of the vacuum cleaner) and make decisions about where to expend
effort and how to avoid hazards, but within the operating range of the available resources. Then, finally, the pièce de résistance: install a meta-information (information about information) facility with the ability to automatically maintain and repair the life system, including the almost miraculous ability to reproduce itself: autopoiesis.
Notice that each level of
structure within the autopoietic hierarchy depends upon the level below it, but it cannot be explained in terms of that lower
level. For example, the transition from out-sourced to on-board power generation depends upon there being an electric
motor to run. An electric vacuum cleaner could sit in the cupboard forever without being able to rid itself of its dependence
upon an outside source of power; it must be imposed from the level above, for it cannot come from the level below.
Likewise, autopoiesis is useless if there is no vacuum cleaner to repair, maintain and reproduce. A vacuum cleaner without
autopoietic capability could sit in the cupboard forever without ever attaining to the autopoietic stage; it must be imposed from the level above, as it cannot come from the level below.
The autopoietic hierarchy is therefore structured in such a way
that any kind of naturalistic transition from one level to a higher level would constitute a Polanyi impossibility. That is, the
structure at level i is dependent upon the structure at level i-1 but cannot be explained by the structure at that level. So the
structure at level i must have been imposed from level i or above.
The naturalistic abyss
Most origin-of-life researchers agree (at least in the more revealing parts of their writings)¹⁰ that there is no naturalistic
experimental evidence directly demonstrating a pathway from non-life to life. They continue their research, however,
believing that it is just a matter of time before we discover that pathway. But by using the vacuum cleaner analogy, we can
give a solid demonstration that the problem is a Polanyi impossibility right at the foundation: life is separated from non-life
by an unbridgeable abyss.
Dirty, mass-action environmental chemistry
The simple structure of the early vacuum cleaner is not simple at all. It is made of high-purity materials (aluminium, plastic,
fabric, copper wire, steel plates etc) that are specifically structured for the job in hand and functionally integrated to achieve
the designed task of sucking up dirt from the floor. Surprisingly, the dirt that it sucks up contains largely the same materials
that the vacuum cleaner itself is made of: aluminium, iron and copper in the mineral grains of dirt, fabric fibres in the dust,
and organic compounds in the varied debris of everyday home life. However, it is the difference in form and function of these
otherwise similar materials that distinguishes the vacuum cleaner from the dirt on the floor. In the same way, it is the
amazing form and function of life in a cell that separates it from the non-life in its environment.
Naturalistic chemistry is invariably 'dirty' chemistry while life uses only perfectly-pure chemistry. I have chosen the term 'dirty chemistry' not in order
to denigrate origin-of-life research, but because it is the term used by Nobel Prize winner Professor Christian de Duve, a
leading atheist researcher in this field.11 Raw materials in the environment, such as air, water and soil, are invariably
mixtures of many different chemicals. In dirty chemistry experiments, contaminants are always present and cause annoying
side reactions that spoil the hoped-for outcomes. As a result, researchers often tend to fudge the outcome by using
artificially purified reagents. But even when given pure reagents to start with, naturalistic experiments typically produce what
a recent evolutionist reviewer variously called 'muck', 'goo' and 'gunk',¹² which is actually toxic sludge. Even our best
industrial chemical processes can only produce reagent purities in the order of 99.99%. To produce 100% purity in the
laboratory requires very highly specialized equipment that can sort out single molecules from one another.
Another crucial difference between environmental chemistry and life is that chemical reactions in a test tube follow the Law of Mass Action.¹³ Large numbers of molecules are involved, and the rate of a reaction, together with its final outcome, can be
predicted by assuming that each molecule behaves independently and each of the reactants has the same probability of
interacting. In contrast, cells metabolize their reactants with single-molecule precision, and they control the rate and
outcome of reactions, using enzymes and nano-scale-structured pathways, so that the result of a biochemical reaction can
be totally different to that predicted by the Law of Mass Action.
The autopoietic hierarchy
Perfectly-pure, single-molecule-specific bio-chemistry
The vacuum cleaner analogy breaks down before we get anywhere near life because the chemical composition of its
components is nowhere near pure enough for life. The materials suitable for use in a vacuum cleaner can tolerate several
percent of impurities and still produce adequate performance, but nothing less than 100% purity will work in the molecular
machinery of the cell.
One of the most famous examples is homochirality. Many carbon-based molecules have a property called chirality: they can exist in two forms that are mirror images of each other (like our left and right hands) called
enantiomers. Living organisms generally use only one of these enantiomers (e.g. left-handed amino acids and right-handed
sugars). In contrast, naturalistic experiments that produce amino acids and sugars always produce an approximately 50:50
mixture (called a racemic mixture) of the left- and right-handed forms. The horrors of the thalidomide drug disaster resulted
from this problem of chirality. The homochiral form of one kind had therapeutic benefits for pregnant women, but the other
form caused shocking fetal abnormalities.The property of life that allows it to create such perfectly pure chemical
components is its ability to manipulate single molecules one at a time. The assembly of proteins in ribosomes illustrates this
single-molecule precision. The recipe for the protein structure is coded onto the DNA molecule. This is transcribed onto a
messenger-RNA molecule which then takes it to a ribosome where a procession of transfer-RNA molecules each bring a
single molecule of the next required amino acid for the ribosome to add on to the growing chain. The protein is built up one
molecule at a time, and so the composition can be monitored and corrected if even a single error is made.
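To illustrate the template-directed, one-residue-at-a-time assembly just described, here is a toy Python sketch using a tiny subset of the standard genetic code; the example mRNA string is invented and the table covers only a few codons, so this is a cartoon of the process, not a model of a real ribosome.

```python
# A few entries from the standard genetic code (codon -> amino acid).
GENETIC_CODE = {
    "AUG": "Met", "UUU": "Phe", "GGU": "Gly",
    "AAA": "Lys", "UGG": "Trp", "UAA": "STOP",
}

def translate(mrna):
    """Read the message three letters at a time, adding one amino acid per codon."""
    protein = []
    for i in range(0, len(mrna) - 2, 3):
        residue = GENETIC_CODE[mrna[i:i + 3]]
        if residue == "STOP":
            break
        protein.append(residue)     # one molecule added per step
    return protein

print(translate("AUGUUUGGUAAAUAA"))   # ['Met', 'Phe', 'Gly', 'Lys']
```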

Specially structured molecules


Life contains such a vast new world of molecular amazement that no one has yet plumbed the depths of it. We cannot hope
to cover even a fraction of its wonders in a short article, so I will choose just one example. Proteins consist of long chains of
amino acids linked together. There are 20 amino acids coded for in DNA, and proteins commonly contain hundreds or even
thousands of amino acids. Cyclin B is an average-sized protein, with 433 amino acids. It belongs to the hedgehog group of signalling pathways which are essential for development in all metazoans. Now there are 20^433 (20 multiplied by itself 433 times) = 10^563 (10 multiplied by itself 563 times) possible proteins that could be made from an arbitrary arrangement of 20 different kinds of amino acids in a chain of 433 units. The human body, the most complex known organism, contains somewhere between 10^5 (= 100,000) and 10^6 (= 1,000,000) different proteins. So the probability (p) that an average-sized biologically useful protein could arise by a chance combination of 20 different amino acids is about p = 10^6/10^563 = 1/10^557. And this assumes that only L-amino acids are being used, i.e. perfect enantiomer purity.14 For comparison, the chance of winning the lottery is about 1/10^6 per trial, and the chance of finding a needle in a haystack is about 1/10^11 per trial. Even the whole universe only contains about 10^80 atoms, so there are not even enough atoms to ensure the chance assembly of even
a single average-sized biologically useful molecule. Out of all possible proteins, those we see in life are very highly
specializedthey can do things that are naturally not possible. For example, some enzymes can do in one second what
natural processes would take a billion years to do. 15 Just like the needle in the haystack. Out of all the infinite possible
arrangements of iron alloy (steel) particles, only those with a long narrow shape, pointed at one end and with an eye-loop at
the other end, will function as a needle. This structure does not arise from the properties of steel, but is imposed from
outside.
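The arithmetic behind these figures is easy to verify; the following Python sketch simply reproduces the powers of ten quoted above (the protein length of 433, the 20-letter amino acid alphabet and the 10^5-10^6 protein estimate are taken from the text, and logarithms do the rest):

```python
import math

length = 433                                # amino acids in an average-sized protein
log10_sequences = length * math.log10(20)   # log10 of 20^433
print(f"possible sequences ~ 10^{log10_sequences:.0f}")        # ~ 10^563

log10_useful = 6                            # upper estimate: ~10^6 proteins in the human body
log10_p = log10_useful - log10_sequences
print(f"chance of hitting a useful one ~ 10^{log10_p:.0f}")    # ~ 10^-557
```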
Water, water, everywhere
There is an amazing paradox at the heart of biology. Water is essential to life, 16 but also toxicit splits up polymers by a
process called hydrolysis, and that is why we use it to wash with. Hydrolysis is a constant hazard to origin-of-life
experiments, but it is never a problem in cells, even though cells are mostly water (typically 60-90%). In fact, special
enzymes called hydrolases are required in order to get hydrolysis to occur at all in a cell. 17 Why the difference? Water in a
test tube is free and active, but water in cells is highly structured, via a process called hydrogen bonding, and this water structure is comprehensively integrated with both the structure and function of all the cell's macromolecules:
The hydrogen-bonding properties of water are crucial to [its] versatility, as they allow water to execute an intricate three-dimensional ballet, exchanging partners while retaining complex order and enduring effects. Water can generate small active clusters and macroscopic assemblies, which can both transmit and receive information on different scales.18
Water should actually be first on the list of molecules that need to be specially configured for life to function. Both the vast
variety of specially structured macromolecules and their complementary hydrogen-bonded water structures are required at
the same time. No origin-of-life experiment has ever addressed this problem.
Functionally integrated molecular machines
Figure 4. ATP synthase, a proton-powered
molecular motor. Protons (+) from inside the
cell (below) move through the stator
mechanism embedded in the cell membrane
and turn the rotor (top part) which adds
inorganic phosphate (iP) to ADP to convert it
to the high-energy state ATP.It is not enough
to have specifically structured, ultra-pure
molecules, they must also be integrated
together into useful machinery. A can of
stewed fruit is full of chemically pure and
biologically useful molecules but it will never
produce a living organism19 because the
molecules have been disorganized in the
cooking process. Cells contain an enormous
array of useful molecular machinery. The
average machine in a yeast cell contains 5
component proteins,20 and the most complex
the spliceosome, that orchestrates the
reading of separated sections of genes
consists of about 300 proteins and several
nucleic acids.21One of the more spectacular
machines is the tiny proton-powered motor
that produces the universal energy molecule
ATP (adenosine tri-phosphate) illustrated in Figure 4. When the motor spins one way, it takes energy from digested food and
converts it into the high-energy ATP, and when the motor spins the other way, it breaks down the ATP in such a way that its
energy is available for use by other metabolic processes.22
Comprehensively regulated, information-driven metabolic functions
It is still not enough to have spectacular molecular machinerythe various machines must be linked up into metabolic
pathways and cycles that work towards an overall purpose. What purpose? This question is potentially far deeper than
science can take us, but science certainly can ascertain that the immediate practical purpose of the amazing array of life
structures is the survival of the individual and perpetuation of its species. 23 Although we are still unravelling the way cells
work, a good idea of the multiplicity of metabolic pathways and cycles can be found in the BioCyc collection. The majority of
organisms so far examined, from microbes to humans, have between 1,000 and 10,000 different metabolic
pathways.24 Nothing ever happens on its own in a cellsomething else always causes it, links with it or benefits or is
affected by it. And all of these links are multi-step processes.All of these links are also choreographed by informationa
phenomenon that never occurs in the natural environment. At the bottom of the information hierarchy is the storage
moleculeDNA. The double-helix of DNA is just right for genetic information storage, and this just right structure is
beautifully matched by the elegance and efficiency of the code in which the cells information is written there. 25 But it is not
enough even to have an elegant just right information storage systemit must also contain information. And not just
biologically relevant information, but brilliantly inventive strategies and tactics to guide living things through the extraordinary
challenges they face in their seemingly miraculous achievements of metabolism and reproduction. Yet even ingenious
strategies and tactics are not enough. Choreography requires an intricate and harmonious regulation of every aspect of life
to make sure that the right things happen at the right time, and in the right sequence, otherwise chaos and death soon follow.
Recent discoveries show that biochemical molecules are constantly moving, and many of their amazing achievements
are the result of choreographing all this constant and complex movement to accomplish things that static molecules could
never achieve. Yet there is no spacious dance floor on which to choreograph the intense and lightning-fast (up to a million
events per second for a single reaction26) activity of metabolism. A cell is more like a crowded dressing room than a dance
floor, and in a show with a cast of millions!
Inversely causal meta-information
The Law of Cause and Effect is one of the most fundamental in all of science. Every scientific experiment is based upon the
assumption that the end result of the experiment will be caused by something that happens during the experiment. If the
experimenter is clever enough, then he/she might be able to identify that cause and describe how it produced that particular
result or effect.Causality always happens in a very specific orderthe cause always comes before the effect.27 That is,
event A must always precede event B if A is to be considered as a possible cause of B. If we happened to observe
that A occurred after B, then this would rule out A as a possible cause of B.In living systems however, we see the universal
occurrence of inverse causality. That is, an event A is the cause of event B, but A exists or occurs after B. It is easier to
understand the biological situation if we refer to examples from human affairs. In economics, for example, it occurs when
behaviour now, such as an investment decision, is influenced by some future event, such as an anticipated profit or loss. In
psychology, a condition that exists now, such as anxiety or paranoia, may be caused by some anticipated future event, such
as harm to ones person. In the field of occupational health and safety, workplace and environmental hazards can exert
direct toxic effects upon workers (normal causality), but the anticipation or fear of potential future harm can also have an
independently toxic effect (inverse causality).Darwinian philosopher of science Michael Ruse recently noted that inverse
causality is a universal feature of life,28 and his example was that stegosaur plates begin forming in the embryo but only
have a function in the adultsupposedly for temperature control. However most biologists avoid admitting such things
because it suggests that life might have purpose (a future goal), and this is strictly forbidden to materialists.The most
important example of inverse causality in living organisms is, of course, autopoiesis. We still do not fully understand it, but
we do understand the most important aspects. Fundamentally, it is meta-informationit is information about information. It is
the information that you need to have in order to keep the information you want to have to stay alive, and to ensure the
survival of your descendants and the perpetuation of your species.This last statement is the crux of this whole paper, so to
illustrate its validity lets go back to the vacuum cleaner analogy. Lets imagine that one lineage of vacuum cleaners
managed to reach the robotic, energy-independent stage, but lacked autopoiesis, while a second makes it all the way to
autopoiesis. What is the difference between these vacuum cleaners? Both will function very well for a time. But as the
Second Law of Thermodynamics begins to take its toll, components will begin to wear out, vibrations will loosen
connections, dust will gather and short circuit the electronics, blockages in the suction passage will reduce cleaning
efficiency, wheel axles will go rusty and make movement difficult, and so on. The former will eventually die and leave no
descendants. The latter will repair itself, keep its components running smoothly and reproduce itself to ensure the
perpetuation of its species.But what happens if the environment changes and endangers the often-delicate metabolic cycles
that real organisms depend upon? Differential reproduction is the solution. Evolutionists from Darwin to Dawkins have taken
this amazing ability for granted, but it cannot be overlooked. There are elaborate systems in placefor example, the diploid
to haploid transition in meiosis, the often extraordinary embellishments and rituals of sexual encounters, the huge number of
permutations and combinations provided for in recombination mechanismsto provide offspring with variations from their
parents that might prove of survival value. To complement these potentially dangerous deviations from the tried-and-true
there are also firm conservation measures in place to protect the essential processes of life (e.g. the ability to read the DNA
code and to translate it into metabolic action). None of this should ever be taken for granted.In summary, autopoiesis is the
informationand associated abilitiesthat you need to have (repair, maintenance and differential reproduction) in order to
keep the information that you want to have (e.g. vacuum cleaner functionality) alive and in good condition to ensure both
your survival and that of your descendants. In a parallel way, my humanity is what I personally value, so my autopoietic
capability is the repair, maintenance and differential reproductive capacity that I have to maintain my humanity and to share
it with my descendants. The egg and sperm that produced me knew nothing of this, but the information was encoded there
and only reached fruition six decades later as I sit here writing thisthe inverse causality of autopoiesis.
Summary
There are three lines of reasoning pointing to the conclusion that autopoiesis provides a compelling case for the intelligent
design of life.
If life began in some stepwise manner from a non-autopoietic beginning, then autopoiesis will be the end product of some
long and blind process of accidents and natural selection. Such a result would mean that autopoiesis is not essential to life,
so some organisms should exist that never attained it, and some organisms should have lost it by natural selection because
they do not need it. However, autopoiesis is universal in all forms of life, so it must be essential. The argument from the
Second Law of Thermodynamics as applied to the vacuum cleaner analogy also points to the same conclusion. Both
arguments demonstrate that autopoiesis is required at the beginning for life to even exist and perpetuate itself, and could not
have turned up at the end of some long naturalistic process. This conclusion is consistent with the experimental finding that
origin-of-life projects which begin without autopoiesis as a pre-requisite have proved universally futile in achieving even the
first step towards life.
Each level of the autopoietic hierarchy is dependent upon the one below it, but is causally separated from it by a Polanyi
impossibility. Autopoiesis therefore cannot be reduced to any sequence of naturalistic causes.
There is an unbridgeable abyss below the autopoietic hierarchy, between the dirty, mass-action chemistry of the natural
environment and the perfect purity, the single-molecule precision, the structural specificity, and the inversely causal
integration, regulation, repair, maintenance and differential reproduction of life.
DNA INFORMATION
Information Theorypart 1: overview of key ideas
by Royal Truman
The origin of information in nature cannot be explained if matter and energy is all there is. But the many, and often
contradictory, meanings of information confound a clear analysis of why this is so. In this, the first of a four-part series, the
key views about information theory by leading thinkers in the creation/evolution controversy are presented. In part 2,
attention is drawn to various difficulties in the existing paradigms in use. Part 3 introduces the notion of replacing Information
by Coded Information System (CIS) to resolve many difficulties. Part 4 completes the theoretical backbone of CIS theory,

showing how various conceptual frameworks can be integrated into this comprehensive model. The intention is to focus the
discussion in the future on whether CISs can arise naturalistically.
Intelligent beings design tools to help solve problems. These
tools can be physical or intellectual, and can be used and
reused to solve classes of problems.1 But creating separate
tools for each kind of problem is usually inefficient. In nature
many problems involving growth, reproduction and
adjustment to changes are performed with information-based
tools. These share the remarkable property that an almost
endless range of intentions can be communicated via coded
messages using the same sending, transmission and
receiving equipment.All known life depends on information.
But what is information and can it arise naturally? Naturalists
deny the existence of anything beyond matter, energy, laws
of nature and chance. But then where do will, choice, and
information come from?In the creation science and intelligent
design literature we find inconsistent or imprecise
understandings about what is meant by information. We
sometimes read that nature cannot create information. But in
other places, that some, but not enough, information could be
produced to explain the large amount observed today in
nature.
Suppose a species of bacteria can produce five similar variants of a protein which don't work very well for some function, and another otherwise identical species produces only a single, highly tuned version. Which has more information? Consider a species of birds with white and grey members. A catastrophe occurs and the few survivors only produce white offspring from now on. Has information increased or decreased? What about enzymes: do they possess more information when able to act on several different substrates, or when specific to only one?
The influence of Shannon's Theory of Information
Most of the experts debating the origin of information rely on
the mathematical model of communication developed by the late Claude Shannon, with its quantitative merits.2-4 Shannon's fame began with publication of his master's thesis, which was called possibly the most important, and also the most famous, master's thesis of the century.5
Messages are strings of symbols, like 10011101, ACCTGGTCAA, and 'go away'. All messages are composed of symbols taken from a coding alphabet. The English alphabet uses 26 symbols, the DNA code four, and binary codes use two symbols. In Shannon's model, one bit of information communicates a decision between two equiprobable choices, and in general n bits decide between 2^n equiprobable choices. Each symbol in an alphabet of s alternatives can provide log2(s) bits of information.
Entropy, H, plays an important role in Shannon's work. The entropy of a Source can be calculated by observing the frequency with which each symbol i is generated in messages:

H = -Σ pi log2(pi), summed over all symbols i   (1)

where log is to the base 2 and pi is the probability of symbol i appearing in a message. For example, if both symbols of an alphabet [0,1] are equiprobable (p0 = p1 = 0.5), then eqn (1) gives H = 1 bit per symbol.
Maximum entropy results when the symbols are equiprobable, whereas zero entropy indicates that the same message is always produced. Maximum entropy indicates that we have no way of guessing which sequence of symbols will be produced. In English, letter frequencies differ, so entropy is not maximum. Even without understanding English, one can know that many messages will not be produced, such as sentences over a hundred letters long using only the letters z and q.6
Equations for other kinds of entropy, each with special applications, exist in Shannon's theory: joint entropy, conditional entropy (equivocation), and mutual information. Shannon devotes much attention to calculating the Channel Capacity. This is the rate at which the initial message can be transmitted error-free in the presence of disturbing noise, and requires knowledge of the probability that each symbol i sent will arrive correctly or be corrupted into another symbol, j. These error-correction measures require special codes to be devised, with additional data accompanying the original message.
There are many applications of Shannon's theories, especially in data storage and transmission.7 A more compact code could exist whenever the entropy of the messages is not maximum, and the theoretical limit to data compression obtained by recoding can be calculated.8 Specifically, if messages based on some alphabet are to be stored or transmitted and the frequency of each symbol is known, then the upper compression limit for a new code can be known.9
Hubert Yockey is a pioneer in applying Shannon's theory to biology.10-14 His work and the mathematical calculations have been discussed in this journal.15
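A minimal Python sketch of eqn (1) and of the compression bound just mentioned; the probability values are chosen only for illustration (the skewed four-symbol distribution anticipates the coding example discussed in part 2 of this series):

```python
import math

def shannon_entropy(probs):
    """Entropy H = -sum(p_i * log2(p_i)) in bits per symbol, as in eqn (1)."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Two equiprobable symbols: the maximum for a binary alphabet, 1 bit/symbol.
print(shannon_entropy([0.5, 0.5]))

# Four equiprobable symbols (a DNA-like alphabet): 2 bits/symbol.
print(shannon_entropy([0.25, 0.25, 0.25, 0.25]))

# A skewed four-symbol source: entropy drops to about 0.61 bits/symbol,
# so a recoding scheme could in principle use far fewer than 2 bits/symbol.
print(shannon_entropy([0.9, 0.05, 0.04, 0.01]))
```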
Once it was realized that the genetic code uses four nucleobases, abbreviated A, C, G, and T, in combinations of three to
code for amino acids, the relevance of Information Theory became quickly apparent. Yockey used the mathematical
formalism of Shannons work to evaluate the information of cytochrome c proteins, 15 selected due to the large number of
sequence examples available. Many proteins are several times larger, or show far less tolerance to variability, as is the case
of another example Yockey discusses:
The pea histone H3 and the chicken histone H3 differ at only three sites, showing almost no change in evolution since the common ancestor. Therefore histones have 122 invariant sites … the information content of an invariant site is 4.139 bits, so the information content of the histones is approximately 4.139 × 122, or 505 bits required just for the invariant sites to determine the histone molecule.16
Yockey seems to believe the information was front-loaded onto DNA about four billion years ago in some primitive organism. This viewpoint is not elaborated on by him and is
deduced primarily by his comments that Shannons Channel Capacity Theorem ensures transmission of the original
message correctly.It is unfortunate that the mysterious allusions 17 to the Channel Capacity Theorem were not explained. In
one part he wrote,But once life has appeared, Shannons Channel Capacity Theorem (Section 5.3) assures us that genetic
messages will not fade away and can indeed survive for 3.85 billion years without assistance from an Intelligent Designer. 18
This is nonsense. The Channel Capacity Theorem only claims that it is theoretically possible to devise a code with enough
redundancy and error-correction to transmit a message error-free. Increased redundancies (themselves subject to
corruption) are needed as the demand for accuracy increases, and perfect accuracy is achieved only at the limit of infinitely low effective transmission of the intended error-free message. Whether this is even conceivable using mechanical or
biological components is not addressed by the Channel Capacity Theorem. But the key point is that Yockey claims the
theorem assures that the evolutionary message will not fade away. He confuses a mathematical in principle notion with an
implemented fact. He fails to show what the necessary error-correcting coding measures would be and that they have been
actually implemented.In his latest edition, Yockey (or possibly an editor) was very hostile to the notion of an intelligent
designer. Tragically, his comments on topics like Behes irreducible complexity suggests he does not understand what the
term means. As one example, we read:mRNA acts like the reading head on a Turing machine that moves along the DNA
sequence to read off the genetic message to the proteasome. The fact the sequence has been read shows that it is not
irreducibly complex nor random. By the same token, Behes mouse trap is not irreducibly complex or random. 19The word
information is used in many ways, which complicates the discussion as to its origin. Yockeys work is often difficult to follow.
Calculations which are easily understood and can be performed effortlessly with a spreadsheet or computer program are
needlessly complicated by deriving poorly explained alternative formulations.20 Very problematic in his work is the difficulty in
understanding his multiple uses of the word information. For example, the entropy of iso-1-cytochrome c sequences is called
Information content.21 Then presumably the greater the randomness of these sequences, the higher the entropy and
therefore the higher the information content, right? That makes no sense, and is the wrong conclusion. But why, since
higher entropy of the Source (DNA) according to Shannons theory always indicates more information?I believe this is the
source of much confusion in the creationist and Intelligent Design literature which criticizes Shannons approach as
supposedly implying greater randomness always implies more information.Kirk Durston, a member of the Intelligent Design
community, improves considerably on Yockeys pioneering efforts. He correctly identifies the difference in entropy of all
messages generated by a Source, H0, and the entropy of those messages which provide a particular function, Hf, as the measure of interest. He calls this difference, H0 - Hf, functional information.22 This difference in entropies is actually used by
all those applying Shannons work to biological sequences, whether evolutionists or not, although this fact is not immediately
apparent when reading their papers.Entropies are defined by eqn. (1), but Yockeys approach has a conceptual flaw (and
implied assumption) which destroys his justification for using Shannons Information Theory with protein sequence
analysis.23 Truman already pointed out that Yockeys quantitative results are obtained within little more than a rounding-off
error with the same data, using much simpler standard probability calculations. 24The sum of the entropy contributions at
each position of a protein leads to Hf. To calculate these site entropies, Durston aligned all known primary sequences of a
protein using the ClustalX program, and determined the proportion of each amino acid in the dataset, using eqn. (1). Large
datasets were collected for 35 protein families and the bits of functional information, or Fits, were calculated. Twelve
examples were found having over 500 Fits, or a proportion of < 2^-500 ≈ 3 × 10^-151 among random sequences. The highest
value reported was for protein Flu PB2, with 2416 Fits.Durstons calculations have one minor and one major weakness. To
calculate H0, he assumed amino acids are equiprobable, which is not true. This effect is not very significant, but indeed H 0 is
a little less random than he assumed. The other assumption is that of mutational context independence: that all mutations
which are tolerated individually are also acceptable concurrently. This is not the case, as Durston knows, and the result is
that the amount of entropy in Hf is much lower than he calculated. 15,25,26 The conclusion is that the protein families actually
contain far more Fits of functional information, and represent a much lower subset among random sequences. This effect is
counteracted somewhat by the fact that not all organisms which ever lived are represented in the dataset.Bio-physicist Lee
Spetner, Ph.D. from MIT, is a leading information theoretician who wrote the book Not by Chance.27 He is a very lucid
participant in Internet debates on evolution and information theory, and is adamant that evolutionary processes quantitatively
wont increase information. In his book, he wrote,I dont say its impossible for a mutation to add a little information. Its just
highly improbable on theoretical grounds. But in all the reading Ive done in the life-sciences literature, Ive never found a
mutation that added information. The NDT says not only that such mutations must occur, they must also be probable
enough for a long sequence of them to lead to macroevolution. 28Within Shannons framework, it is correct that a random
mutation could increase information content. However, one must not automatically conflate more information content with
good or useful.29,30Although Spetner says information could be in principle created or increased, Dr Werner Gitt, retired
Director and Professor at the German Federal Institute of Physics and Technology, denies this:Theorem 23: There is no
known natural law through which matter can give rise to information, neither is any physical process or material
phenomenon known that can do this.31In his latest book, Gitt refines and explains his conclusions from a lifetime of
research on information and its inseparable reliance on an intelligent source. 32 There are various manifestations of
information: for example, the spider's web; the diffraction pattern of butterfly wings; development of embryos; and an organ-playing robot.33 He introduces the term Universal Information34 to minimize confusion with other usages of the word
information:Universal Information (UI) is a symbolically encoded, abstractly represented message conveying the expected
actions(s) and the intended purposes(s). In this context, message is meant to include instructions for carrying out a specific
task or eliciting a specific response [emphasis added].35Information must be encoded on a series of symbols which satisfy
three Necessary Conditions (NC). These are conclusions, based on observation.
NC1: A set of abstract symbols is required.
NC2: The sequence of abstract symbols must be irregular.
NC3: The symbols must be presented in a recognizable form, such as rows, columns, circles, spirals and so on.
Gitt also concludes that UI is embedded in a five-level hierarchy with each level building upon the lower one:
statistics (signal, number of symbols)
cosyntics (set of symbols, grammar)
semantics (meaning)
pragmatics (action)
apobetics (purpose, result).
Gitt believes information is guided by immutable Scientific Laws of Information (SLIs). 36,37 Unless shown to be wrong, they
deny a naturalist origin for information, and they are:38
SLI-1: Information is a non-material entity.
SLI-2: A material entity cannot create a non-material entity.
SLI-3: UI cannot be created by purely random processes.
SLI-4: UI can only be created by an intelligent sender.
SLI-4a: A code system requires an intelligent sender.
SLI-4b: No new UI without an intelligent sender.
SLI-4c: All senders that create UI have a non-material component.
SLI-4d: Every UI transmission chain can be traced back to an original intelligent sender
SLI-4e: Allocating meanings to, and determining meanings from, sequences of symbols are intellectual processes.
SLI-5: The pragmatic attribute of UI requires a machine.
SLI-5a: UI and creative power are required for the design and construction of all machines.

SLI-5b: A functioning machine means that UI is affecting the material domain.


SLI-5c: Machines operate exclusively within the physical chemical laws of matter.
SLI-5d: Machines cause matter to function in specific ways.
SLI-6: Existing UI is never increased over time by purely physical, chemical processes.
These laws are inconsistent with the assumption stated by Nobel Prize winner and origin-of-life specialist Manfred Eigen:
The logic of life has its origin in physics and chemistry. 39 The issue of information, the basis of genetics and morphology,
has simply been ignored. On the other hand, Norbert Wiener, a leading pioneer in information theory, understood clearly
that, Information is information, neither matter nor energy. Any materialism that disregards this will not live to see another
day.40It is apparent that Gitt views Shannons model as inadequate to handle most aspects of information, and that he
means something entirely different by the word information.Arch-atheist Richard Dawkins reveals a Shannon orientation to
what information means when he wrote, Information, in the technical sense, is surprise value, measured as the inverse of
expected probability.41 He adds, It is a theory which has long held a fascination for me, and I have used it in several of my
research papers over the years. And more specifically,The technical definition of information was introduced by the
American engineer Claude Shannon in 1948. An employee of the Bell Telephone Company, Shannon was concerned to
measure information as an economic commodity. 42DNA carries information in a very computer-like way, and we can
measure the genomes capacity in bits too, if we wish. DNA doesnt use a binary code, but a quaternary one. Whereas the
unit of information in the computer is a 1 or a 0, the unit in DNA can be T, A, C or G. If I tell you that a particular location in a
DNA sequence is a T, how much information is conveyed from me to you? Begin by measuring the prior uncertainty. How
many possibilities are open before the message T arrives? Four. How many possibilities remain after it has arrived? One.
So you might think the information transferred is four bits, but actually it is two. 40In articles and discussions among nonspecialists, questions are raised such as Where does the information come from to create wings? There is an intuition
among most of us that adding biological novelty requires information, and more features implies more information. I suspect
this is what lies behind claims that evolutionary processes cannot create information, meaning complex new biological
features. Even Dawkins subscribes to this intuitive notion of information:Imagine writing a book describing the lobster. Now
write another book describing the millipede down to the same level of detail. Divide the word-count in one book by the wordcount in the other, and you have an approximate estimate of the relative information content of lobster and
millipede.40Stephen C. Meyer, director of the Discovery Institutes Center for Science and Culture and active member of the
Intelligent Design movement, relies on Shannons theory for his critiques on naturalism. 43,44 He recognizes that some
sequences of characters serve a deliberate and useful purpose. Meyer says the messages with this property exhibit
specified complexity, or specified information.45 Shannons Theory of Communication itself has no need to address the
question of usefulness, value, or meaning of transmitted messages. In fact, he later avoided the word information. His
concern was how to transmit messages error-free. But Meyer points out that molecular biologists beginning with Francis
Crick have equated biological information not only with improbability (or complexity), but also with specificity, where
specificity or specified has meant necessary to function.46I believe Meyers definition of information corresponds to
Durstons Functional Information.
Figure 1. Shannons schematic diagram of a
general communication system.2
William Dembski, another prominent figure in the
Intelligent Design movement, is a major leader in
the analysis of the properties and calculations of
information, and will be referred to in the next parts
to this series. He has not reported any analysis of
his own on protein or gene sequences, but also
accepts that H0 - Hf is the relevant measure from Shannon's work to quantify information.
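As a rough sketch of how the H0 - Hf measure described above can be turned into a number, the following Python fragment scores a toy 'alignment'; the three five-letter sequences are invented for illustration and bear no relation to any real protein family, and real analyses (such as Durston's) use thousands of aligned sequences and corrections not attempted here:

```python
import math
from collections import Counter

def site_entropy(column):
    """Shannon entropy (bits) of the amino acids observed at one aligned site."""
    counts = Counter(column)
    total = sum(counts.values())
    return -sum((n / total) * math.log2(n / total) for n in counts.values())

def functional_bits(alignment, alphabet_size=20):
    """Sum over sites of (H0 - Hf), with H0 = log2(alphabet size) for equiprobable symbols."""
    h0 = math.log2(alphabet_size)
    return sum(h0 - site_entropy(column) for column in zip(*alignment))

# Toy alignment of three 'functional' sequences, five sites long.
alignment = ["MKVLA", "MKILA", "MRVLA"]
print(round(functional_bits(alignment), 1))   # about 19.8 bits ('Fits') for this toy case
```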
In part 2 of this series Ill show that many things are
implied in Shannons theory that indicate an underlying active intelligence.
Thomas Schneider is a Research Biologist at the National Institutes of Health. His Ph.D. thesis in 1984 was on applying
Shannons Information Theory to DNA and RNA binding sites and he has continued this work ever since and published
extensively.17,47

Figure 2. The transmission of genetic message from the DNA tape to the protein tape, according to Yockey.17
Senders and receivers in information theories

There is common agreement that a sender initiates transmission of a coded message which is received and decoded by a
receiver. Figure 1 shows how Shannon depicted this and figure 2 shows Yockeys version.2,17
Figure 3. A comprehensive diagram of the five levels
of Universal Information, according to Gitt.32
A fundamental difference in Gitts model is the
statement that all levels of information, including the
Apobetics (intended purpose) are present in the
Sender (figure 3). All other models treat the Sender as
merely whatever releases the coded message to a
receiver. In Shannons case, the Sender is the
mindless equipment which initiates transmission to a
channel. For Yockey the Sender is DNA, although he
considers the ultimate origin of the DNA sequences
open. Gitt distinguishes between the original and the
intermediate Sender.48
Humans intuitively develop coded information systems
Humans interact with
coded messages with such phenomenal skill, most
dont even notice what is going on. We discuss
verbally with ease. Engineers effortlessly devise
various designs: sometimes many copies of machines
are built and equipped with message-based
processing resources (operating systems, drivers,
microchips, etc.). Alternatively, the hardware alone could be distributed and all the processing power provided centrally
(such as the dumb terminals used before personal computers). To illustrate, intellectual tools such as reading, grammar,
and language can be taught to many students in advance. Later it is only necessary to distribute text to the multiple human
processors.The strategy of distributing autonomous processing copies is common in nature. Seeds and bacterial colonies
already contain preloaded messages, ribosomes already possess engineered processing parts, and so on.
Conclusion
The word information is used in many ways, which complicates the discussion as to its origin. The analysis shows two
families of approaches. One is derived from Shannons work and the other is Gitts. To a large extent the former addresses
the how question: how to measure and quantify information. The latter deals more with the why issue: why is information
there, what is it good for?The algorithmic definition of information, developed by Solomonoff and Kolmogorov, with
contributions from Chaitin, is rarely used in the debate about origins and in general discussions about information currently.
For this reason it was not discussed in this part of the series.
Information Theorypart 2: weaknesses in current conceptual frameworks
by Royal Truman
The origin of information is a problem for the theory of evolution. But the wide, and often inconsistent, use of the word
information often leads to incompatible statements among Intelligent Design and creation science advocates. This hinders
fruitful discussion. Most information theoreticians base their work on Shannons Information Theory. One conclusion is that
the larger genomes of higher organisms require more information, which raises the question of whether this could arise
naturalistically. Lee Spetner claims no examples of information-increasing mutations are known, whereas most ID advocates
only claim that not enough bits of information could have arisen during evolutionary timescales. It has also been proposed
that nature reflects the intention of the designer, and therefore all lifeforms might have the same information content. Gitt
claims information cant be measured. The underlying concepts of these theoreticians were discussed in part 1 of this
series. In part 3 a solution will be offered for the difficulties documented here.
Origin-of-life researcher Dr Küppers defines life
as matter plus information.1 Having a clear and
common understanding of what we mean by
information is necessary for a fruitful
discussion about its origin. But in part 1 of this
series I pointed out that various researchers of
evolutionary and creationist persuasion give
the word very different definitions.2 Creation
Magazine often draws attention to the need for
information-adding mutations if evolutionary
theory is true, for example:Slow and gradual
evolutionary modification of these crucial organs of movement would require many information-adding mutations to occur in
just the right places at just the right times. 3What does information mean? Williams introduced many useful thoughts in this
journal in a three-part series on biological information.46 Consistent with the usage of information above, he points out that
Creationists commonly challenge evolutionists to explain how vast amounts of new information could be produced that
would be required to turn a microbe into a microbiologist.7On the same page he adds, But the extra wings arose from three
mutations that switched off existing developmental processes. No new information was added. Nor was any new capability/functionality achieved.
I understand and agree with the intuition behind this usage of the word information. Nevertheless, even
literature sold by Creation Ministries International, such as MIT Ph.D. Lee Spetners classic Not by Chance!,8 are not using
information in the same sense. Spetner is an expert on Shannons Theory of Communication (information) and is one of the
most lucid writers on its application.Sometimes creationists (e.g. Gitt) state that information cannot, in principle, arise
naturally whereas others (e.g. Stephen Meyer, Lee Spetner) are saying that not enough could arise for macro-evolutionary
purposes.2The view that not enough time was available to add the necessary information found in genomes (based on one
definition of information) becomes clouded when Williams argues that the Darwinian arguments are without force, since it is
clear that organisms are designed to vary.9,10 Behind this reasoning lies a different usage of information.
Williams even implies that information cannot be quantified at all:

a new, useful enzyme will not contain more information than the original system because the intention remains the same
to produce enzymes with variable amino acid sequences that may help in adapting to new food sources when there is
stress due to an energy deficit.9I believe that approach should be reconsidered, especially if intention is defined in such
generic, broad terms. Suppose the intention is to help ones daughter get better grades at school. The above suggestion
seemingly assigns the same amount of information whether a two-minute verbal explanation is offered, or years of private
tutoring over many topics.I believe most of Williams intuitions are right, but hope the model given in the third part of this
series will bring the pieces together in a more unified manner. Williams suggests that other codes are present in the cell
environment in addition to the one used by DNA. He once made the significant statement: We could, in theory, quantify this
information using an algorithmic approach, but for practical purposes it is enough to note that it is enormous and noncoded.11I agree that information can also be non-coded, but it is not apparent how an algorithmic measure of information
could be used, a topic Bartlett has devoted effort to. 12The precise definition of information has dramatic consequences on
the conclusions reached. Gitt believes information cannot be quantified. Others believed it can, and in exact detail. Weber,
Claude Shannons thesis supervisor, had this to say:It seems very reasonable to want to say that three relays could handle
three times as much information as one. And this indeed is the way it works out if one uses the logarithmic definition of
information.13When asked by creationists if he knew of any biological process that could increase the information content of
a genome, Dawkins could not answer the question.6,14 He subscribes to Shannons definition of information and understands
the issue at stake, writing later:Therefore the creationist challenge with which we began is tantamount to the standard
challenge to explain how biological complexity can evolve from simpler antecedents.15Several years ago Answers in
Genesis sponsored a workshop on the topic of information. Werner Gitt proposed we try to find a single formulation
everyone could work with. This challenge remains remarkably difficult, because people routinely use the word in different
manners.In 2009, Gitt offered the following definition for information in this journal,16 which at the advice of Bob Crompton he
now calls Universal Information (UI).Information is always present when all the following five hierarchical level are observed
in a system: statistics, syntax, semantics, pragmatics and apobetics.Let us call this Definition 1. Gitt also states that he now
uses UI and information interchangeably.17I have collaborated with Werner Gitt during the last 25 years of so on various
topics, and the comments which follow are not to be construed as criticism against him or his work. 18 At times it seems there
is a discrepancy between what he means and how it is expressed on paper. 19 Considerable refinement has occurred in his
thinking, and I hope to contribute by a critical but constructive attempt at further improvement.The variety of usages of the
word information continues to trap us. When Gitt wrote:Theorem 3. Information comprises the nonmaterial foundation for all
technological systems and for all works of art20andRemark R2: Information is the non-material basis for all technological
systems,21he appears to have switched to another (valid but different) usage of the word information. For example, it is not
apparent why valuable technologies like the first axe, shovel, or saw depended on the coded messages (statistics, syntax)
portion of his definition of information, a definition which seems to require all five hierarchical levels to be present.As another
example of inconsistent, or at least questionable, usage of the word, we read that the information in living things resides
on the DNA molecule.22The parts of the definition of information which satisfy apobetics (purpose, result) do not reside on
DNA. External factors enhance and interplay with what is encrypted and indirectly implied on DNA, but apobetics is not
physically present there. To illustrate, neuron connections are made and rearranged as part of dynamic learning, interacting
with external cues and input, but the effects are neither present nor implied on DNA.Another important claim needs to be
evaluated carefully. Gitt often states the premise thatThe storage and transmission of information requires a material
medium.23It is true that non-material messages can be coded and impregnated on material media. But information can be
relayed over various communication channels. Must all of them be material based? If so, then all, or virtually all,
the information-processing components in intelligent minds could only be material. Let us see why. Suppose one wishes to
translate ridiculous into German. The intention to translate, and the precise semantic concept itself, are surely encoded and
stored somewhere. This intention must be transmitted elsewhere to other reasoning facilities, where a search strategy will
also be worked out. All of this occurs before the search request is transmitted into the physical brain, but information is
already being stored and transmitted in vast amounts.Furthermore, is the mind/brain interface, part of the transmission path,
100% material?23 We begin to see that Gitts statement seems to imply that wilful decision making and the guidance of
decisions must be material phenomena. Now, as soon as the German word lächerlich is extracted from the biological
memory bank,24 it must be transferred from the brains apparatus into the wilful reasoning equipment and compared to the
information which prompted the search. A huge amount of mental processing (i.e. data storage and transmission) will now
occur: are the English and German words semantically synonymous for some purpose, or should more words be searched
for?Irrwitzig could be a new candidate, but which translation is better? What are all the associations linked to both German
words? Should more alternatives be looked up in a dictionary? Finally, decisions will be made as to what to do with the
preferred translation (stored as the top choice and mentally transmitted to processing components where the intended
outcome will be planned).25More to the point, must angels, God, and the soul rely on a material medium to store and
transmit information?This objection is serious, because of the frequent statements that all forms of technology and art are
illustrations of information. An artist can wordlessly decide to create an abstract painting. Where are the statistics, syntax,
and semantics portion of the definition of UI? If in the mentally coded messages (which we read above must be material)
then either UI is material based or all aspects of created art and tool-making (technology) need not be UI.In part 3 of this
series Ill offer a simple solution to these issues.Gitt offers a new definition for UI in his 2011 book Without Excuse:Universal
Information (UI) is a symbolically encoded, abstractly represented message conveying the expected actions(s) and the
intended purposes(s). In this context, message is meant to include instructions for carrying out a specific task or eliciting a
specific response [emphasis added].26Let us call this Definition 2. This resembles one definition of information in Websters
Dictionary: The attribute inherent in and communicated by alternative sequences or arrangements of something that
produce specific effects.I dont believe Definition 2 is adequate yet. Only verbal communication seems to be addressed. It
implies that the symbolically encoded message itself must convey the expected actions and intended purposes, but in part 3
I'll show that this need not be so, and is probably never completely true. Sometimes the coded instructions themselves do
convey portions of the expected actions and purpose. This is observed when the message communicates how machines
are to be produced which are able to process remaining portions of the message (like DNA encoding the sequence data for
the RNA and proteins needed to produce the decoding ribosome machinery). I would agree that the messages often
contribute to, but do not necessarily themselves specify the purpose. Communicating all the necessary details would be
impractical.Consider the virus as an example. The expected actions(s) and the intended purpose(s) are not communicated
by the content of their genomes, nor are the instructions to decode the implied protein (the necessary ribosomes are
provided from elsewhere). Some viruses do provide instructions to permit insertion into the host genome and other
intermediary outcomes which can contribute to, but not completely specify, the final intended purposes.Another difficulty with
Definition 2 is that it does not distinguish between push and pull forms of coded interactions. The code message, What is
the density of benzene? could be sent to a database. This message, a pull against an existing data source, does not convey
the expected actions(s) or the intended purposes.Of the researchers discussed in part 1 of this series, Gitts model offers

the broadest framework for a theory of information for the purposes of analyzing the origin of life. He has refined his
thoughts continually over the years, but I fear the value will soon plateau out without the change of direction we'll see in part
3. One reason is that it wont permit quantitative conclusions.If an evolutionist is convinced that all life on Earth derived from
a single ancestor, then ultimately all the countless examples of DNA-based life are only the results of one single original
event. Therefore, Gitts elevation of his theorems to laws will seem weak compared to the powerful empirical and
mathematically testable laws of physics, for which so many independent examples can be found and validated
quantitatively.27 Im sure Gitts Scientific Laws of Information (SLI) will never be disproven because my analysis (introduced
in part 3 and 4 of this series) of what would be required to create code-based systems makes their existence without
intelligent guidance absurdly improbable. Others may find my reasoning in part 3 more persuasive than calling observed
code-based principles laws, since they seem to be based on such limited datasets.The contributions of other information
theoreticians are quantifiable. Although limited to the lower portions of Gitts five hierarchies, I find much merit in them, and
their ideas can be included as part of a general-purpose theoretic framework (see part 3). When Gitt wrote:To date,
evolutionary theoreticians have only been able to offer computer simulations that depend upon principles of design and the
operation of pre-determined information. These simulations do not correspond to reality because the theoreticians smuggle
their own information into the simulations.It is not clear, based on his own definition, what was meant by pre-determined
information. I will show in part 3 that the path towards pragmatics and apobetics can be aided with resources which do not
rely on the lower levels (statistics, syntax, and semantics). The notion of information being smuggled into a simulation is
widely discussed in the literature, and very competently by Dembski and Marks,28 who show how the contribution by
intelligent intervention can be quantified.Absurdly, Thomas Schneider claims his simulation begins with zero information 29
andThe ev model quantitatively addresses the question of how life gains information, a valid issue recently raised by
creationists (R. Truman, www.trueorigin.org/dawkinfo.htm) but only qualitatively addressed by biologists.30,31 Schneider's
simulations only work because they were designed to do so, and are intelligently guided. 32 This has been quantitatively
addressed by William Dembski.32 Furthermore, the framework in part 3 will show that Gitts higher levels can also be
quantified.
Gitts four most important Scientific Laws of Information, published in this journal,17,18 are:
SLI-1: A material entity cannot generate a non-material entity.
SLI-2: Universal information is a non-material fundamental entity.
SLI-3: Universal information cannot be created by statistical processes.
SLI-4: Universal information can only be produced by an intelligent sender.
Can we be satisfied that these are robustly formulated according to Definitions 1 and 2, above? For SLI-1 the question of
complete conversion of matter into energy should be addressed.
What about SLI-2 through SLI-4? I see no chance they would be falsified if we were to replace Universal information by
coded messages, which is integrated into UI. With a slight change in focus, introduced in part 3, I believe a stronger case
can be made.
SLI-2SLI-4 using Definition 1
For SLI-2 it is unclear what entity means, since the definition says, Information is always present when …, and the
grammar does not permit the thoughts to be linked. Since apobetics is not provided by the entity making use of DNA, this
definition still needs work. Nevertheless, the definition includes the thought in a system and this is a major move in the right
direction (see part 3).
SLI-3 surely cant be falsified, since the definition requires the presence of apobetics, which seems incompatible with
statistical processes. There seems to be a tautology here, since statistical processes describe outcomes with unknown
precise causes, whereas apobetics is a deliberate intention.
SLI-4 makes a lot of sense, but only if one understands UI to refer to a multi-part system and not an undefined entity.
SLI-2SLI-4 using Definition 2
For SLI-2 it is unclear what entity means, presumably the message. But it is questionable that the message must be
responsible to convey the expected actions(s) and the intended purpose(s). Decision-making capabilities could exist a priori
on the part of the receiver, who pulls a coded message from a sender, and then performs the appropriate actions and
purposes. The actions and purposes need not be conveyed by the message. Cause and effect here can be reversed.
Example 1. The receiver wishes to know what time it is. A coded message is sent back. The receiver alone decides what to
do with the content of the message.
Example 2. A rich man compares prices of various cars, airplanes, and motorboats. The coded information sent back
(prices) does not convey the expected actions(s) nor the intended purposes(s). The man provides the additional input, not
the message!
SLI-3 and SLI-4 make sense.
Value of Shannon's work is underrated
Much criticism is voiced in the creation science literature about Shannon's definition of information, which he preferred to call 'communication', dealing as it does with only the statistical characteristics of the message symbols. Given Shannon's goal of determining the maximum throughput possible over communication channels under various scenarios, it is true that the meaning and intention of the messages play no role in his work. My concern is that his critics may have overlooked some deeper implications which Shannon himself did not draw attention to. There are good reasons why all the researchers mentioned8 in part 1 use expressions like 'according to information theory' or 'the information content is …' when discussing their analysis of biological sequences like proteins. Implicit in these researchers' comments are notions of goals, purpose, and intent. These are notions associated with information in generic, layman's terms. 'Information theory' inevitably refers to Shannon's work, even though the claims made about his work cannot be found directly in his own pioneering publications.
Are there reasons why the goal-directing effects of coded messages, like mRNA, remind us of Shannon's information theory? Is the wish to attain useful goals intentionally implied in Shannon's work? The answer is yes. Here are some examples.
Corruption of the intended message. The series of symbols (messages) transmitted can be corrupted en route. Shannon devotes considerable effort to analysing the effects of noise and how much of the original, intended message can be retrieved. But why should inanimate nature care what symbols were transmitted? Implicit is that there is a reason for transmitting specific messages.
Optimal use of a communication channel. If there are patterns in the strings of symbols to be communicated, then better codes can often be devised. Suppose an alphabet consists of four symbols, used to communicate sequences of DNA nucleotides (abbreviated A, C, G, or T) or perhaps to identify specific quadrants at some location. Statistical analysis can reveal the probabilities, p, with which each symbol needs to be transmitted, e.g. A (p = 0.9), C (p = 0.05), G (p = 0.04), and T (p = 0.01). We decide to devise a binary code. We could assign a two-bit codeword (00, 01, 10, 11) to each symbol, so that on average a message requires two bits per symbol. However, more compact codes could be devised for this example. Let us invent one and assign the shorter codewords to the symbols which need to be transmitted more often: A = 0; C = 10; G = 111; T = 110. A message is easily decoded without needing any spacers. For example, 0010011000010 can only represent AACATAAAC. On average, messages using this coding convention will have a length of 1 × 0.9 + 2 × 0.05 + 3 × 0.04 + 3 × 0.01 = 1.15 bits/symbol, a considerable improvement. Implicit in this analysis is that it is desirable for some purpose to be able to transmit useful content and to minimize waste of the available bandwidth. It is also implied that an intelligent engineer will be able to implement the new code, an assumption which makes no sense in inanimate nature.33
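A minimal sketch (Python) makes the arithmetic concrete; the probabilities and codewords are exactly those assumed above, and the decoding shows why no spacers are needed:

probs = {'A': 0.90, 'C': 0.05, 'G': 0.04, 'T': 0.01}
code  = {'A': '0',  'C': '10', 'G': '111', 'T': '110'}    # shorter codewords for the frequent symbols

avg_len = sum(probs[s] * len(code[s]) for s in probs)     # 1*0.9 + 2*0.05 + 3*0.04 + 3*0.01
print(avg_len)                                            # 1.15 bits/symbol, versus 2.0 for a fixed-length code

def decode(bits, code):
    # prefix code: no codeword is the prefix of another, so codewords can be read back unambiguously
    inverse = {w: s for s, w in code.items()}
    out, word = [], ''
    for b in bits:
        word += b
        if word in inverse:
            out.append(inverse[word])
            word = ''
    return ''.join(out)

print(decode('0010011000010', code))                      # -> 'AACATAAAC'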
Calculation of joint entropy and conditional entropy. Various technological applications exist for mathematical relationships such as joint and conditional entropies. Calculating these requires knowing about the messages sent and those received. Nature has no way or reason to do this. By performing these calculations one senses that intelligent beings are analyzing something, and for a purpose.
Warren Weaver, Shannon's mentor and co-author of the book edition published in 1949, discerned that meaning and intentionality are implied in their work. In the portion he wrote we read, 'But with any reasonably broad definition of conduct, it is clear that communication either affects conduct or is without any discernible and probable effect at all.'34 And Gitt's work is foreshadowed in insights like:
'Relative to the broad subject of communication, there seem to be problems at three levels. Thus it seems reasonable to ask, serially:
LEVEL A. How accurately can the symbols of communication be transmitted? (The technical problem.)
LEVEL B. How precisely do the transmitted symbols convey the desired meaning? (The semantic problem.)
LEVEL C. How effectively does the received meaning affect conduct in the desired way? (The effectiveness problem.)'35
Concern about Shannon's initiative
Two reasons are often mentioned for claiming information theory has no relevance to common notions of information:
More entropy supposedly indicates more information. But how can this be, since a crystal with high regularity surely contains much order and little information? And the chaos-increasing effects of a hurricane surely destroy organization and information.
Longer messages imply more information. Really? Does the message 'Today is Monday' provide less information than 'Today is Monday and not Tuesday'? Or less than 'Tdayy/$ *!aau!##$ is Modddndday'?
These two objections, commonly encountered, reflect a weak understanding of the topic and prevent extracting much of the value available.
For purposes of creation-vs-evolution discussions, a good suggestion is to profit from the mathematics Shannon drew attention to, but to avoid referring to 'information theory' entirely. Shannon himself only used the phrase 'theory of communication' later in his life. For most purposes we are interested in probability issues: how likely are naturalist scenarios, based on specific mechanisms?
Generally, we can limit ourselves to three simple equations which are not unique contributions from Shannon.
The definition of entropy was already developed for the field of statistical thermodynamics:
$H = -\sum_i p_i \log_2 p_i$  (1)
H refers here to the entropy per symbol, such as the entropy of each of the four nucleotides on DNA, and $p_i$ is the probability of symbol i.
The Shannon–McMillan–Breiman Theorem is useful to calculate the number of high-probability sequences of length N symbols, having an average entropy H per symbol:
number of high-probability sequences $\approx 2^{NH}$  (2)
An example is shown in table 1. When the distribution of all possible symbols, s, at each site is close to fully random, the numbers of messages calculated by $2^{NH}$ and $s^N$ are reasonably similar. The symbols could be amino acids, nucleotides or codons. Eqn (2) is important for low-entropy sets; see table 1.
Table 1. Example of eqn (2) to calculate the number of high-probability sequences based on the entropy, H*. The first column shows the probability of one amino acid being found at a site, and in the second column we assume the remaining 19 amino acids are distributed equally. A protein with N = 200 amino acids (AA) is assumed.
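A short sketch (Python) illustrates the spirit of eqn (2) and table 1; the per-site model (one amino acid with probability p, the remaining 19 sharing the rest equally) and N = 200 follow the table caption, while the particular p values chosen here are merely illustrative:

import math

def site_entropy(p, alphabet=20):
    # eqn (1) for one site: one amino acid with probability p, the other 19 equally likely
    q = (1 - p) / (alphabet - 1)
    return -(p * math.log2(p) + (alphabet - 1) * q * math.log2(q))

N = 200
for p in (0.05, 0.5, 0.9):            # p = 0.05 corresponds to a fully random site
    H = site_entropy(p)
    print(p, round(H, 3),
          f"2^(N*H) = 10^{N * H * math.log10(2):.0f}",
          f"vs 20^N = 10^{N * math.log10(20):.0f}")
# For p = 0.05 the two counts agree; for low-entropy sites 2^(N*H) is vastly smaller than 20^N.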
The difference in entropy at each site along two sequences is of paramount interest:
$\Delta H = H_0 - H_f$  (3)
where $H_0$ is the entropy at the Source and $H_f$ at the Destination. To analyze proteins, these two entropies refer to amino acid frequencies, calculated at each aligned site. The sequences used to calculate $H_f$ perform the same biological function. Following a suggestion by Kirk Durston, let us call $H_0 - H_f$ the Functional Information36 at a site. The sum over all sites is the Functional Information of the sequence.
What does eqn (3) tell us? If the entropy of a Source is unchanged, the lower the entropy observed at a Receiver, the higher the Functional Information involved (figure 1).
Figure 1. (Entropy of the Source) − (Entropy of the Receiver) defines Functional Information (FI) for a specific purpose. In A and B, H_Source is the same, so FI for case A is greater than for B. Dark circles represent different messages, or strings of symbols.
On the other hand, if the entropy of a Receiver is unchanged, the higher the entropy observed at a Source, the higher the Functional Information involved (figure 2).
The ideas expressed in these three equations can be applied in various manners. Suppose the location at which arrows land on a target is to be communicated via coded messages (figure 3). A very general-purpose design would permit all locations in three dimensions over a great distance to be specified at great precision, applicable to target practice with guns, bows, or slingshots. The entropy of the Source would now be very great.
Another design would limit what could be communicated to small square areas on a specific target, with one outcome indicating the target was missed entirely. The demands on this Source would be much smaller, its entropy more limited, and the messages correspondingly simpler. A variant design would treat each circle on the target as functionally equivalent, restricting even more the range of potential outcomes which need to be communicated by the Source.
To prepare our thinking for biological applications, suppose the Source can communicate locations anywhere within 100 m to high precision, and that we know very little about the target. We wish to know how much skill is implied to attain a bullseye. Anywhere within this narrow range is considered equivalent. We are informed of every outcome and whether a bullseye occurred. We can use eqn (1) to calculate $H_0$ for all locations communicated and the entropy of the bullseye, $H_f$. Eqn (3) is the measure of interest, and eqn (2) can be used to determine the proportion of desired-to-non-desired outcomes.
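A minimal sketch (Python) of the arrow-and-target illustration; the 100 × 100 grid of reportable positions and the 25-cell bullseye are assumed numbers, chosen only to show how eqns (1) and (3) combine:

import math

cells = 100 * 100               # Source can report any of 10,000 equally likely positions
bullseye_cells = 25             # positions counted as a bullseye (assumed for the example)

H0 = math.log2(cells)           # entropy of the Source, eqn (1) with equiprobable outcomes
Hf = math.log2(bullseye_cells)  # entropy of the outcomes once only bullseyes are observed
FI = H0 - Hf                    # eqn (3): functional information implied by the archer's skill
print(round(H0, 2), round(Hf, 2), round(FI, 2))   # 13.29, 4.64, 8.64 bits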
Figure 2. (Entropy of the Source) − (Entropy of the Receiver) defines Functional Information (FI) for a specific purpose. In A and B, H_Receiver is the same, so FI for case A is greater than for B. Dark circles represent different messages, or strings of symbols.
Of much interest for creation research is the proportion of the 'bullseye' region represented by functional proteins. This is calculated as follows. A series of sequences for the same kind of protein from different organisms are aligned, and the probability of finding each of the 20 possible amino acids, $p_i$, is calculated at each site. The entropy at each site is then calculated using eqn (1), the value of which is $H_f$ in eqn (3). The average entropy of all amino acids being coded by DNA for all proteins is the $H_0$ in eqn (3).
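The same calculation can be sketched in a few lines (Python). The toy alignment below is invented, and $H_0$ is approximated here by log2(20), i.e. 20 equiprobable amino acids, rather than the DNA-wide average described above:

import math

def entropy(column):
    # Shannon entropy, eqn (1), of one aligned site, in bits
    counts = {}
    for aa in column:
        counts[aa] = counts.get(aa, 0) + 1
    n = len(column)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

alignment = ["MKTAY", "MKSAY", "MRTAY", "MKTGY"]   # hypothetical aligned protein fragments
H0 = math.log2(20)                                 # assumed reference entropy per site

FI = 0.0
for site in zip(*alignment):                       # iterate over aligned columns
    Hf = entropy(site)
    FI += H0 - Hf                                  # eqn (3), summed over all sites
print(round(FI, 2), "bits of Functional Information for this toy alignment")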
To these three equations let us add three suggestions:
Always be clear whether entropy refers to the collection of messages being generated by the Source; to the messages received at the Destination; or to the resulting entropy of the objects produced by processing the messages received.
Take intentionality into account when interpreting entropies.
Work with bits no matter what code is used. A bit of data can communicate a choice between two possibilities; two bits, a choice from among four alternatives; and n bits, a choice from among 2^n possibilities. If the messages are two bits long and each symbol (0 or 1) is equiprobable, it is impossible to reliably specify one of eight possible outcomes.
The symbols used by a code are part of its alphabet. The content of messages based on non-binary codes can also be expressed in bits, and the messages could be transformed into a binary code. For example, DNA uses four symbols (A, C, G, T), so each symbol can specify up to 2 bits per position. Therefore, a message like ACCT represents 2 + 2 + 2 + 2 = 8 bits, so 2^8 = 256 different messages of length four could be created from the alphabet (A, C, G, T). This can be confirmed by noting that 4^4 = 256 different alternatives are possible.
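A few lines (Python) confirm this bookkeeping:

import math
alphabet = "ACGT"
bits_per_symbol = math.log2(len(alphabet))   # 2.0 bits per nucleotide position
message = "ACCT"
print(len(message) * bits_per_symbol)        # 8.0 bits for a four-symbol message
print(len(alphabet) ** len(message))         # 256 possible four-symbol messages = 2**8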
Figure 3. Coded messages are to communicate the location at which arrows land on a target. Various designs are possible, depending on the intended use. Precise locations could be communicated, or only the relevant circle, or only the location within the bullseye circle. If the outcomes are far from random, effective and highly compressed codes can be devised to shorten the average length of messages sent with no loss of precision.
We are now armed to clarify some confusion and to perform useful calculations. The analysis is offered as the on-line Appendix 1.37 Appendix 237 (also on-line) discusses whether mutations plus natural selection could increase information, using a Shannon-based definition of information.
Conclusion
People frequently discuss an immaterial entity called information. 'Information Theory' usually refers to Shannon's work. The many alternative meanings of the word lead to ambiguity and detract from the issue of its origin. What could be meant when one claims that many copies of the same information do not increase its quantity? It cannot refer to Shannon's theory. Information in this case could mean things like the explanatory details or know-how to perform a task, usable by an intelligent being. Shannon's model, however, claims that two channels transmitting the same messages convey twice as much information as only one would.
What about a question like, 'Where does the information come from in a cell, or to run an automated process?' Here information could mean the coded instructions or know-how which guide equipment and lead to useful results.
The discussion in parts 1 and 2 of this series is not meant to favour nor criticize how others have chosen to interpret the word information. Many valuable insights can be gleaned from this literature. For purposes of gaining a broader view of all the components involved in directing processes to a desired outcome, I felt the need to move in another direction, which will be explained in parts 3 and 4.
The evolutionary community is uncomfortable with the topic of information, but the issue is easier to ignore when there is disagreement on very basic issues, such as whether it can be quantified and whether higher life-forms contain more information or not. Covering so many notions with the same word is problematic, and in part 3 a solution will be proposed.
Information Theory – part 3: introduction to Coded Information Systems
by Royal Truman
The literature about information is confusing because so many properties are described for supposedly a singular entity. The discussion can be more fruitful once we realize we are studying systems with many components, one of which is a coded message. We introduce the notion of a Coded Information System (CIS) and can now pose an unambiguous question, 'Where do CISs come from?', which should be more precise than the vague alternative, 'Where does information come from?' We can develop a model which is quantifiable by focusing on the effects a CIS has on organizing matter through a sequential set of refining steps.
In part 1 of this series1 I demonstrated that there are many usages of the word information, with many specialists working on different notions. Dretske points out that 'It is much easier to talk about information than it is to say what it is you are talking about … . It has come to be an all-purpose word, one with the suggestive power to fulfil a variety of descriptive tasks.'2 In part 2 of this series3 I drew attention to issues in various
information-theoretic models which seem problematic. There seems to be a common intuition that information leads to a desired outcome. But is information only vaguely (if at all) involved in attaining the intended goal (as implied by Shannon's theory4), or fully, as Gitt maintains?5
Coded messages play a prominent role in Gitt's framework,6,7 and are clearly indispensable for the first three levels of his model (statistics, cosyntics, and semantics), but it is not apparent how symbolic messages appear directly in the last two levels (pragmatics and apobetics). And what exactly is a coded message? The gun fired to start a race consists of only one symbol. Statistics and cosyntics are missing, but meaning (semantics) is present. Was a message sent? Is this information?
Schneider claims8 to show with a computer program that information can arise for free, autonomously, but Dembski argues decisively9 that the necessary resources were intelligently embedded into the program in different ways, and shows that information is provided whenever a suitable search algorithm is selected from among other possible ones.10–15 Can these ideas be reconciled to permit a coherent discussion?
Surely information is more than mere cause–effect mechanics. Sometimes information is claimed to cause something via mechanical means. For example, the direction and force generated by a billiard cue have been said to provide the information to guide the ball. But all natural causes lead to some effect! So when is information involved?
Now, machines are also used by living beings to achieve a goal. Some, like computers, work with coded messages. What about a watermill which grinds grain into meal? The water provides the energy needed for the machine to work. One could adjust the amount of force delivered upon the rotating wheel by changing the amount of water provided and the drop height. But there is no coded message in this kind of machine.
Although the disagreements about how to define information are rampant, virtually no one would argue the subject matter is vacuous, a meaningless debate of empty words. Pioneering thinker Norbert Wiener stated correctly that 'Information is information, neither matter nor energy', but this left unanswered what it is. And even experts vacillate between different meanings of the word, so readers might not know exactly what is implied in each case. The confusion arises from a multitude of (sometimes only weakly related) ideas applied to a single word, information.
To illustrate, Gitt assigns both statistics and apobetics to information in living systems, but how and where? On DNA? Statistics can indeed be discerned from gene sequences, but surely not intended purpose (apobetics). The goal does not reside on DNA, neither fully nor by implication. As we'll see later, DNA is only one of multiple contributing factors needed to produce an intended outcome.
As a second example, one of Gitt's Universal Laws of Information is: 'SLI-4c. Every information transmission chain can be traced back to an intelligent sender.'16 In the same paper he also writes, 'Remark R3: The storage and transmission of information requires a material medium.' It seems that 'transmission chain' must be referring to coded messages, a stream of symbols on a physical medium. However, why must the other mandatory elements of his information or Universal Information (semantics, pragmatics, and apobetics) reside on, and be transmitted by, a material medium? Must an intelligent mind and all its parts be 100% material? I am not claiming there is contradiction in what Gitt writes. In fact, I edited and endorsed his last book.5 Careful consideration of his work reveals that information is somehow distributed in separate, organized ensembles of matter, energy, and mind (e.g. the statistics vs the pragmatics portion) with different properties and functions. This makes an answer to 'What is information?' almost impossible. And it leads to a struggle to find words with compound meanings to convey the multiplicity of functions assigned to information. His second law states, 'SLI-2: Universal information is a non-material fundamental entity.' 'Entity', in this statement, merely replaces 'Universal Information', and one does not know what it might mean. It reflects the search for a missing, suitable explanatory construct and therefore provides no additional insight beyond the phrase 'Universal information is non-material'. But I believe the simple proposal introduced below will retain almost all his views in a coherent manner.
When someone asks, 'Where does the information come from which causes a fertilized egg to become an adult?', it seems that a series of linked, guided processes is implied. 'Processes' is plural, whereas a singular word, information, does not capture this intuition very well.
What needs to be explained?
Analysing the world around us, we note a family of phenomena which are not explained by deterministic law or random
behaviour. Examples include:
Birds migrate to specific locations during certain time periods.
Thousands of proteins are formed each minute in a cell and their concentrations and locations are carefully regulated.
A few bacteria can reproduce into a large colony, metabolizing nutrients to survive.
A foetus develops into an adult.
Caterpillars metamorphose into butterflies.
Assembly lines produce hundreds of cars each day.
A few years after lava devastates a landscape, a new ecology develops.
Text on a computer screen can be transferred to a printed sheet of paper.
Deaf people communicate with a sign language.
Satellites are sent to a planet and back.
Figure 1. Complex equipment sends symbols to a Receiver able to receive and process the coded message. The shapes between Message Sender and Message Receiver represent symbols of a coded message.
The above outcomes occur repeatedly, and what we observe does not follow naturalistic (mechanical) principles. Some observations become readily apparent.
Observation 1. A series of linked processes are involved.
Observation 2. Members of these processes sequentially refine and contribute towards a goal.
Observation 3. A coded message is used somewhere along the chain of processes. Complex equipment generates a series
of symbols, usually embedded on a physical medium, 17 which another piece of complex equipment receives, resulting in a
measurable change in behaviour of a system attached to the Message Receiver (figure 1).
Observation 4. All these kinds of systems are associated with living organisms.
Making a fresh start
To uniquely specify our area of interest, we exclude all systems and machines which do not use a coded message somewhere in the process. We are left with phenomena which have something to do with information, and we wonder where such systems come from. But asking, 'Where does information come from?', is too vague for our scientific enterprise.
Clearly we are observing systems, with many independent, but linked, components. We need a definition for these message-based systems, and then we need to consider how they could arise. Based strictly on observation, we make the following definition: A Coded Information System (CIS) consists of linked tools or machines which refine outcomes to attain a specific goal. A coded message plays a prominent role between at least two members of this linked series. CIS theory recognizes Gitt's five sequential processes: statistics, cosyntics, semantics, pragmatics, and apobetics.5
Messages vs sensors in CIS theory
Coded messages are formed by ordered codewords,18 which themselves consist of symbols from a coding alphabet. Messages must conform to the grammatical rules devised for that coding system.
Cues or sensors are often found in a CIS but should not be considered coded messages. For example, a sensor could be composed of two metal parts, the volumes of which respond differently to temperature. When the temperature increases, selective expansion of one of the metals causes the construct to bend, bringing the tip of the sensor into contact with a critical element to trigger an action (such as by permitting a current to flow). Taste and smell receptors are unique to specific chemical structures, and are also sensors. If the interaction at a detector is a simple physical effect, and a signal is transmitted without an alphabet of symbols independent of the carrier, then we have a sensor and not a coded message.19 However, sensors are often valuable components of a CIS, and signals received by sensors could be converted into coded messages, as will be shown later.
As one example, barn owls use two methods to localize sounds: the time differential between the arrival of a sound at each ear (the interaural time) and the variance in the sound's intensity as it arrives at each ear.20 Are these cues, interacting directly with the external physical factors, coded messages? Not at the point of external contact, which is based on strict physical relationships, with no alphabet nor grammar.
As another example, a photoreceptor on a retina absorbs a photon, causing 11-cis-retinal to isomerize to 11-trans-retinal, which is followed by a signal cascade. This initially strictly physical behaviour is characteristic of sensors. The location at which the photon lands on the retina determines in most cases where the signal will be transferred to in the primary visual cortex of the occipital lobe.21 The cue is transmitted over a neural pathway, and
eventually coded messages are involved to communicate with the occipital lobe. Why do we make this claim? There are approximately 260 million photoreceptors on the human retina. The initial signal gets transmitted a short distance, but these signals are subsequently distributed among only 2 million ganglion cells. This compression of information suggests that higher-level visual centres should be efficient processors to recover the details of the visual world.22 The signals originating from the retina are processed by specialized neurons which perform distributed processing to determine object attributes such as colour, location, and movement. Low-level algorithms are available, able to identify edges and corners.23,24 Somehow the parts need to be combined into a coherent whole, taking context into account. The underlying language is not yet known, but rules are beginning to be identified, such as the use of AND operators.25
Coded messages could also precede and activate a specific sensor. And sometimes activation of a sensor can be supplemented with other contextual inputs which are subsequently coded into a message. To illustrate, a biochemical can dock onto a receptor (a sensor!) on a cell's outer membrane, leading to a complex cascade of internal processes, culminating in the regulation of several genes. The resulting process is part of a cellular language, the details of which are not fully elucidated.26 In this case, the signal from a sensor contributes input to a coded message.
The following example shows how sensors could be integrated into a CIS. Suppose four departments {A,B,C,D} at a university participate in races which occur hourly. During even-numbered hours men race; during odd-numbered hours the women do. A scoreboard is divided into eight portions, representing the four departments and the gender. Each time a sensor on one of the eight squares is activated, the value displayed increases by one. This could be implemented in a mechanical, strictly cause-effect manner. The sensors are identical and so are the cues received. So far there is no alphabet of symbols or syntax. Therefore, a coded message was not received at the scoreboard, although something useful did result. Nevertheless, we'll show that a coded message could precede or follow the work of the sensor-based equipment.
Figure 2. Coded messages can precede or follow the use of sensors. The winners from four departments {A,B,C,D} could be communicated by an initial message, e.g. B C C B D A. The Decoder determines from the time (even or odd hours) whether each symbol received represents a men's or women's race. Both facts permit activating one of eight boxes on a scoreboard. The eight sensors can transmit a signal elsewhere, and at the end of the transmission a new codeword unique to each sensor is generated. The new code, e.g. 0111011001, can then be transmitted or stored.
Let us assume the winning department for each hour is communicated by a
judge, using a single symbol from the quaternary alphabet {A,B,C,D}, which is transmitted towards the scoreboard (figure 2).27 The Decoder is also endowed with an internal clock, thereby permitting the winner's gender to be identified. Now four departments × two genders, or eight outcomes, can be communicated to one of the eight portions of the scoreboard,28 although each symbol alone can only provide two bits of data. The winner can be communicated by transmitting an electric signal through the relevant wire to the correct one out of eight sensors on the scoreboard (figure 2).29
Suppose the winning department and gender are to be communicated to another location afterward. The back end of each of the eight boxes in the scoreboard first transmits a signal (not a message) along a cable. A coded message, unique to each original sensor, is then produced by encoders (the circles preceding the triangle in figure 2) using a new binary code, with codewords such as (0010), (1100), or (0111), unique to each wire. The new coded message identifies the same facts as the original quaternary one {A,B,C,D}, supplemented by the winner's gender, and this message can now be transmitted far away or stored somewhere for future retrieval. Note how the specific assignment of A, B, C, or D to either of two out of eight boxes was arbitrary, and so was the assignment of specific binary codewords to each sensor. The codes are independent of the physical infrastructure, as must always be true of informative codes.
I believe this simple example illustrates a general principle in cellular systems. Methylation at specific locations on DNA or phosphorylation of portions of proteins are simple signals which get supplemented with other details and converted into coded messages.
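A minimal sketch (Python) of the scoreboard example; the particular code assignments are, as stressed above, arbitrary choices made purely for illustration:

def decode_winner(symbol, hour):
    # combine the quaternary symbol {A,B,C,D} with the Decoder's clock (even/odd hour)
    # to select one of the eight scoreboard boxes
    dept = "ABCD".index(symbol)                      # 2 bits supplied by the message
    gender = "men" if hour % 2 == 0 else "women"     # 1 extra bit supplied by the clock
    return dept, gender

# a new, arbitrary 3-bit codeword per scoreboard box, for onward transmission or storage
codewords = {(d, g): format(d * 2 + (0 if g == "men" else 1), "03b")
             for d in range(4) for g in ("men", "women")}

box = decode_winner("C", hour=14)                    # symbol 'C' received during an even hour
print(box, codewords[box])                           # (2, 'men') '100'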
Senders and receivers in CIS theory
The CIS model focuses on empirical measurements. A series of refining processes is observed, at least one of which results from receiving coded messages. Observations 1–3 above are illustrated in figure 3. Notice that after processing the message, additional refinements can occur, represented by the ever-smaller contours in figure 3.

Figure 3. Coded Information Systems sequentially refine behaviour through a series of processes. At least one process is guided by coded instructions. Each goal-directing refinement step could be influenced through coded messages, sensors, physical hardware, or pre-existing resources such as data or logic-processing algorithms.
The emphasis of the CIS approach is on observing the modified range of behaviour of the target system, unlike Shannon's theory, which analyzes the statistical features of messages. The effects caused by other sequential refinement components, which can precede or follow receipt of the message, are also evaluated based on resulting consequences. This will be elaborated on in part 4 of this series.
Apropos quantifying information, Shannon's model is unsuitable for evaluating prescriptive instructions. Suppose a robot is to extract trees from a forest. An algorithmic message is sent, indicating how to find the largest tree within 50 metres of the Receiver, step by step. Statistical analysis of the series of 0s and 1s transmitted would be of little value, but the approach of CIS is to measure the resulting outcome empirically. It is the contribution to producing the correct outcome, compared to the (theoretical) reference state, that defines improvement, measured in bits.
The intention thus far is to introduce a more nuanced manner to discuss and measure information. The range of behaviour, weighted by observed probability, is compared for the initial and a refined state, for each contour in figure 3. There are many ways these improvements can be engineered, using software and hardware. Intelligent intervention, what some call 'smuggling information' into a system, can now easily be taken into account. For example, any artificial guidance to select a genetic or other algorithm to attain a specific outcome is an input which improves over the preceding, unguided state.
The precise, regulated designs used in biology and technology can be understood and quantified with this simple CIS approach. The details themselves, like gene expression or metabolic regulation,30 are often exquisitely sophisticated, but are in a sense only details which can be understood by drilling down from the high-level concepts of the CIS model. We will defer a description of the many designs found in nature, the purpose of which is to ensure the right outcomes in a CIS.31
CISs are created to organize matter and energy in a precise manner at the correct time and location, a very dynamic challenge which requires sophisticated components. These integrated systems can typically be reused many times. The variety of unsuitable parts, which includes incorrect coded messages, greatly outweighs the functionally acceptable ones.
The motivation behind this analysis is to force researchers to consider everything involved in permitting a message-processing system, such as a cell, to work. One of Truman's harshest critiques32 of the Avida setup and claims is that virtually everything necessary for the simulation to work, such as physical replication of the electronic organisms, the energy source, physical transfer of data to the appropriate logic-processing locations, and so on, was provided by machines already made available. These made decisive contributions to ensuring the desired outcomes. In nature all these components are coded for on DNA, and therefore subject to the ravages of random mutations. In Avida, mutations cannot destroy nor disrupt most of the fundamental system components. Virtually everything relevant to information was overlooked in the discussions. Forcing the participants to discuss the complete CIS should have prevented such foolishness.
One final notion in the CIS model is to distinguish between two kinds of receivers: mechanical receivers, which respond deterministically to the message's instructions; and autonomously intelligent receivers, who first evaluate and decide how to respond. Between these extremes lies a range of intermediate possibilities, including artificial intelligence programs designed to incorporate various forms of reasoning, and systems able to query for additional relevant details from environmental sources.
Part 4 will introduce the fundamental theorems associated with the CIS model, and show that this framework incorporates the insights from Shannon's theory, Gitt's model, Dembski's contributions, and other schemes. But consistent use of the CIS notions does lead to some conclusions different from those proposed by other frameworks.
Conclusion
The literature attempting to describe information is very broad. Information is generally accepted to be non-material, and many attributes are assigned to it. But it seems that people are generally referring to a system which contains physical components, and not to a single entity. Analyzing the components of a coded information system, such as coded messages, signals, and physical hardware, separately solves several conceptual difficulties. And as will be further elaborated on in part 4, the effects produced by a CIS as a whole offer a means to quantify what is accomplished by portions of the CIS, or by the complete CIS.
Information Theory – part 4: fundamental theorems of Coded Information Systems Theory
by Royal Truman
In parts 1 and 2 of this series the work of various information theoreticians was outlined, and reasons were identified for needing to ask the same questions in a different manner. In Part 3 we saw that information often refers to many valid ideas, but the statements reflect that we are not thinking of a single entity; rather, we are thinking of a system of discrete parts which produce an intended outcome by using different kinds of resources. We introduced in Part 3 the model for a new approach, i.e. that we are dealing with Coded Information Systems (CIS). Here in Part 4 the fundamental theorems of CIS Theory are presented, and we show that novel conclusions are reached.
In Part 3 of this series1 we emphasized that the word information, although singular, often refers to separate entities. This led to the notion that we are often describing a system, parts of which involve coded messages.

Definition. A Coded Information System (CIS) consists of linked tools and machines designed to refine outcomes to attain a
specific goal. A coded message plays a prominent role between at least two members of this linked series.
Theorems to explain CISs
A series of theorems is presented next, to clarify what a Coded Information System (CIS) is. These are based on observation and analysis of all the coded information systems known to us.
Theorem 1. A CIS is used to organize matter and energy to satisfy an intended goal. All components of the system which
guide towards the final outcome, including timing and location, are part of a CIS.
The resulting organization of portions of the material world reflects intended goals. All the components involved in a series of refinements to attain the final state are part of the CIS, and their effect must be quantitatively measurable, at least in principle.2
Theorem 2. A CIS can be used to achieve a mental goal.
In the absence of wilful input, the organization of matter can be explained by deterministic laws of nature and statistical principles of randomness. Mental processes, however, are not controlled deterministically or by randomness, and include: making choices; seeking to understand; and developing a strategy.
Suppose you are learning German, and are reflecting on what Unsinn might mean. The intention to translate is surely not deterministic, nor explained by randomness. Perhaps the intention is stored temporarily (physically?) in the brain, with which an immaterial 'you' interacts almost instantly. You know what you wish to do and can easily communicate this to others. This intention is converted somehow into a physical search through the data stored in your neurons. This requires very special mental equipment, since a multitude of kinds of searches is possible: for a discrete telephone number; for how a face looks; for a melody. The list is nearly endless. In this case we're searching for a concept which we believe reflects the meaning of Unsinn. The concept we seek to translate must be encoded in some manner, and the searches directed efficiently. Suitable data must somehow be extracted from the neurons, requiring further mental machinery, and the results must be encoded and transferred somewhere for the mind to evaluate. Another tool then retrieves a candidate English word, the associations of which get compared with those of Unsinn. It is absurd to argue that neurotransmitter concentrations or electrical signals are being compared across billions of neurons. The logical processing must involve some kind of compression and a high-performance language. Eventually the mind decides whether a potential translation of Unsinn, like 'nonsense', is correct or not.
All the resources involved in mental processes like these are part of a CIS, and some are not physical. Various resources narrow the range of possibilities, including when the translation is to occur and where.
Theorem 3. Coded messages do not arise from the properties of the physical carrier medium.
An implication is that the symbols used by the coding alphabet can appear in any order and combination, whether the resulting messages serve a purpose or not. Ideally the carrier must not place any constraints on the potential messages which could be created.3
By our definition, a CIS must use a coded message at some point. Otherwise we'll treat the phenomenon as a tool. Some coded messages provide step-by-step instructions on how to accomplish something. Examples include computer programs and algorithms. These messages must be supplemented with hardware able to carry out the instructions. Another class of coded messages only specifies outcomes or choices, without any instructions on how to attain them. A communication convention must be established a priori. The message 01101 might mean 'bring me menu number thirty-one', or 'pitch a curve ball'. Combinations of these two extremes are possible, such as when a computer program invokes a subroutine (or method or function) using parameter values.
Coded messages permit intended outcomes to be communicated between flexible tools and machines which have been designed to solve a class of problems. This is an efficient manner of using resources to solve problems. The alternative would be to build assembly lines of machines to solve each individual problem and then to communicate which ensemble is required for each problem. Instead, to illustrate, billions of dollars of complex logistics components of an overnight delivery service can be put to use flexibly by merely associating a coded delivery address with the object to be transported.
The use of coded messages characterizes living organisms: to control their development and response to novel situations, and to interact with each other. Humans devise coding conventions with so little effort that few realize what an extraordinary feature this is. Being so fundamental to a wide class of life-related observations, the presence of a coded message is a requirement for a system to be considered a CIS.
Theorem 4. The coded message does not provide the energy which causes the intended changes.
The outcomes produced by messages must not be caused only by the carrier medium, to distinguish them from mere mechanical effects. The symbols in a coding alphabet could indeed require different amounts of energy to be generated or processed. For example, in the alphabet {0, 1} the 0 could be communicated by lifting one arm, and the 1 by lifting both arms. But the energy to produce the symbols must not cause the resulting changes upon processing the message (e.g. by providing different levels of force, or momentum in a specific direction, caused directly by the symbol). Theorem 4 draws attention to the need for independent components to be engineered for a CIS to work. Energy in the right form, time, and place must work with the intent expressed by the message.
Theorem 5. Outcomes improved beyond what coded messages alone convey imply additional refining components are
involved. The additional contributions can be expressed quantitatively.
Figure 1. A jet interceptor is instructed to fly off in one of four possible directions. This provides log2(4) = 2 bits of information.
An example in Part 3 of this series1 revealed this principle. The coded message communicated only four possible choices (two bits of information), but an internal clock revealed in addition whether a race had been carried out during odd or even hours, thereby indicating the gender of the winner. Therefore, the correct one out of eight choices could be determined with the help of the clock.
Example 1. Assume the jets on an aircraft carrier are only told to fly off in one of four quadrants, figure 1. Suppose careful observation shows that the pilots begin search manoeuvres only once beyond a certain distance from the ship (therefore, the central square in figure 1 was excluded). Although log2(4) = 2 bits of information can only communicate the correct quadrant, we observe that the target is usually identified, although located within a small portion of a quadrant. How is this possible? Clearly additional refining components were available. Repeated observation would allow the scientist to identify at least three sequentially refining components (without knowing anything a priori about the details of the coded message): a) a coded message directs into one of four directions; b) searching begins some distance from the carrier (there is prior knowledge that an alarm will occur when the enemy is still far away); c) there are specific kinds of targets to search for, plus logic and special equipment to perform the searches.
Example 2. Protein-coding portions of DNA are the messages which communicate the order in which each of twenty possible amino acids is linked. But notice that almost always only l-form amino acids appear in the proteins. The choice of isomer was not communicated by the mRNA messages, but optically pure amino acids were independently manufactured as feedstock. We also observe that undesired chemical reactions amino acids normally undergo are prevented when forming the proteins. For example, the side-chains of amino acids don't react together, nor do short five- to seven-membered chains form.4 This is true because outcome-guiding equipment was deliberately included in the design.
Figure 2. Trade-offs between message complexity and engineering
design. A) The message communicates only the final destination.
Resources receiving the message must interpret it and have the
means to act upon the communicated intention. B) The message
communicates step-wise what is to be done. U = Up; L = Left one
unit. Now the messages are more complex but the equipment can
be simplified.
Other illustrations (examples A1–A6) are offered on-line. Figure 2 is explained in example A4 on-line.
Theorem 6. Receipt of a message often communicates more than
just the coded content.
The bits of information provided in a message give an incomplete picture from the point of view of resulting outcomes. Although choices between alternatives can be communicated, the changes which result occur within narrow time and location ranges. The equipment receiving the message could be used more than once, and the correct Receiver could be targeted, at the correct time and location.
Example 3. A zip code can communicate where to deliver a package, but when it is generated, and on which object, makes a difference! The intended goal must be taken into account.
Theorem 7. Refinement components are integrated into a sequence to produce the intended outcome.
Refinement components, designed to refine towards a goal, include:
received coded messages
engineered constraints or guidance
external cues
preloaded algorithms or reasoning resources.
The state of affairs achieved from one component of the CIS becomes the starting point for additional improvement by other
components.
Theorem 8. The quantitative contribution of Refining Components can be calculated by comparing ranges of behaviour
before and after the goal-directing activity.
A fundamental notion in CIS Theory is to identify the contribution provided by each discrete component in the processing
chain, by identifying the range of behaviour before the refinement and afterward. The theory applies to any kind of behaviour
in time and space. And the improvement can be due to receipt of a coded message and/or to other factors. The range of possible outcomes will be represented by a discrete number n, the entropy H, or a probability distribution function. We will use some mathematical ideas developed by Shannon, as discussed in Part 3,1 to define the Refinement Improvement5 as $H_{before} - H_{after}$, which is measured in bits. This permits the improvements by each member of the chain, expressed in bits, to be additive.
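A minimal sketch (Python) of this bookkeeping, using invented outcome distributions for a single refining component:

import math

def entropy(probs):
    # Shannon entropy in bits of a discrete outcome distribution
    return -sum(p * math.log2(p) for p in probs if p > 0)

before = [1/8] * 8                      # before the refining step: 8 equally likely outcomes
after  = [0.5, 0.5, 0, 0, 0, 0, 0, 0]   # after: behaviour concentrated on two outcomes

improvement = entropy(before) - entropy(after)
print(improvement, "bits contributed by this component")   # 3.0 - 1.0 = 2.0 bits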
Theorem 9. Quantifying the contribution from a received message may require analysis of a single final state or of
intermediate ones along the way. It is necessary to evaluate what the intention is.
Figure 3. Convoluted messages could reveal incompetence or deliberate intention. Upon processing the message, the vehicle here ends at the same place as shown in figure 2, although the message and trajectory are now more complex. If the intention was only to deliver to a specific location, then a message selecting one out of 64 possibilities, or 6 bits of information, would suffice. But if the trajectory had a deliberate purpose, the outcome would have to be compared to the relevant reference alternatives, taking each subgoal into account.
Example 4. In the CIS methodology we focus on the behaviour which results from refining factors, and not on the statistical details of coded messages (which is Shannon's methodology). Unlike Example A4 on-line, suppose the intention of a message was to provide an itinerary, figure 3. Comparing one final destination with all possible outcomes (1/64) would be wrong if the intention was a milk run,6 like a parcel service delivering packages, or a path to traverse a minefield, or to avoid incoming missiles. Then the result of each successful decision would need to be compared to the relevant reference state and not to the one-time final destination. If the intention was a one-time delivery of a package to a final destination, then the space of random possibilities around the starting point would be the message-less state and would define the possible outcomes. Alternatively, if the intention was to avoid incoming missiles again and again, the random behaviour around each decision point would define the reference state. Note that in these kinds of analysis another resource is at play. In the random reference state, the vehicle has no reason to move at all. Independent of the message's content, its receipt communicates that something is to be done.7
Often a message communicates more than necessary. This could be due to incompetence, or to refine or correct instructions already sent. It is also possible that other resources (logic, stored data) available to the Receiver indicate that messages, or parts of them, are to be discarded. For example, the message to print a page with various colours could be corrected by software which is aware that a colour cartridge is missing and only black-and-white output can be generated.
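A rough numerical sketch (Python) of the message-size comparison behind figures 2 and 3; the 8 × 8 grid matches the 64 possibilities mentioned above, while the sample step-wise message is invented for illustration:

import math

grid = 8
destination_bits = math.log2(grid * grid)     # 6 bits: one cell out of 64 (figure 2A style message)
path = "UULUULLU"                             # illustrative step-wise message, symbols U/L as in figure 2B
stepwise_bits = len(path) * math.log2(2)      # 1 bit per binary move symbol
print(destination_bits, stepwise_bits)        # 6.0 vs 8.0: a longer message, but simpler receiving equipment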
Theorem 10. Part of a CIS may permit behaviour to occur which otherwise wouldnt be observed. To quantify the
improvement provided by one of the CIS resources, a realistic hypothetical reference system behaviour needs to be
defined.
Example 5. DNA encodes the order in which amino acids link to form proteins. For this to occur, the carboxyl group at the end of one amino acid must react with the amino group at the end of another amino acid, to form peptide bonds. But amino acids in free nature or in a laboratory undergo a variety of other chemical reactions. As an example, amino and carboxyl groups that are present on side-chains can also react. In addition, the carboxyl and amino ends of a growing polypeptide will react in an intra-molecular fashion, creating cyclic rings; and other reactions also occur. In cells, clever design prevents the wrong reactions from occurring. Computer simulations could be built to estimate the proportion of protein-like chains which amino acids would form, compared to all possible reactions, based on racemic mixtures of d- and l-forms.8
Example 6. In water, peptides hydrolyze instead of forming long chains. For even a very short protein with 100 peptide bonds (101 amino acids), the equilibrium concentration would be about 3 × 10⁻²¹⁶.9 So how can proteins form at all in cells? It is because water is excluded from the interior of the ribosomes, and energy is provided by ATP to drive the polymerization reaction forward.
These examples show it is impractical (and unnecessary) to always perform empirical studies on how nature would react in the absence of a CIS. But a reasonable estimate is still useful, and the probabilities can easily be converted into bits of information.10 It is usually sufficient to determine when a probability is so minuscule that nobody will ever see the event unless something, like intelligence, provides a new pathway for it to occur.
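Converting such a probability into bits, as footnote 10 suggests, is a one-line calculation (Python); the value 3 × 10⁻²¹⁶ is the one quoted in Example 6:

import math
p = 3e-216                    # equilibrium concentration quoted in Example 6
bits = -math.log2(p)          # a probability p corresponds to -log2(p) bits
print(round(bits))            # roughly 716 bits of improvement would be needed to make this outcome likely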
Theorem 11. Individual Refinement Components can contain multiple improvement steps.
We saw in Examples 5 and 6 that specialized machinery like ribosomes can make multiple contributions. One can combine
contributions to simplify the analysis if one wishes.
Theorem 12. Accumulated goal refinements, defined by CIS in bits, reveal far more about what is accomplished than
Shannon Information Theory implies.
Molecular machines are built to solve thousands of kinds of problems such as catalyzing metabolic reactions, transporting
bio-chemicals, and replicating chromosomes. The quantitative CIS theory seeks to explain how so much more is
accomplished than is implied by the statistical studies of coded messages, such as of gene sequences.
Example 7. In the brain there are special kinds of cells called neurons, organized into specialized signal-processing subsystems.11 These come in many sizes and shapes, with very different designs and functions. There are about 10^11 neurons in the human brain and about 10^14 synapses,12 which must be placed at the right locations and interconnected correctly. As an example, the Purkinje cells of the cerebellar cortex have about 200,000 synaptic contacts each.13 The first question is, where do the instructions come from to physically build such complex brains? And second, where does the input come from which permits brains to make thousands of multimedia decisions each second?
These requirements cannot be explained by the bits of Shannon information implied on the chromosomes of the fertilized egg. The coded information is embedded in a context which provides additional refinements, and the neurons are refined by their ability to learn.
Theorem 13. There can be trade-offs in how a CIS can be designed. The contributions towards the goal, expressed in bits,
can be distributed between the message and the hardware equipment.
Examples A4 and A5 online illustrate this principle.
Example 8. Suppose 20 copies each of five books are to be printed out. One solution would be for the message to transmit the relevant text each time for every book, which flexible printing equipment must then process. Another solution would be to build five machines, each of which mechanically prints out a single book. Now one only needs to send a signal to the appropriate machine, 20 times, communicating when to start printing. The final outcome is the same, but the effort, expressed in bits according to the resulting outcome, is distributed over different refining components.
Example 9. Printers can often handle papers of different standard formats. The content to be printed, plus instructions on
how to manipulate all the physical parts to position the paper and ink in the right position, could be part of a huge coded
message. A better design would be to engineer the printer to always position the paper for each standard size in the same
manner, so that the message only needs to communicate the content and paper size.
Example 10. Many business presentations benefit from the use of colour. The background colour and display for PowerPoint
presentations are communicated along with the content to be presented. This is a better design than to send content to a
large number of differently designed printers, each filled with paper prepared with a specific kind of coloured background.
Theorem 14. The hardware components found in an integrated CIS do not arise from the properties of the physical carrier
medium.
For example, many kinds of media can be used to store the same computer data. These materials could be made into memory sticks, DVDs, hard disks, archival systems, etc. The origin of these engineered parts, as also of biological parts, is not a simple extrapolation of atomic properties. They have to be wilfully organized.
Theorem 15. The contribution towards a goal provided by a particular refinement component cannot be more than the
improvement observed, expressed in bits.
This is related to Theorem 5.
This simply means that guiding towards a goal cannot come for free. If there are eight equally likely outcomes, communicating the correct one each time cannot be done with fewer than three bits of coded information.14 These must come from somewhere. This theorem is intuitively obvious but woefully neglected in the evolutionary literature. Bartlett15 recognized correctly that rapid change can, and does, occur in nature if the guiding inputs have already been made available and only need to be activated. The notion of preloading information to ensure future outcomes is also common among Intelligent Design thinkers.
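A tiny sketch (Python) of the counting fact behind this theorem: k bits of coded message can distinguish at most 2^k outcomes, so eight equally likely outcomes require at least three bits:

from itertools import product
for k in (2, 3):
    messages = ["".join(bits) for bits in product("01", repeat=k)]
    print(k, "bits ->", len(messages), "distinguishable outcomes:", messages)
# 2 bits -> 4 outcomes; 3 bits -> 8 outcomes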
Scientists realize intuitively that purposeful behaviour implies that guidance is coming from somewhere. The fact that the
same kinds of proteins always ended up in the same place in cells led researchers to look for special signals guiding this
process. And the rapid response of whole populations in short time periods to environmental changes led to the search for,
and discovery of, epigenetics.16 What is overlooked is the fundamental insight that planning and ensuring desired outcomes
are characteristics of intelligent agency, and that the methods used to store intent are not found anywhere in inanimate
nature.
Theorem 16. There is no direct relationship between goal refinement in bits and importance of the outcome.
Thumbs up or down decided life or death of a Roman gladiator. One mere bit of information, two possible outcomes, but with
a dramatic impact!
Theorem 17. Bits in CIS theory are not a direct indicator of difficulty in achieving the goal.
It is true that there is an inverse relationship between the number of bits in an outcome and the likelihood that the effect could arise by chance. This is especially clear in CIS theory, where outcomes are compared to what would happen by natural processes. But one must recall (Theorem 13) that there are trade-offs between what the message and the hardware could provide. A simple message to an aircraft carrier flotilla to turn left or turn right represents only one bit of information, because the rest of the necessary details are handled by other parts of the CIS. These one-bit coded messages have a huge lever effect. If the designs of two CISs have identical final outcomes, then comparable numbers of bits should be calculated. But focusing on the number of bits provided by intermediate CIS services can be misleading.
Theorem 18. Wilful, intelligent decision-making occurs during the processing of a CIS; or decision-making has been preloaded for it to occur autonomously.
In some CISs, parts can respond mechanically, whereas in other CIS designs, intelligent decision-making is involved. In the
mechanical version, intelligence is used to ensure intended outcomes. Complex algorithms can be devised to free active intelligence from having to be present during future execution of a CIS. Examples are techniques used in artificial
intelligence. In addition, sensors and queries to the environment can be automated to ensure reliability of the automated
portions of a CIS.
Theorem 19. The performance of a CIS will not improve over time in the absence of intelligently provided guidance.
Refinement in the outcome or adjustment to new circumstances requires preloaded facilities in some part of the CIS. One
must not overlook, however, that improvement is possible through an algorithmic, iterative process of selection. This occurs
for B-cell maturation17 and there are many examples in numerical analysis, like Runge-Kutta methods.18 Natural selection
could conceivably be an example, if a small number of organisms, like bacteria, were initially created with the intent of
diversifying and specializing. But for this strategy to work, outcomes must be fed back into the causal instructions and an
effective method already built in to move in a promising new direction.
Theorem 20. Natural processes are not capable of creating a CIS. Only sentient, intelligent beings able to identify desired
goals can create a CIS.
The justification for this is two-fold. First, we notice how easily intelligent beings like humans design a CIS, whereas nothing
resembling a CIS occurs in the abiotic universe. Second, by examining in depth how outcomes are guided, we notice that
the resulting bits of improvement are huge. These are calculated by comparing to a reference state which lacks the CIS,
which ultimately means comparison to random processes, or to those guided by natural law.
Every bit represents a factor-of-two change in probability when two scenarios are compared: that the initial state migrated into the new one via the natural processes already operating, versus via deliberate intervention.
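As a sketch of this bookkeeping (ours, not part of the original argument), if a guided system reliably reaches an outcome that unguided processes would reach only with probability p, the guidance is credited with -log2(p) bits; every halving of p adds one bit:

import math

# Bits credited to guidance when the unguided probability of the outcome is p.
for p in (0.5, 0.25, 1e-6, 1e-30):   # illustrative probabilities, not taken from the paper
    print(f"chance probability {p:.0e} -> {-math.log2(p):6.1f} bits")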
Results from replacing information by CIS
Given the many meanings of information, asking where it comes from is too vague and ignores the full picture the Coded
Information System approach offers. The issues already introduced in the literature 19,20,1 about information are all subsets of
a CIS. For instance, one can always ask what the source of a coding convention is, and coded messages are part of a CIS.
Or what guided a particular message to the specific Receiver. CIS goes beyond what Shannon looked into. For example, the
decision when to send the message is also unique to the CIS approach.
The underlying notions presented in this paper lead to different answers than generally offered about information.
Gitt and others say that multiple copies of an identical message do not provide more information. Once how to accomplish something has been communicated, extra copies are not considered to offer anything additional. This has been a criticism of Shannon's approach, where if two communication channels transmit the same message, twice as many bits of information are claimed.
However, the effect of a CIS is to reorganize matter and energy for some purpose. Therefore, if a coded message is used repeatedly in a CIS at different times and locations, then more matter and energy have been organized. The effect, in bits, is greater from the universe's point of view. This means that if there are identical copies of a bacterium, the effects of each of these CISs would be additive. Is this not reasonable? We prefer this view rather than to ask where a gene comes
from, and then report the bits from only one copy. Furthermore, reuse of a CIS (including after genetic reproduction) requires
the existence of other complex components, which automatically get neglected if one copy and one event only are reported.
The total effect of multiple copies and reuse gives credit naturally to the additional components which make this possible.
Cellular machines can process similar metabolites, using the exact same genes. But in one microenvironment a nutrient might be present but not in the other. Therefore, the measurable effect of two separate but identical CISs at different times and places can vary!
The earth contains about 6 × 10^27 g of matter.21 And 12 g of the isotope carbon-12 contains 6 × 10^23 atoms. The number of entities on Earth is very large, whether we mean atoms or molecules, on the order of roughly 10^50.22 Potentially any entity on Earth could be associated with any other: as part of a chemical reaction; as part of a new object; or to modify the properties of other entities. Merely moving an entity during a second changes about (10^50)^2 = 10^100
pairwise distance relationships, and sometimes multiple other properties besides only their spatial relationships. The
organization of all objects on Earth related to living organisms involves a vast number of relationships, which places great demands on the organizing effects of the available CISs.
Organizing nature on Earth, with its complex ecosystems, means rearranging all this matter and energy in the face of the unimaginably large number of possible distributions. The CIS model credits contribution to this effort to the multiple copies and reuses of the message-containing information systems.
Gitt considers his theorems laws of nature. For example, Scientific Law of Information (SLI) 3C states, It is impossible to generate UI without an intelligent sender.23 The justification seems to be that the claimed SLIs should be considered laws until disproved. This
seems like a weak argument, since there are many statements which reflect all known experience so far and are difficult to
disprove. All facts to date support a claim such as, It is impossible to build a manned station on another solar system, but
is this a law of nature? We certainly agree that UI (Universal Information) cannot arise by natural processes. But by UI we
mean the whole package, which is a CIS. Based on known science, we are persuaded that bringing together all the
components needed by a CIS, at the right time and location, including a coding convention, is never going to happen. But
we believe the CIS justification is sounder, since quantitative and measurable criteria underlie this belief.
A new view of nature
CISs can be embedded hierarchically. A low-level CIS could synthesize an amino acid, which is embedded in a higher CIS
to produce proteins. The system analysis would now include all factors involved in reproducing DNA; 24 decoding DNA;25
regulating location;26 timing;27 and number28 of enzymes (mostly proteins); and formation of the tertiary 29 and quaternary30
protein structures, including bonding to other bio-chemicals. An example of a higher-level CIS, with embedded subsystems,
would be a multi-cellular organism and include all processes to develop into the final, mature state. An example of a still
higher order CIS, with a hierarchy of embedded sub-CISs would be an ecological system, consisting of a variety interacting
species.In Part 3 we provided a figure to help visualize how a series of embedded, refining contributors narrow the range of
behaviour, using a combination of a) coded messages; b) signals; b) preloaded logic processing and knowledge; and d)
engineered components. This is, of course, merely conceptual, and leaves out the exact details used. These four generic
classes of refining contributions can be re-invoked to understand the deeper levels of refinement, level by level.
This analysis offers a new way of looking at the world we live in. Vast quantities of matter and energy have been organized
within hierarchies of dynamic CISs, leading to a cascade of intermediate goals. And our world itself is embedded in higher
CISs as part of ultimate goals.
Conclusion
The CIS model considers the quantitative contribution of all goal-refining components linked by the system. Instead of
asking where information comes from in nature, we propose to ask where Coded Information Systems come from, which
ensures a more complete coverage of all the issues which need to be addressed.
The twenty theorems are based on observation and serve to clarify the key ideas of CIS theory.
Additional examples of CIS are discussed in the on-line appendix to illustrate these principles.31

Genetic code optimisation: Part 1


by Royal Truman and Peter Borger
The genetic code as we find it in naturethe canonical codehas been shown to be highly optimal according to various
criteria. It is commonly believed the genetic code was optimised during the course of an evolutionary process (for various
purposes). We evaluate this claim and find it wanting. We identify difficulties related to the three families of explanations
found in the literature as to how the current 64 → 21 convention may have arisen through natural processes.
The order of amino acids in proteins is determined
by information coded on genes. There are over
1.51 × 10^84 possible1 genetic codes based on mapping 64 codons to 20 amino acids and a stop signal2 (i.e. 64 → 21). The origin of code-based
genetics is for evolutionists an utter mystery,3 since
this requires a large number of irreducibly complex
machines: ribosomes, RNA and DNA polymerases,
aminoacyl tRNA synthetases (aaRS), release
factors, etc. These machines consist for the most
part of proteins, which poses a paradox: dozens of
unrelated proteins are needed (plus several special
RNA polymers) to process the encoded
information. Without them the genetic code won't work, but generating such proteins requires that the code already be functional. This is one of many
examples of chicken-and-egg dilemmas faced by
materialists. Another is the need for a reliable
source of ATP for amino acids to polymerise to
proteins: without the necessary proteins and genes
already in place such ATP molecules won't be produced. In addition, any genetic replicator needs a reliable feedstock of nucleotides and amino acids, but several of the metabolic processes used by cells are interlinked. For example, until various amino acid biosynthetic networks are functional, the nucleotides can't be metabolised. These are some of the reasons we believe natural processes did not produce the genetic code step-wise. We hope to present a detailed analysis of the minimal components needed for a genetic code to work in a future paper, but this is not the topic we wish to address here. The literature is full of papers which
claim the universal code4 has evolved over time and is in some sense now far better than earlier, perhaps even near
optimal. We cannot address all the models and claims here, but we hope to present a few thoughts which we hope will show
that these claims are flights of fantasy. No real workable mechanism has yet been offered1,3 as to how a simpler genetic system could have increased dramatically in complexity and in robustness towards mutations. If a primitive replicator had gotten started, contra all chemical logic, would it be possible according to various evolutionary scenarios to refine the system to generate the 64 codons → 20 amino acids + stop signal convention used by the standard genetic code?
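The 1.51 × 10^84 figure quoted above can be checked directly: it is the number of ways of assigning 64 codons to 21 meanings (20 amino acids plus stop) such that every meaning is used at least once. The short Python check below is our own illustration; it counts these surjective assignments by inclusion-exclusion.

from math import comb

codons, meanings = 64, 21
# Number of surjective maps from 64 codons onto 21 meanings, by inclusion-exclusion.
total = sum((-1) ** k * comb(meanings, k) * (meanings - k) ** codons
            for k in range(meanings + 1))
print(f"{float(total):.3e}")   # prints ~1.510e+84, the figure quoted above

(The count is just below 21^64 ≈ 4 × 10^84 because every meaning must be used at least once.)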
Origin of any genetic code
Before an evolutionary process could optimise a code, a replicating lifeform must first exist with some kind of information
processing capabilities. Trevors and Abel published one of the most honest and illuminating papers 3 on the issues which
confront a naturalistic explanation for the origin of life. In particular the origin of an information storing and processing
system, able to guide the synthesis of proteins, is recognized as incomprehensible. In their own words, 'Thus far, no paper has provided a plausible mechanism for natural-process algorithm-writing.'5 Abel is well known for his attempts to find a
natural origin for the genetic code and naturalistic explanation of the origin of life. He and The Origin-of-Life Foundation, Inc.
have a standing offer of $1 million to anyone providing a plausible natural solution.6 In stark contrast to the straightforward honesty on offer in this paper3 are a large number of Origin-of-Life papers which appeal to no recognizable chemistry and offer no conceptually feasible path as to how to go from their vague notions to extant genetic systems.
There are three basic approaches7 used by materialists to explain the 64 → 21 mapping of the genetic code: (I) chemical/stereochemical theories,
(II) coevolution of biosynthetically related amino acid pathways and (III) evolution and optimisation by natural selection to
prevent errors. There is a logic to the order in which we present these three approaches. (I) is closest to the question of a
natural origin for a biological replicator. (II) already requires a large number of complex and integrated biochemical networks
to be in place. Attempts to explain the 64 → 21 code mapping at this level would clearly mean ignoring the question as to where all these molecular machines and genes came from. (III) Evolutionary hypotheses to explain the 64 → 21 mapping at
this level would require assuming all 20 amino acids are already present in a genetic code and that most genes already
code for highly optimised proteins.
(I) Chemical/stereochemical theories
All the suggestions in this area assume some kind of simple starting system, guided by natural chemical processes. These primitive systems then accumulated vast amounts of complexity and sophistication.
Attempts have been made to find direct chemical interactions between portions of RNA and amino acids.8 These are supposed to have led to the genetic code. Amino acids might bind preferentially to their cognate codons,9 anticodons,10 reversed codons,11 codon-anticodon double helices12 or other chemical structures.
After admitting that there is little evidence for selective binding of amino acids
to isolated codons or anticodons, Alberti13 proposed that chains of mRNA would interact with special tRNA chains, and short
peptides would attach specifically to these tRNAs. Being now brought close together, the short peptides would polymerise to
form proteins. A number of cofactors would stabilize the tRNA-mRNA interactions, eventually becoming ribosomes. Another
set of cofactors would decrease the number of amino acids needed to provide a specific interaction with the various tRNA,
which today is done by aaRSs.
Objections. None of the reports in this area reveal any kind of consistent association between codons and the amino acid
expected based on the genetic code.14 The wide variety of chemical systems intelligently conceived in the various scenarios cannot be justified under unguided natural conditions, and excessive freedom exists in the interpretation of such models, undermining the significance of any particular one.7 Therefore, it is often alleged15 that the original chemical interactions can no longer be
identified through the present coding assignments of the genetic code, but that such putative
interactions may have gotten the process started.16

Figure 1. How the genetic code works. Three specific nucleotides (the anticodon) on a tRNA interact with their cognate
codon on mRNA, thereby adding the correct amino acid to the growing protein chain. The sequence of nucleotides in each codon on the mRNA determines which tRNA will attach, and this communicates the order of amino acids which are to constitute a protein. Each tRNA is charged by an aminoacyl tRNA synthetase, using ATP (not shown).
Amino acids created under abiotic conditions are assumed to have been introduced first in a primitive code.17 But, since all but glycine come in D and L mirror-image forms,18 such a source of amino acids would lead to chaos. In addition, the 3 chiral C atoms in ribose in RNA would produce even more stereoisomers under natural conditions. Furthermore, claiming17,19 that the amino acids found in the Miller experiment would have been the first to be used by a genetic code makes a dope20 out of the reader who accepts this, since geologists today believe the gases used in such experiments have no relevance to a putative early atmosphere.18,21,22 Subsequent experiments with more reasonable gas mixtures generated very little organic material and virtually no amino acids at all.18,23,24
At this time, the order in which amino acids are to polymerise is not communicated by the
genetic code through direct amino acid interactions with DNA or RNA polymers. Transfer RNA is used to map codons to their
specific amino acids. Three specific nucleotides (the anticodon) are part of the tRNA molecules, and these interact
transiently with their cognate codons on mRNA. In figure 1 we show how specific codon-anticodon interactions determine
which amino acid is coded for by a mRNA nucleotide triplet. The codon-anticodon interactions must be weak enough to
permit separation once no longer needed, but with sufficient specificity to prevent incorrect binding. But in the absence of
additional machinery such as ribosomes to help hold everything in place, the interactions between codons and the adaptor's anticodon would be too weak to be of any value. At a distant and physicochemically unrelated portion of the tRNA adaptor a specific amino acid must therefore be attached (with the consumption of a high-energy ATP molecule) (figure 1).
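The mapping the figure describes can be summarized in a few lines of Python (our own sketch; the tRNA/anticodon step is collapsed into a simple table lookup, and the example mRNA is invented for illustration):

# Standard codon table (UCAG ordering); '*' marks stop codons.
bases = "UCAG"
amino_acids = ("FFLLSSSSYY**CC*W" "LLLLPPPPHHQQRRRR"
               "IIIMTTTTNNKKSSRR" "VVVVAAAADDEEGGGG")
codon_table = {a + b + c: aa for aa, (a, b, c) in
               zip(amino_acids, ((x, y, z) for x in bases for y in bases for z in bases))}

def translate(mrna):
    # Read codons three bases at a time; stop at the first stop codon.
    peptide = []
    for i in range(0, len(mrna) - 2, 3):
        aa = codon_table[mrna[i:i + 3]]
        if aa == "*":
            break
        peptide.append(aa)
    return "".join(peptide)

print(translate("AUGUUUGGCUAA"))   # invented message: Met-Phe-Gly, then stop -> 'MFG'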
How is nature supposed to have gone from an initial system, involving a chemical or a physical interaction of amino
acid i (AAi) (where i represents version 1, 2, 3 …) with RNA tri-nucleotide i (codoni), to the current scheme based on adaptor i (adapi)? Two things must now occur simultaneously (see figure 1). One part of a given adaptor number i, adapi, must replace the original AAi/codoni interaction, and to a second part of adapi the same AAi must now be attached (figure
2). These cannot occur sequentially, as both kinds of bonds must occur simultaneously if the primitive code based on direct
interaction is to be retained. Since the spatial relationship with other amino acids is now very different, any putative chemical
reactions with other amino acids can no longer occur. This means all the amino acid to template interactions must be
replaced simultaneously! One cannot have a mixed strategy, since then only part of the putative original polypeptide could
form.

Figure 2. Evolving from direct amino acid-template interaction to an adaptor molecule. Amino acids are claimed37 to have originally interacted physically with specific triplet nucleotide sequences, forming the ancient basis of the genetic code. Subsequent insertion of an adaptor molecule, such as tRNA, requires anchoring one end of the adaptor at the original location of amino acid interaction, and that amino acid must now be covalently bonded at another portion of the adaptor. Note that no specific kind of interaction (such as formation of an ester bond between template and amino acid) needs to be claimed, as long as there is a strong preference for interaction between a specific amino acid and some unique nucleic acid sequence.
Figure 3. An adaptor molecule must satisfy various geometric constraints. Amino acids must be attached to adaptor molecules (e.g. tRNAi) in a manner which permits peptide bonds to form. As shown here the amino acids can't react together since the two amino acids are too far apart. In the absence of a complex molecular machine, such as a ribosome, it is inconceivable that single adaptor molecules alone could force the reacting amino acids into a suitable geometry to form the correct kind of chemical bond.
Figure 4. Adaptor molecules must fold reliably to bring the reacting amino acids and cognate codons into the correct geometry. (A) is the approximate shape of folded tRNA molecules. Base-pairs at strategic locations hold the various arms together, permitting recognition by the aaRS machinery, and reliable anticodon interaction with the cognate mRNA codons. (B) and (C) represent hypothetical RNA strands which do not fold consistently into reliable structures, or into shapes not suitable for the adaptor.
If the ancestral replicator functioned reliably without an adaptor, the new system using many specialized adaptor molecules must be at least as effective immediately, otherwise the former would out-populate the new evolutionary attempt.

This means that attachment of AAi to adapi must be highly reliable, as is the case with modern aminoacyl tRNA synthetases.
Among other implications, this requires a reliable source of the different adaptors i = 1, 2, 3 … (adapi) during the lifetime of this organism and during the subsequent generations. Specifically, all these adaptor sequences must be immediately metabolized consistently and in large amounts for the new coding scheme to function.
The adaptor molecules must satisfy
several structural requirements. The location where amino acid i is attached to its cognate tRNAi must be at an acceptable
distance and geometry to facilitate formation of the peptide bond (figure 3). Each kind of adaptor molecule must fold reliably
into a consistent three-dimensional structure which is able to bring the reacting amino acids and cognate codons into the
correct geometry with respect to each other (figure 4). In tRNAs this is accomplished by strategically located base pairing
and RNA strands of just the right length.
Even if two sets of tRNA-amino acid complexes were to be bonded simultaneously somewhere along the template, these won't form a peptide bond in the absence of the carefully crafted translation machinery. Unless carefully engineered, the adaptors would tangle together with themselves and with the template triplet nucleotides (figure 5). Even if these theoretical adaptors could hold two amino acids close enough to react, the endothermic peptide-forming reaction isn't going to occur spontaneously. Formation of a peptide bond in living organisms is driven by high-energy ester bonds between amino acids and tRNAs, with the help of aminoacyl tRNA synthetases. Theoretical adaptors which merely hold the reactants physically close together are not sufficient. Should on rare occasions a peptide
bond actually form, the resulting molecule would probably remain covalently bonded to one of the adaptors (figure 6)
afterwards. One of the design requirements of ribosomes is to move the mRNA along in a ratchet-like manner, detaching the
tRNA whose amino acid has already been used. For this purpose energy is provided by GTP, and a complex scheme is
used to remove the final polypeptide from the mRNA. This requirement has also been overlooked in the conceptual model
presented.

Figure 5. Unless carefully engineered, evolving adaptors would tangle together and with the template triplet nucleotides. RNA, DNA or other sugar template is not explicitly assumed, to permit other theoretical chemical proposals. HOOC-Xi-NH2 represent amino acids, where i = 1 to 20, and Xi = CHRi (Ri are the side chains).
Figure 6. Without a ribosome, dipeptides will rarely form. If a dipeptide should form, it would remain covalently bonded to one of the adaptors. RNA, DNA or other sugar template is not explicitly assumed, to permit other theoretical chemical proposals. HOOC-Xi-NH2 represent amino acids, where i = 1 to 20, and Xi = CHRi (Ri are the side chains).
If, in spite of the above observations, polypeptides were to start forming, intramolecular reactions, in which the carboxyl end portion of one amino acid bonds to the amino group of the other amino acid in a growing chain, would dominate (figure 7).
This is simply because they are close to each other and would probably react with themselves before other amino acids
show up to extend the chain length. The ribosome machinery is designed to prevent this from occurring.
Figure 7. Unless deliberately constrained, amino acids undergo intramolecular reactions. The carboxyl end portion of a growing peptide will almost always react with the amino group at the other end to form an intramolecular amide. n represents two or more amino acids. HOOC-Xi-NH2 represent amino acids, where i = 1 to 20, and Xi = CHRi (Ri are the side chains).
Furthermore, peptide bonds involving the side chains of amino acids can also form, leading to complex and biologically
worthless mixtures. For example, amino groups (-NHR) are present on the side chains of amino acids tryptophan, lysine,
histidine, arginine, asparagine and glutamine and can react with the carboxylic acid (-COOH) groups of other amino acids.
This is especially true if hot conditions are assumed 25 to permit peptide bonds to form. Conversely, some side chains also
have carboxylic acids (aspartate and glutamate), which can form amides with any amino group. The highly complex portions
of the ribosome machinery were designed to prevent such undesirable side reactions from occurring, by holding the
functional groups precisely in place to guide the peptide reactions, and by isolating the functional groups which are not
supposed to react together. This very problem is a real issue with automated peptide synthetic chemistries used today,
requiring complex side-chain blocking strategies in order to allow the correct peptide extension reactions.
Alberti, mentioned above,13 introduced a different scenario: the adaptor is part of the genetic apparatus from very early on.
Basically, one must assume that mRNAs, ribosomes, amino acids and tRNAs all came together long ago with a minimum of
complexity. Then evolution performed a series of unspecified steps approaching the miraculous, resulting in the genetic
code. The initial system somehow added a multitude of molecular tools and was relentlessly fine-tuned. Any other
evolutionary model based on similar premises would resemble closely in many details what he proposes. The necessary
subsequent stages must occur if these assumptions are used. Therefore, it is worthwhile to devote some thought as to
whether the various processes could reasonably occur naturally. Our comments
necessarily apply to other possible variants of the basic thesis.
Figure 8. Co-evolution of tRNA, mRNA and polypeptides is assumed to have led to the genetic code. Different peptides are assumed to be able to interact uniquely with a sequence-specific tRNA, which itself base-pairs at a specific portion of an mRNA. The coiled symbols are alpha-helix polypeptides. (A) Sequence-specific interactions between ancestral tRNAs and portions of peptides are assumed to have formed, and between these tRNAs and longer regions of mRNA. (B) Different sequence-specific tRNAs are assumed to attach to portions of mRNA, thereby bringing their attached amino acids close together. (C) A trans-esterification reaction between tRNA-bound peptides is assumed to have occurred in the ancestral genetic code. (D) Release of tRNA which no longer has an amino acid attached is shown, permitting further polymerization. (From Alberti13).
The basic notion is shown in figure 8. In practice we will show that virtually none of the necessary claims in such scenarios
would work. From start to end, chemical and physical realities are abused.
Nature does not produce stereochemically pure polypeptide and polyribonucleotide chains. Therefore, there is no way to
initiate a minimally functional proto-code. First, there is the problem of the source of optically pure 26 starting materials.
Second, in an aqueous solution, a maximum of 8–10 RNA-mers can polymerise27 and polypeptide chains would be even shorter, even after optimizing for temperature, pressure, pH, and concentration of amino acid, plus addition of CuCl2 and rapidly trapping the polypeptide in a cooling chamber.28,29 The reactants would be extremely dilute, since the thermodynamic direction would be to hydrolyse back to starting materials.
Alternative, non-aqueous environments, such as the side of a dry volcano, would be chemically unpromising. If optically
pure nucleotides and amino acids were present, under dry, hot reaction conditions, then larger molecules would form. But
the result would be gunk or tar, since a complex mixture of three-dimensional non-peptide bonds would form.30
The great majority of random chains of amino acids, even if optically pure, do not conveniently form complex secondary
structures such as helices, as assumed (figure 8). 13 It is certainly true that alpha-helices of specific extant proteins do
interact at precise portions of DNA; but this is neither coincidence nor a universal feature, and is caused by a precisely
tailored set of spatial and electrostatic relationships, designed to serve a regulatory function.
A large collection of mRNAs and tRNAs are needed at the same time and place. And these must provide or transmit the
information to specify protein sequences! Sections of mRNAs must have exact sequences, and the complementary tRNAs
to base-pair with them must already be available. Not only must the sequences be correct, their order with respect to each
other must also be correct. And there must be a large number of such mRNAs, since many different proteins are needed.
With a palette of only four nucleotides (nt) even a miniscule chain of 300 nucleotides offers 4^300, or about 4 × 10^180, alternatives
(ignoring all the structural isomers which could also form), the vast majority of which would be worthless. What natural
process then, could have organized or programmed the mRNAs, and created the necessary tRNAs?
This is a fatal flaw in such models. The proportion of random polypeptides based on the 20 amino acids which are able to fold reliably, offering the chance of producing a useful protein, is minuscule,31,32 on the order of one out of 10^50. To provide the
necessary information to generate one of the useful variants, something must organize the order of the bases (A, G, C and U) in the mRNAs. But nothing is available in nature which organizes the nucleotides into informationally meaningful sequences.
All the various peptides which need to be condensed together must be present. Where did these come from? Alberti writes,
Relatively short peptides (down at least to 17mers) recognize short specific sequences of double-stranded RNA or
DNA.33 The environment of the double strand chain offers far more useful physicochemical patterns to recognize than the
single strand tRNA in the model, and even then, this would represent about one correct sequence out of 10^22 (= 20^17). Where
did these peptides come from, and how was generation of the vast majority which are not desired avoided? Note that the
necessary peptides would be of different lengths, depending on what needs to be recognized on a specific tRNA.
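The combinatorial figures quoted in this and the preceding paragraphs can be reproduced with a few lines of Python (our own order-of-magnitude check; the pool size in the last line is an arbitrary assumption for illustration):

print(f"4^300 = {float(4 ** 300):.1e}")     # possible 300-nt RNA sequences (~4 x 10^180)
print(f"20^17 = {float(20 ** 17):.1e}")     # possible 17-residue peptides (~1.3 x 10^22)

# If only ~1 in 10^50 random polypeptides folds usefully (the figure cited above),
# a hypothetical pool of 10^30 random chains would be expected to contain:
print(f"expected useful folders: {1e30 / 1e50:.0e}")   # 1e-20, i.e. effectively none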

Figure 9. Peptides will not always associate at the same location of the same tRNA. Many kinds of interaction between
tRNA and peptides can occur. For example, ester formation using a free OH group in ribose could occur at many alternative
positions. A, B and C illustrate three examples.
Whether through ester bonds, weak hydrogen bonds or other interactions, without specific base-pairing as mediated by nucleotide polymers, all the countless varieties of polypeptides would not associate consistently at the same location on a tRNA-like molecule. For example, any free hydroxyl group of
ribose is free to react with the carboxyl group of the peptide, forming an ester. All kinds of van der Waal or hydrogen bond
interactions could also occur (figure 9). Therefore, the location of the peptide will not be reliably determined by any particular
codon of the mRNA template.
The mRNA-tRNA interaction alone is not reliable; it would require a considerable number of suitably located base-pairings between these strands over long regions, which is absurd, especially in the absence of any repair machinery. There will often be internal single-strand loops (figure 10) on the tRNA and mRNA. This will prevent a single codon on the mRNA from specifying uniquely and reliably the location of a putative polypeptide attached to the tRNA.
Figure 10. Imperfect base-pairing between the primitive mRNA-tRNA strands would lead to variable
placements of the amino acid associated with the ancestral tRNA. Different ancestral tRNA and
mRNA strands could base pair by chance at various locations. Nature would not accidentally provide
regions of both molecules which just happen to base pair perfectly at the right locations, and simultaneously provide a region on the tRNA at which the right polypeptide would preferentially interact. Imperfect base-pairing and coincidences would lead to internal loops on tRNA and on mRNA. Any amino acid or polypeptide attached to the tRNA will then show up at different positions along the templating mRNA. Even if polypeptide chains would form, their sequences would be random, since nothing resembling a code would exist.
It is important to understand what the author is calling 'tRNA' (see figure 8A).13 Key to his reasoning is that 'Sequence-specific interactions between polypeptides and polynucleotides would result in the accumulation of specific polypeptide-polyribonucleotide pairs.'25 'Proximity between a peptide and an RNA molecule is likely to favour the formation of ester bonds between them.'25 The author assumes this results in the ancestral tRNA. Each such tRNA consists of a specific polypeptide sequence (and not single amino acids), which is chemically bonded to a
unique single-strand RNA (figure 8B). Multiple tRNAs must then strongly base-pair to a matrix mRNA 25 and be held
rigidly25 at specific locations on the template mRNA. But a new peptide bond can only form between adjacent tRNAs if these
are able to come into contact. This implies they must be attached at the ends of tRNAs, as shown in the original literature
drawings,13 and that the tRNAs must be located close together on the mRNA. Otherwise the ester bond (between the
peptide and RNA to form tRNA)25 would be buried and be inaccessible to the amino group of the second peptide it is to bond with. In figure 11 the carboxyl group of tRNA1 is shown inaccessible to the amino group of tRNA2.
To produce the tRNAs the author assumes that portions of alpha-helices, each consisting of different series of amino acids
to provide specificity, would ensure the unique interactions.25 However, random peptides can fold in an almost infinite number of ways and will not form alpha-helices only at specific locations (especially if racemic mixtures of amino acids are used). We must assume that polypeptide chains formed under natural conditions would almost always be amorphous polymers.
Perhaps there is an alternative to having to place proto-tRNAs very close together along the matrix mRNA.
Suppose the locations of the tRNA:mRNA base-pairings are more flexible, permitting them to eventually come close enough to react. This would happen when portions of the tRNA and mRNA cannot base-pair, forming small bulges, or if the tRNA would dissociate from the mRNA and find itself in the vicinity of another tRNA it can react with. In other words, where the reacting tRNAs are actually located with respect to the template mRNA would vary.
However, this would then destroy the notion of the ancient mRNA strand being a true coding template. It would not specify protein sequences nor permit elimination of tRNA-mRNA base-pair interactions (with the help of undefined cofactors to hold tRNA and mRNA together) converging to the single codon used in the genetic code.
In this grand mixture of tRNAs and mRNAs what is to prevent their cross base-pairing? This would permit all the wrong kinds of peptides to be brought together where they could also polymerise.
As peptide chains lengthen, they will start to fold into three-dimensional structures which would surround the esterized point of attachment with the tRNA. This would prevent, for steric reasons, other tRNAs from attaching in the area on the same mRNA, and the functional groups which are to react from approaching each other.
Such a system has no means of self-replicating. Furthermore, postulating multiple covalent ester bonds implies some kind of hot, dry environment, which is inconsistent with the favoured evolutionary environments presented as candidates for where life would have arisen.
Figure 11. Ester bonds between peptides and
templating mRNAs would be buried in polypeptide
chains, preventing further polymerization. In ribosomes
the protein chains being formed are held in place such
that the reactive carboxyl (-COOH) and amine (-NH2)
groups can easily react together, no matter how large
the growing protein becomes. This fact is overlooked in
the simplistic model being discussed. As the protein
size grows, the ester bond would become ever more
protected by a mass of amorphous polypeptide. After a
short polypeptide has formed, further polymerization
would be prevented, since the carboxyl and amine
functional groups won't come into contact. Our greatest objection: nothing which needs to be explained has been
seriously addressed. Precisely what are these cofactors which are supposed to permit evolution to real ribosomes and
aaRSs? These machines (ribosomes, aaRSs, etc.) require dozens of precisely crafted proteins, and it would take multiple
miracles to generate precise molecular tools to systematically replace the base-pairings used to link the tRNA and mRNA
strands,13 leaving only the codon-anticodon interactions. This is how modern ribosomes supposedly eventually arose. Note
that in the earlier evolutionary stages a huge number of unique base-pairings were postulated, which permitted
unambiguous association of each ancestral tRNA with a precise portion of an mRNA. In the model, these base-pairings are
systematically eliminated but the specificity (i.e. which tRNA attaches to which portion of an mRNA) must not be lost.
Concurrently, other undefined evolving cofactors are responsible for eventually linking a single amino acid to the correct tRNA,
as modern aaRSs do. Is this feasible? According to the model, 13 initially a multitude of different polypeptides (with 17 or
more residues)13 each bonded to a specific RNA, leading to an ancient tRNA. (By tRNA the author actually means an
ancient charged tRNA which uses a polypeptide and not a single amino acid). Twenty amino acids at seventeen positions
leads to 20^17 ≈ 1.3 × 10^22 possible tRNAs plus many others having longer or shorter attached polypeptides. The carboxyl
and amino ends of these large polypeptides then bond to form the primitive proteins (figure 11). The author does not explain
how a tiny fraction of the more than 10^22 alternatives were selected, nor does he consider whether a minuscule subset would suffice to provide the minimal biological needs based on such crude proteins.
In the modern code, every residue of each protein is coded for, which permits any sequence of residues to be produced.
The proposed ancient code, however, would only be able to code for individual large, discrete amino acid blocks.
Alberti believes that shorter and shorter polypeptide chains would eventually be needed to identify the correct RNA they
must bond to. This process must culminate in true aaRSs, which charge a single amino acid to a specific RNA strand (i.e.
real tRNAs). (Recall that initially longer polypeptides, which form alpha-helices, would be required to permit specific
identification of the RNA they are to form an ester bond with). The author has provided no details which justify the claim that unguided nature could produce this effect with cofactors or any other natural method.
But yet another fundamental point has been overlooked. It is assumed that originally discrete blocks of polypeptide bonded
together, providing the necessary proteins. Amino acids are now being eliminated, leading to shorter blocks. As the
polypeptides attached to the RNA strands shorten, different sequences would bond to the same RNA strand as before,
producing an evolving code in which each tRNA would be charged with different polypeptides. It is not obvious why
modification of an individual block by eliminating amino acids would still lead to acceptable primitive proteins. And evolving
all the blocks would lead to utter chaos. The exact same mRNA would now produce vastly different protein versions.
As cofactors are introduced between proto-tRNAs and mRNAs, and between peptides and tRNAs, the spatial relationships permitting earlier bonding of peptides together will be destroyed.
Instead of continuing with these kinds of vague chemical hypotheses, it seems more sensible for evolutionists to avail themselves of any chemical materials they wish (knowing full well they were of biological origin) and to show in a laboratory something specific and workable. If intelligently organizing all the components in any manner desired (besides simply reproducing an existing genetic system) can't be made to work, then
under natural conditions with > 99.999% contamination, UV light and almost infinite dilution, a code-based replicator is
simply not going to arise.
(II) Coevolution of biosynthetically related amino acid pathways
In this view, the present code reflects a historical development. New, similar amino acids would evolve over time from
existing synthesis pathways and be assigned to similar codons. Several researchers claim 34 that biosynthetically related
amino acids often have codons which differ by only a single nucleotide. It is also claimed 35 that the class II synthetases are
more ancient than class I, and so the ten amino acids served by class II would have arisen earlier in the development of the
genetic code.
Objections. We cannot provide a thorough analysis of this hypothesis here. The argument is weakened considerably,
however, by the fact that many amino acids are interconvertible. Even randomly generated codes show similar associations
between amino acids which are biosynthetically related,34 and it is not at all clear which amino acids are to be considered biosynthetically related.36
Nature would have to experiment with many possible codes and create many new
biochemical networks to provide new amino acids to test. This would require novel genes. Nature cannot look ahead and
sacrifice for the future, so each of the multitudes of intermediate exploratory steps cannot require deleterious stages. This
poses impossible challenges to what chance plus natural selection could accomplish. We discussed the notion of testing
different codes elsewhere.1
If only a subset of amino acids were used in an earlier life form, the necessary evidence should
be available. The highly conserved proteins, presumed to be of very ancient origin, should demonstrate a strong usage of
the originally restricted amino acid set. This expectation is especially true if the extant sequences demonstrate little
variability at the same residue positions. Furthermore, the first biosynthetic pathway could only have been built with proteins
based on the amino acids available at that time. The residue compositions of members from both ancient and more modern
pathways could be compared to see if a bias exists.
Is it unreasonable to demand this kind of supporting evidence? Suppose
someone reported that the proteins used by the class II synthetases machinery relied on only the amino acids produced
thereby. Every evolutionist alive would use this as final and conclusive proof for the theory. Then why should one be
reluctant to make such a prediction? Without looking at the data yet, we predict this will not be the case.
(III) Evolution and optimisation to prevent errors
Some have proposed37–39 that genetic codes evolved either to minimize errors during translation of mRNA into protein, or the
severity of the outcome40,41 which results. A similar proposal40,42 is that the effects of amino acid substitution through
mutations are to be minimized by decreasing the chances of this occurring and the severity of the outcome should they
occur. It would be desirable if random mutations would merely introduce residues with similar physicochemical
properties.43,44
Amino acids can be characterized by at least 134 different physicochemical properties,45 raising the question as to which property or cluster of properties is most important. For example, measures of amino acid volumes seem less important than polarity criteria.46 In addition, C→G mutations tend to be more frequent than A→U mutations,47 which an optimised genetic coding convention would need to take into account. Transition mutations48 tend to occur more
frequently than transversion mutations.48 During translation (and DNA replication), transitional errors are most likely, since
mistaking a purine for the other purine or a pyrimidine for the other one is, for stereochemical reasons, more
likely.Therefore, the best genetic codes would provide redundancy such that the most likely translation errors or mutations
would result in the same amino acid very often. Freeland and Hurst49 took this into account when comparing with a computer
a million randomly generated codes having the same pattern of codon assignments to different amino acids as the standard
code. Using a measure of hydrophobicity as the only key attribute to be protected by a coding convention (and taking
nucleotide mutational bias into account) they found only one code out of a million which by the hydrophobicity criterion
alone, would be better. We are convinced that taking more factors to be optimised into account would reveal this proportion
to be much smaller.Hydrophobicity reflects the tendency of amino acids to avoid contact with water and to be present in the
buried inner core of folded proteins. Unfortunately, no best measure of hydrophobicity for amino acids has been agreed
upon, and at least 43 different laboratory test methods have been suggested. 50 The different criteria often lead to very
different ranking of amino acid hydrophobicity.50Others have thought that mutability played an important role: robustness was
important for conservation of some proteins but mutability was required to permit evolution also. 51 Still others have focused
on overall effects of mutations on protein surface interactions with solvent52which lead to protein secondary features such as
alpha helices and beta sheets.53Having the option of using different codons to code for the same amino acid can be
advantageous. For example, if a low concentration of the protein is desired, synonymous codons can be used which lead to
slower translation54,55 by taking advantage of the fact that the corresponding aaRSs are often present in very different
proportions. If a specific tRNA is only present in a low concentration, the target codon must wait much longer to be
translated than if the tRNA is highly available. Sharp et al. reported56 that highly-expressed genes indeed preferentially use
those codons which lead to faster translation. This is realized by maintaining different concentrations of the corresponding
aaRSs. Translation of an mRNA can be slowed down if a rare codon being translated by a ribosome needs to wait until the
appropriate charged tRNA stumbles into that location, for example to give time for a portion already translated to initiate
folding.57
It is not obvious which property or properties of amino acids should be conserved in the presence of mutations. One suggestion by Freeland and colleagues16 is to use point accepted mutation (PAM) 74–100 matrix data. Comparing
aligned versions of genes which have been mutating (in organisms presumably sharing a common ancestor) after about 100
million years would presumably reveal which amino acid substitutions are more variable or on the other hand, more
intolerant to substitution. The authors then examined whether the assignment of synonymous codons protected against
such changes, and concluded58 the universal genetic code achieves between 96% and 100% optimisation relative to the
best possible code configuration.
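As an illustration of the kind of computation Freeland and Hurst describe, the sketch below is our own simplified version, not a reproduction of their analysis: it uses the Kyte-Doolittle hydropathy scale rather than polar requirement, applies no mutation or mistranslation weighting, and shuffles amino acids among the standard code's synonymous blocks. It scores each code by the mean squared hydropathy change over all single-nucleotide substitutions and counts how many random codes beat the canonical one.

import random
from itertools import product

# Standard genetic code (UCAG order); '*' = stop.
BASES = "UCAG"
AA = ("FFLLSSSSYY**CC*W" "LLLLPPPPHHQQRRRR"
      "IIIMTTTTNNKKSSRR" "VVVVAAAADDEEGGGG")
STD_CODE = dict(zip(("".join(c) for c in product(BASES, repeat=3)), AA))

# Kyte-Doolittle hydropathy values for the 20 amino acids.
HYDRO = {"I": 4.5, "V": 4.2, "L": 3.8, "F": 2.8, "C": 2.5, "M": 1.9, "A": 1.8,
         "G": -0.4, "T": -0.7, "S": -0.8, "W": -0.9, "Y": -1.3, "P": -1.6,
         "H": -3.2, "E": -3.5, "Q": -3.5, "D": -3.5, "N": -3.5, "K": -3.9, "R": -4.5}

def error_cost(code):
    # Mean squared hydropathy change over all single-nucleotide substitutions
    # that turn one sense codon into another sense codon.
    diffs = []
    for codon, aa in code.items():
        if aa == "*":
            continue
        for pos in range(3):
            for b in BASES:
                if b == codon[pos]:
                    continue
                mutant = code[codon[:pos] + b + codon[pos + 1:]]
                if mutant != "*":
                    diffs.append((HYDRO[aa] - HYDRO[mutant]) ** 2)
    return sum(diffs) / len(diffs)

def random_code():
    # Keep the standard code's synonymous blocks, but shuffle which amino acid
    # each block stands for; stop codons stay where they are.
    amino = sorted(set(AA) - {"*"})
    swap = dict(zip(amino, random.sample(amino, len(amino))))
    return {c: (aa if aa == "*" else swap[aa]) for c, aa in STD_CODE.items()}

std = error_cost(STD_CODE)
trials = 10_000
better = sum(error_cost(random_code()) < std for _ in range(trials))
print(f"standard-code cost {std:.2f}; random codes doing better: {better} of {trials}")

The proportion found this way depends heavily on which property is protected and how substitutions are weighted, which is why the figures reported in the literature vary with the assumptions used.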
Mechanisms for codon swapping. There are various scenarios1 as to how codons could begin to code for a different
amino acid. According to the Osawa-Jukes model59 mutations cause some codons to disappear from the genome, and the
relevant tRNA genes, being superfluous, disappear. At this point these genomes would not have all 64 of the possible
codons present in protein-coding regions. This process is thought to be caused by a mutational bias leading to higher A-T or
G-C genome content. When later this mutational bias reverses, the missing codons would begin to show up somewhere in
the genome. These could no longer be translated, since the corresponding tRNA is lacking. But duplication of a gene for tRNA followed by mutations at the anticodon position might permit recognition of the new codon on the mRNAs, which would now translate for a different amino acid.
The Schultz-Yarus60 model is similar but permits the codon to remain partially present in the genome. Mutations on a duplicated tRNA produce a different anticodon or a new amino acid charging
specificity and thereby ambiguous translation of a codon (i.e. the same codon could be identified by different tRNAs).
Natural selection would then optimise a particular combination. Incidentally, in some Candida species CUG will encode
either serine or leucine,60 depending on the circumstances.
Objections. We have discussed various difficulties with the notion of trial-and-error attempts to find better coding
conventions elsewhere.1 There are over 1.5 × 10^84 codes which could map 64 codons to 20 amino acids plus at least one stop signal.1 This is a huge search space, and most of the alternatives would have to be rejected. But when would nature know a better or worse coding convention is being explored? Several stages are needed.
Many genes would have to be
functionally close to optimal so that natural selection could identify when random mutations would produce inferior versions.
This means that an unfathomably large number of mutational trials would be needed to produce many optimal genes.
Interference with a mutating genetic code would hinder natural selection's efforts.
One or more codons would have to be
recoded and the effects throughout the whole genome ascertained. During this process many codons would be ambiguous,
such that a myriad of protein variants would be generated by almost all genes, in the same individual. Natural selection
would be faced with a continuously changing evaluation as to whether the evolving codon would be advantageous.
One evolving coding convention needs to be completed before another one can be initiated. For example, if during the interval when 70% of the time a codon leads to amino acid a and 30% of the time to b, additional codons were to also become
ambiguous, cellular chaos would result. Besides, we see nowhere in nature examples of a multitude of ambiguous codons
present simultaneously in an organism.
Generating a new code demands removing the means of producing the original coding option. Depending on the mechanism of code evolution, this could mean removing duplicate tRNA or aaRS variants throughout the whole population. This is going to be near impossible since the selective advantage would be minimal, and at best would consume a huge amount of a key evolutionary resource: time.
Nature can't know in advance which coding
convention would eventually be an improvement. An initial 0.1% ambiguity in a single codon, which may be limited to a
single gene (such as the case of specific chemical modifications of mRNA), is hardly going to be recognized by natural
selection. Note that this 0.1% alternative amino acid would be distributed randomly across all copies of this codon on a
gene, and the resulting proteins would be present in multiple copies. The alternative residue would be present in only a
small minority of these proteins, and randomly.
Once a new code has been fixed, this limits the direction future evolutionary
attempts can take. There is no mechanism in place to allow a return to a previous code once it was abandoned other than to
re-evolve back to that system. Given the large number of unrelated factors which determine prokaryote survival from the
external environment and quality of the genetic system, natural selection would not be provided with any consistent
guidance. The rules would change constantly. And a multitude of criteria need to be taken into account simultaneously in
deciding what to do with each codon. Codons can be used by several codes not related to specifying amino acids, 61 and the
relative importance of the tradeoffs will change constantly.
Discussion
We believe the genetic apparatus was designed, and agree there must be a logical reason for the codon → amino acid mapping chosen. We suspect that protection against the effects of mutations is indeed one of the factors which went into the
choice made. This would require foreknowledge of all the kinds of genes needed by all organisms and a weighting of the
damage each kind of amino acid substitution could cause. Optimal design may also require variants of the code to be used
for some of the intended organisms. But we wish to emphasize that the code to determine amino acid order in proteins is
not the whole story. Many other codes61–63 are superimposed on the same genes and noncoding regions, and must also be
taken into account in the design of the code. Various nucleotide patterns are used for DNA regulatory and structural
purposes. DNA must provide information for many other processes besides specifying protein sequences. These
requirements affect which code would be universally optimal.
Interestingly, a design theoretician may well make a similar
suggestion to that of Freeland and colleagues16 mentioned above, but based on other reasoning. To a first approximation,
the optimal design of the same proteins in different organisms would be similar. For various reasons, occasionally
substituting an amino acid would be better. For example, in hot environments the proteins may have to fold more tightly,
whereas this design could prevent enzymatic activity under cooler conditions, by embedding a reactive site too deeply in a rigid hydrophobic core. In general, optimal protein variants must often use residues with similar properties, such as hydrophobicity or size, at a given position. The genes would not be similar due to common descent but due to design requirements. Mutations would subsequently generate less-than-optimal variants which would still be good enough.
An
intelligently planned genetic code would have taken this into account. Therefore, to a first approximation, comparing aligned
genes and determining substitutability patterns would indeed provide useful information as to amino acid requirements and
use of alternatives. If enough taxa living in many environments are used as a dataset, we should be able to obtain a good
idea as to the amount of variability homologous proteins would have. Of course noise, in the form of random mutations, will
also be present. Knowledge of other superimposed codes, not responsible for coding for protein sequences, would permit even better quantification as to how optimal the standard code really is. Various alternative codes must satisfy many design requirements, and the optimal one will do best for all demands placed on it.
There is, however, one key difference in the
reasoning. We propose that the designer knew what the ideal protein sequences should be, and therefore which needed
protection from mutations, and all the other roles nucleotide sequences need to play. The evolutionist here has a problem.
Fine-tuning hundreds or thousands of genes concurrently via natural selection to produce a near optimal ensemble is
absurd. During the time when the regulation of biochemical networks and enzymes is being optimised, the rules in the
form of the code would also be changing. Yet a Last Universal Common Ancestor (LUCA) supposedly already had
thousands of genes64 and the full set of tRNA synthetases and tRNAs7 about 2.5 billion years ago.65 Actually, other lines of reasoning66 have led to the belief that the genetic code is almost as old as our planet. In other words, it had virtually no time to evolve and yet is near optimal in the face of over 1.5 × 10^84 alternative 64 → 21 coding conventions.
We see evidence everywhere of cellular machinery designed to identify, ameliorate and correct errors. In sexually reproducing organisms we observe that genes are duplicated, which mitigates the effects of many deleterious mutations and thereby helps organisms retain
morphologic function. Many evolutionists now propose nature has attempted to conserve complex functionality from
degradation. All this implies that a highly optimal state has been achieved which nature is trying to retain. More consistent
with evolutionary thought would be proposals which encourage evolvability or adaptation. Evolution from simple to specified
complexity is not achieved by hindering change.
Summary
A key element in evolutionary theory is that life has gone from simple to complex. But requiring the minimal components of a
genetic code to be simultaneously in place without intelligent guidance is indistinguishable from demanding a miracle. No
empirical evidence motivated searches for simpler or less optimal primitive genetic codes. Once the possibility of Divine
activity has been excluded as the causal factor, an almost unquestioning willingness to accept absurd notions is created
among many scientists. After all, it must have happened!
We conclude that no one has proposed a workable naturalistic model that shows how a genetic code could evolve from a
simpler into a more complex version.
Evidence for the design of life: part 1 - Genetic redundancy


by Peer Terborg
Knockout strategies have demonstrated that the function of many genes cannot be studied by disrupting them in model organisms, because the inactivation of these genes does not lead to a phenotypic effect. For living systems, this peculiar phenomenon of genetic redundancy seems to be the rule rather than the exception. Genetic redundancy is now defined as the situation in which the disruption of a gene is selectively neutral. Biology shows us that 1) two or more genes in an organism can often substitute for each other, and 2) some genes are just there in a silent state. Inactivation of such redundant genes does not jeopardize the individual's reproductive success and has no effect on survival of the species. Genetic redundancy is the big surprise of modern biology. Because there is no association between redundant genes and gene duplications, and because redundant genes do not mutate faster than essential genes, redundancy brings down more than one pillar of contemporary evolutionary thinking.

Figure 1. To create a mouse knockout for a particular gene, a selectable marker is integrated into the gene of interest in an embryonic stem cell. The marker disrupts (knocks out) the gene of interest. The manipulated embryonic stem cell is then injected into a mouse blastocyst and transplanted back into the uterus of a pseudo-pregnant mouse. Offspring carrying the interrupted gene can be sorted out by screening for the presence of the selection marker. It is now fairly easy to obtain animals in which both copies are interrupted through selective breeding. Mendel's law of segregation assures that crossbreeding littermates will produce individuals that lack both genes.

The discovery of the primary rules governing biology in the second half of the 20th century paved the way for a more fundamental understanding of the complexity of life. One of the spin-offs of this knowledge has been the development of sophisticated techniques to elucidate the function of proteins. When molecular biologists want to know the function of a particular human protein they genetically modify a laboratory mouse so that it lacks the corresponding gene (for the laboratory procedure see figure 1). Mice that have both alleles of a gene interrupted cannot produce the corresponding protein; they are called knockouts. Theoretically, the phenotype of a mouse lacking specific genetic information could provide essential information about the function of the gene. Over the years, thousands of knockouts have been generated. The knockout strategy has helped elucidate the functions of hundreds of genes and has contributed immensely to our biological knowledge. However, there has been one unexpected surprise: the no-phenotype knockout. This is unexpected because, according to the Darwinian paradigm, all genes should have a selectable advantage. Hence, knockouts should have measurable, detectable phenotypes. The no-phenotype knockouts demonstrate that genes can be disrupted without, or with only minor, detectable effects on the phenotype. Many genes seem to have no measurable function! This is known as genetic redundancy and it is one of the big surprises of modern biology.
Molecular switches
One of the most intriguing examples of genetic redundancy is found in the SRC gene family. This family comprises a group of eight genes that code for eight distinct proteins, all with a function that is technically known as tyrosine kinase activity. SRC proteins attach phosphate groups to other proteins that contain the amino acid tyrosine in a specific amino acid context. The result of this attachment is that the target protein becomes activated; it is switched on, and can hence pass down information in a signalling cascade. Four closely related members of the family are named SRC, YES, FYN and FGR, and the other related members are known as BLK, HCK, LCK and LYN. All eight are so-called non-receptor tyrosine kinases, and they transmit signals from the exterior of the cell to the nucleus, the operation centre where the information present in the genes is transcribed into messenger RNA. The proteins of the SRC gene family operate as molecular switches that regulate growth and differentiation of cells. When a cell is triggered to proliferate, tyrosine kinase proteins are transiently switched on, and then immediately switched off.

The SRC gene family is among the most notorious gene families known, since its members can cause cancer as a consequence of single point mutations. A point mutation is a change in a DNA sequence that alters only one single nucleotide, a DNA 'letter', of the entire gene. When the point mutation is not at a silent position, it will cause the organism's protein-making machines to incorporate a wrong amino acid. The consequence of the point mutation is that the organism now produces a protein that cannot be switched off. Mutated SRC genes are a particular danger because they will permanently activate signalling cascades that induce cell proliferation: the signal that tells cells to divide is permanently switched on. The result is uncontrolled proliferation of cells, that is, cancer. The growth-promoting point mutations cannot be overcome by allelic compensation, because a normal protein cannot help to switch off the mutated protein.

Despite the SRC protein being expressed in many tissues and cell types, mice in which the SRC gene has been knocked out are still viable. The only obvious characteristic of the knockout is the absence of two front teeth due to osteopetrosis. In contrast, there are essentially no point mutations allowed in the SRC protein without severe phenotypic consequences. Amino-acid-changing point mutations in most, presumably all, of the SRC genes can lead to uncontrolled cellular replication.1 Knockout mouse models have been generated to reveal the functions of all the members of the SRC gene family. Four out of eight knockouts did not have a detectable phenotype. Despite their cancer-inducing properties, half of the SRC genes appear to be redundant. Standard evolutionary theory tells us that redundant gene family members originated through gene duplications. Duplicated genes are truly redundant and as such they are expected to reduce to a single functional copy over time through the accumulation of mutations that damage the duplicated genes. Such mutations can be frame-shift mutations that introduce premature stop signals, which are recognized by the cellular translation machines as instructions to terminate protein synthesis. The existence of the SRC gene family has been explained as follows:

In the redundant gene family of SRC-like proteins, many, perhaps almost all point mutations that damage the protein also cause deleterious phenotypes and kill the organism. The genetic redundancy cannot decay away through the accumulation of point mutations.1

This scenario implies that the SRC genes are destined to reside in the genome forever. Point mutations that immediately kill raise an intriguing origin question. If the SRC genes are really so potently harmful that point mutations induce cancer, how could this extended gene family come into existence through gene duplication and diversify through mutations in the first place? After the first duplication, neither of the genes is allowed to change, because change will invoke a lethal phenotype and kill the organism through cancer. Amino-acid-changing mutations in the SRC genes will permanently be selected against. The same holds true for the third, fourth and additional gene duplications. New gene copies are only allowed to mutate at neutral sites that do not replace amino acids in the protein. Otherwise the organism will die from tumours. Because of this purifying selection mechanism, the duplicates should remain as they are. Yet the proteins of the SRC family are distinctly different, sharing only 60-80% of their sequences.
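To make the point-mutation mechanism described above concrete, the following Python sketch translates a short, invented DNA fragment with and without a single-nucleotide change. Only the standard codon assignments in the tiny lookup table are real; the sequence and the mutated position are chosen purely for illustration and have nothing to do with the actual SRC genes.

# Minimal codon table for this example only (standard genetic code assignments).
CODON_TABLE = {"ATG": "Met", "TAT": "Tyr", "TGT": "Cys", "GGC": "Gly"}

def translate(dna):
    """Translate a DNA string codon by codon using the small table above."""
    return [CODON_TABLE.get(dna[i:i + 3], "?") for i in range(0, len(dna), 3)]

wild_type = "ATGTATGGC"   # Met-Tyr-Gly
mutant    = "ATGTGTGGC"   # one A-to-G change in the middle codon

print(translate(wild_type))  # ['Met', 'Tyr', 'Gly']
print(translate(mutant))     # ['Met', 'Cys', 'Gly']: a single letter swaps the amino acid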
Redundancy: the rule, not the exception

In 1964, a 'knockout' cross-country skier won two gold medals during the Winter Olympics in Innsbruck. In true Olympic tradition, Eero Maentyranta's 15 and 30 km success was surrounded by controversy. Tests showed that he had 15% more red blood cells than normal subjects, and Eero was accused of using doping to increase his level of red blood cells. Yet no trace of blood doping could be found. In 1964 nobody knew why, but modern biology showed that Maentyranta had a mutated EPO receptor gene. The receptor's ligand, erythropoietin (EPO), is a messenger protein that tells the bone marrow to increase the production of red blood cells. To increase red blood cell levels, EPO binds to the EPO receptor, which generates two opposite signals: one to instruct bone marrow cells to become red blood cells (the on-switch) and one to reduce production of red blood cells (the off-switch). This auto-regulatory mechanism assures a balanced production of red blood cells. In 1993, it turned out that the Olympic medallist had a mutation that knocked out the off-switch.2 The EPO receptor of the Finnish athlete generated a normal activation signal, but not the deactivating one. People can do well without the off-switch.

In humans, the muscle-fibre-producing ACTN3 gene can also be missing entirely, without consequences for fitness.3 Humans can also do without the GULO gene,4 the gene coding for caspase 12,5 the CCR5 gene6 and some of the GST genes that are involved in the detoxification of polycyclic aromatic hydrocarbons present in cigarette smoke.7 All these genes can be found inactivated in entire human populations (GULO, caspase 12) or subpopulations thereof. The Douc Langur (Pygathrix nemaeus), an Asian leaf-eating colobine monkey, is the natural no-phenotype knockout for the angiogenin gene, which codes for a small protein that stimulates the formation of blood vessels.8 Bacterial genomes can be reduced by over 9% without selective disadvantage on minimal medium,9 and mice in which 3 megabases of conserved DNA were erased showed no signs of reduced survival and no indication of overt pathology.10 Fewer than 2% of approximately 200 Arabidopsis thaliana (Mouse-Ear Cress) knockouts displayed significant phenotypic alterations. Many of the knockouts did not affect plant morphology even in the presence of severe physiological defects.11 In the nematode worm Caenorhabditis elegans a surprising 89% of single-copy and 96% of duplicate genes show no detectable phenotypic effect when they are knocked out.12 Prion proteins are thought to have a function in learning processes, but when they are misfolded they can cause bovine spongiform encephalopathy (BSE) or Creutzfeldt-Jakob disease. In order to make BSE-resistant cows, a knockout breed has been created lacking the prion protein. A thorough health assessment of this knockout breed revealed only small differences from wild-type animals. Apparently, cows can thrive very well without the prion protein.13 Research on histone H1 genes, once believed to be indispensable for DNA condensation, suggests that no individual H1 subtype is necessary for mouse development, and that loss of even two subtypes is tolerated if a normal H1-to-nucleosome stoichiometry is maintained.14 Even complete, highly specialized cells can be redundant. A strain of laboratory mouse, named WBB6F1, lacks a specific type of blood cell known as mast cells.

The reported no-phenotype knockouts are probably only the tip of the iceberg. As reported in Nature, few knockout organisms in which no phenotype could be traced ever see the light of day: 'a lot of those things [no-phenotype knockouts] you don't hear about'. No-phenotype knockouts are negative results, and as such they are usually not reported in scientific journals, because they have no news value. To address the problem, the journal Molecular and Cellular Biology has since 1999 had a section given over to knockout and other mutant mice that seem perfectly normal.15

So how are genes, cells and organisms supposed to have evolved without selective constraints? If organisms can do without complete cells, it would be outlandish to assert that natural selection was the driving force shaping those cells. Two decades of knockout experiments have made it clear that genetic redundancy is a major characteristic of all studied life forms.
Paradigm lost
Genetic redundancy falsifies several evolutionary hypotheses. Firstly, truly redundant genes are an evolutionary paradox, because natural selection cannot prevent the accumulation of harmful mutations in these genes. Hence, natural selection cannot prevent redundancies from being lost. Secondly, redundant genes do not evolve (mutate) any faster than essential genes. If protein evolution is due in large part to neutral and slightly deleterious amino acid substitutions, then the incidence of such mutations should be greater in proteins that contribute less to individual reproductive success. The rationale for this prediction is that non-essential proteins should be subject to weaker purifying selection and should accumulate mildly deleterious substitutions more rapidly. This argument, which was presented over twenty years ago, is fundamental to many theoretical applications of evolutionary theory, but despite intense scientific scrutiny the prediction has not been confirmed. On the contrary, a systematic analysis of mouse genes has shown that essential genes do not evolve more slowly than non-essential ones.16 Likewise, E. coli proteins that operate in huge redundant networks can tolerate just as many mutations as unique single-copy proteins,17 and scientists comparing the human and chimpanzee genomes found that non-functional pseudogenes, which can be considered as redundancies, have similar percentages of nucleotide substitutions as essential protein-coding genes.18 Thirdly, as discussed in more detail below, several recent biology studies have provided evidence that genetic redundancy is not associated with gene duplications.
What does the evolutionary paradigm say?
An important question that needs to be addressed is: can we understand genetic redundancy from Darwin's natural selection perspective? How can genetic redundancy be maintained in the genome without natural selection acting upon it continually? How did organisms evolve genes that are not subject to natural selection? First, let's look at how it is thought genetic redundancies arise. Susumu Ohno's influential 1970 book, Evolution by Gene Duplication, deals with this idea.19 Sometimes, during cell divisions, a gene or a longer stretch of biological information is duplicated. If the duplication occurs in germ-line cells and becomes heritable, the exact same gene may be present twofold in the genome of the offspring: a genetic back-up. Ohno argues that gene and genome duplications are the principal forces that drive the increasing complexity of Darwinian evolution, referring to the evolution from microbes to microbiologists. He proposes that duplications of genetic material provide genetic redundancies which are then free to accumulate mutations and adopt novel biological functions. Duplicated DNA elements are not subject to natural selection and are free to transform into novel genes. With time, he argues, a duplicated gene will diverge with respect to expression characteristics or function due to accumulated (point) mutations in the regulatory and coding segments of the duplicate. Duplicates transforming into novel genes with a selective advantage will certainly be favoured by natural selection. Meanwhile, the genetic redundancy will protect old functions as new ones arise, hence reducing the lethality of mutations. Ohno estimates that for every novel gene to arise through duplication, about ten redundant copies must join the ranks of functionless DNA base sequences.20 Diversification of duplicated genetic material is now the accepted standard evolutionary idea of how genomes gain useful information. Ohno's idea of evolution through duplication also provides an explanation for the no-phenotype knockouts: if genes duplicate fairly often, it is reasonable to expect some level of redundancy in most genomes, because duplicates provide an organism with back-up genes. As long as duplicates do not change too much, they may substitute for each other. If one is lost, or inactivated, the other one takes over. Hence, Ohno's theory predicts an association between genetic redundancy and gene duplication.
The evolutionary paradigm is wrong

Figure 2. A very simple scheme of a small robust network comprised of nodes A-E, where several nodes are redundant.

Some biologists have looked into this matter specifically, using the wealth of genetic data available for Saccharomyces cerevisiae, the common baker's yeast. A surprising 60% of Saccharomyces genes could be inactivated without producing a phenotype. In 1999, Winzeler and co-workers reported in Science that only 9% of the non-essential genes of Saccharomyces have sequence similarities with other genes present in the yeast's genome and could thus be the result of duplication events.21 Most redundant genes of Saccharomyces are not related to other genes in the yeast's genome, which suggests that genetic duplications cannot explain genetic redundancy. In 2000, Andreas Wagner confirmed Winzeler's original findings that weak or no-effect (i.e. non-essential and redundant) genes are no more likely to have paralogous (that is, duplicated) genes within the yeast genome than genes that do result in a defined phenotype when they are knocked out. Wagner concluded that the robustness of mutant strains cannot be caused by gene duplication and redundancy, but is more likely due to the interactions between unrelated genes.22 More recent studies have shown that cooperating networks of unrelated genes contribute significantly more to robustness than gene copy number.23 Redundant genes are proposed to have originated in gene duplications, but we do not find a link between genetic redundancy and duplicated genes in the genomes. Gene duplication is not a major contributor to genetic redundancy, and it cannot explain the robust genetic networks found in organisms. The predicted association between genetic redundancy and gene duplication is non-existent. Ohno's interesting idea of evolution by gene duplication therefore cannot be right.
The non-linearity of biology
The no-phenotype knockouts can only be explained by taking into account the non-linearity of biochemical systems. It is ironic that standard wall charts of biochemical reactions show hundreds of coupled reactions working together in networks, while graduate students are tacitly encouraged to think in terms of linear cause and effect. The linear cause-and-effect thinking of ancient Greek philosophy was adopted by nineteenth-century European scholars, and it still dominates most fields of science, including biology. We cannot understand genetic redundancy and biological robustness in linear terms of single causality, where A causes B, B causes C, C causes D and D causes E. Biological systems do not work like that. Biological systems are designed as redundant, scale-free networks. In a scale-free network the distribution of node linkage follows a power law: there are many nodes with a low number of links, fewer nodes with an intermediate number of links, and very few hub nodes with a very high number of links. A scale-free network is very much like the Golden Orb's web: individual nodes are not essential for letting the system function as a whole. The internet is another example of a robust scale-free network: the majority of websites make only a few links, a smaller fraction make an intermediate number of links, and a small minority make the majority of links. Hundreds of routers routinely malfunction on the internet at any moment, but the network rarely suffers major disruptions. As many as 80% of randomly selected internet routers can fail, but the remaining ones will still form a compact cluster in which there will still be a path between any two nodes.24 Likewise, we rarely notice the consequences of the thousands of errors that routinely occur in our cells.
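The claim that a scale-free network survives even massive random node failure can be illustrated with a small simulation. The Python sketch below uses the networkx library and a Barabási-Albert preferential-attachment graph as a stand-in for a scale-free network; the network size and failure rate are arbitrary choices for illustration, not data from the cited study.

import random
import networkx as nx

random.seed(1)
G = nx.barabasi_albert_graph(n=10_000, m=3, seed=1)  # a scale-free-like test network

# Knock out 80% of the nodes at random, as in the router-failure figure quoted above.
failed = random.sample(list(G.nodes), k=int(0.8 * G.number_of_nodes()))
G.remove_nodes_from(failed)

# Most of the surviving nodes typically remain connected in one large cluster.
largest = max(nx.connected_components(G), key=len)
print(f"{len(largest)} of {G.number_of_nodes()} surviving nodes are still in one cluster")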
Scale-free networks
Genes never operate alone but in redundant, scale-free networks with an incredible level of buffering capacity. In the simple non-linear biological system presented in figure 2, with nodes A through E, A may cause B, but A also causes D independently of B and C. This very simple network of only five nodes demonstrates robustness due to the redundancy of B and C. If A fails to make the link with D, there are still B and C to make the connection. Extended networks composed of hundreds of interconnected proteins ensure that if one node becomes inactivated by a mutation, essential pathways are not immediately shut down. A network of cooperating proteins that can substitute for or bypass each other's functions makes a biological system robust. It is hard to imagine how selection acts on individual nodes of a scale-free, redundant system. Complex engineered systems rely on scale-free networks that can absorb small failures in order to prevent larger failures. In a sense, cooperating scale-free networks provide systems with an anti-chaos module which is required for stability and strength. Scale-free genetic and protein networks are an intrinsic, engineered characteristic of genomes and may explain why genetic redundancy is so widespread among organisms. Genetic networks usually serve to stabilize and fine-tune the complex regulatory mechanisms of living systems. They control homeostasis, regulate the maintenance of genomes and provide regulatory feedback on gene expression. An overlap in the functions of proteins also ensures that a cell does not have to respond with only 'on' or 'off' in a particular biochemical process, but instead may operate somewhere in between.

Most genes in the human genome are involved in regulatory networks that detect and process information in order to keep the cell informed about its environment. The proteins operating in these networks come from large gene families with overlapping functions. In a cascade of activation and deactivation of signalling proteins, external messages are transported to the nucleus with information about what is going on outside, so the cell can respond adequately. If one of the interactions disappears, this will not immediately disturb the balance of life. The buffering capacity present in redundant genetic networks also provides the robustness that allows living systems to propagate in time. In a linear system, one detrimental mutation would immediately disable the system as a whole: the strength of a chain is determined by its weakest link. Interacting biological networks, where parallel and converging links independently convey the same or similar information, almost never fail. The Golden Orb's web only crumbles when an entire spoke is obliterated, say in a crash with a dragonfly, an event that will hardly ever happen. Biological systems operate like a spider's web: many interacting and interwoven nodes produce robust genetic networks and are responsible for genetic redundancy.23
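The five-node network of figure 2 can be put into code to show the redundancy the text describes. The adjacency list below is one possible reading of the figure (A linked to B, C and D; B and C linked on to D; D linked to E); it is only a toy, but it makes explicit that knocking out individual nodes, or the direct A-D link, still leaves a route from A to E.

network = {"A": {"B", "C", "D"}, "B": {"D"}, "C": {"D"}, "D": {"E"}, "E": set()}

def reachable(net, start, target, broken=frozenset()):
    """Breadth-first search that ignores any 'broken' (knocked-out) nodes."""
    seen, queue = {start}, [start]
    while queue:
        node = queue.pop(0)
        if node == target:
            return True
        for nxt in net[node] - set(broken) - seen:
            seen.add(nxt)
            queue.append(nxt)
    return False

print(reachable(network, "A", "E"))                     # True: the intact network
print(reachable(network, "A", "E", broken={"B"}))       # True: C and the direct A-D link compensate
print(reachable(network, "A", "E", broken={"B", "C"}))  # True: the direct A-D link still works

no_direct_link = {**network, "A": {"B", "C"}}           # the direct A-D link has failed
print(reachable(no_direct_link, "A", "E"))              # True: B and C still relay the signal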
Conclusion
Genetic redundancy is an amazing property of genomes and has only recently become evident as a result of negative
knockout experiments. Protein-coding genes and highly conserved regions can be eliminated from the genome of model
organisms without a detectable effect on fitness. There is no association between redundant genes and gene duplications,
and redundant genes do not mutate faster than essential genes. Genetic redundancy stands as an unequivocal challenge to
the standard evolutionary paradigm, as it questions the importance of Darwins selection mechanism as a major force in the
evolution of genes. It is also important to realize that redundant genes cannot have resided in the genome for millions of
years, because natural selection, a conservative force, cannot prevent their destruction due to debilitating mutations.
Mainstream biologists who are educated in the Darwinian framework are unable to understand the existence of genes without natural selection. This is clear from a statement in Nature a few years ago by Mario Capecchi, a pioneer in the development of knockout technology: 'I don't believe that there is a single [knockout] mouse that does not have a phenotype. We just aren't asking the right questions.'15 The right question to be asked is: is the evolutionary paradigm wrong? My answer is yes, it is. Current naturalistic theories do not explain what scientists observe in the genomes. Genetic redundancy
is the actual key to help us understand the robustness of organisms and also their built-in flexibility to rapidly adapt to
different environments. In part 2 of this series of articles, I will explain genetic redundancy in the context of baranomes, the
multipurpose genomes baramins were originally designed with in order to rapidly spread to all the corners and crevices of
the earth.
Evidence for the design of life: part 2 - Baranomes
by Peer Terborg
The major difference between the evolution and creation paradigms is that the evolutionist believes that the natural variation
found in populations can explain microbe-to-man evolution via natural selection (Darwinism), while the creationist believes it
cannot. This is because the evolutionary, naturalistic framework requires something creationists hold impossible: a
continuous addition of novel genetic information unrelated to that already existing. In the creation paradigm neither variation
nor selection is denied; what is rejected is that the two add up to explain the origin of species. In part 1, I discussed genetic
redundancy and how redundant genes are not associated with genetic duplications and do not mutate faster than essential
genes. These observations are sufficient to completely overturn the current evolutionary paradigm and could form the basis
for a novel creationist framework to help us understand genomes, variation and speciation. In this second part, I argue and provide biological evidence that life on Earth thrived due to frontloaded baranomes: pluripotent, undifferentiated genomes with an intrinsic ability for rapid adaptation and speciation.
Where redundancy leads
The canonical view is that most variation in organisms is the result of different versions of genes (alleles) and genetic losses. The variation Mendel studied in peas, and that led him to discover several basic inheritance laws, was the result of different alleles. At least, so it is taught. One of the seven traits Mendel described in peas was what he called the I locus; it referred to the colour of the seeds. In Mendel's jargon, I stood for dominance (yellow), whereas i meant recessive (green). Plants carrying I had yellow seeds; plants lacking I had green seeds. Mendel shed scientific light on inheritance.

Now, 140 years after Mendel's findings, we know how the yellow-green system works at the molecular level. The colour is determined by the stay-green gene (abbreviated STG) that codes for a protein involved in the re-absorption of green pigments during senescence.1 The recessive trait i is the mutated form of the STG gene: an inactive variant that cannot re-absorb pigments, so the seeds keep their green colour.

Is the STG gene essential for survival? Most likely it is not. Molecular biology shows that Mendel studied the effects of non-essential and redundant genes. Dominance means at least one redundant or non-essential gene is functional; recessive means both copies of redundant and non-essential genes are defunct. In Bacillus subtilis only 270 of the 4,100 genes are essential,2 and in Escherichia coli this is a meagre 303 out of a total of almost 4,300 genes.3 Genetic redundancy is present everywhere,4 and this leads me to believe that biology is quite different from what Darwinians think it is. Namely, organisms are full of genetic tools that are handy but not essential for survival, and selection cannot be involved in shaping these genes. Apparently, genomes are loaded with genetic elements that reside in the genome without selective constraints. This makes sense in the creation paradigm, because the genomes we observe today are remnants of the original genomes in the created kinds. And, apparently, they were created as pluripotent,5 undifferentiated genomes with an intrinsic ability for rapid adaptation and speciation. I have called the undifferentiated, uncommitted, multipurpose genome of a created kind a baranome.6 Baranomes explain genetic redundancy: there is no association with gene duplication, and redundant genes do not mutate faster than essential genes.4
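Mendel's yellow-versus-green result can be reproduced with a few lines of Python. In the toy below, 'I' stands for a functional copy of the stay-green (STG) gene and 'i' for the defunct copy; seeds are scored yellow as long as at least one functional copy is present, which is exactly the 'dominance as compensation' reading given above. The allele symbols and population size are illustrative only.

import random
from collections import Counter

random.seed(0)

def offspring(parent1, parent2):
    """Each parent passes on one randomly chosen allele (Mendelian segregation)."""
    return random.choice(parent1) + random.choice(parent2)

# Cross two heterozygous (Ii) plants many times and score seed colour.
genotypes = [offspring("Ii", "Ii") for _ in range(100_000)]
phenotypes = Counter("yellow" if "I" in g else "green" for g in genotypes)

print(phenotypes)  # roughly a 3:1 yellow-to-green ratio, as Mendel observed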

The lack of understanding of baranomes recently led to a severe misinterpretation of the origin of genes in the secular literature. Eager to find evidence for the evolution of novel biological information, researchers reported a novel de novo protein-coding gene in Saccharomyces cerevisiae on the basis of genome comparison among several species of Saccharomyces. The BSC4 gene had an open reading frame (ORF) encoding a 132-amino-acid-long polypeptide. It was reported that there is no homologous ORF in any of the sequenced genomes of other fungal species, including closely related species such as S. paradoxus and S. mikatae. The sequences presented in the figure above demonstrate, however, that the BSC4 gene can be found interrupted and inactivated in S. paradoxus, S. mikatae and S. bayanus. These data confirm the baranome hypothesis, which holds that all Saccharomyces species descended from one original undifferentiated genome (Saccharomyces bn) containing all the information currently found in the isolated species. This alleged novel gene is in fact ancient frontloaded information that became redundant and inactive in most Saccharomyces species but was subject to sufficient constraints to be retained in S. cerevisiae. BSC4 codes for a protein involved in DNA repair, an elaborate and integrated mechanism involving dozens of redundant systems. Therefore, it is predicted that BSC4 knockouts of S. cerevisiae will not show major problems. The top part of the figure shows the alignment of 320 base pairs of the orthologous sequences of BSC4 from Saccharomyces bayanus (S.bay), S. mikatae (S.mik), S. paradoxus (S.par) and S. cerevisiae (S.cer). The conserved nucleotides are shown in bold. (Adapted from Cai and Zhao et al.34) The bottom of the figure shows how only S. cerevisiae retained an active BSC4 gene.
The multiple genomes of Arabidopsis
In 2007, Science reported on the genome of Arabidopsis thaliana, a flowering plant of the mustard family with a small genome that is suitable for extensive genetic studies.7 This report was of particular interest because it showed the genomes of 19 individual plants collected from 19 different stands, ranging from sub-arctic regions to the tropics. According to a commentary summarizing the results of this painstaking analysis, about four percent of the reference genome either looks very different in the wild varieties, or cannot be found at all. Almost every tenth gene was so defective that it could not fulfill its normal function anymore!

Results such as these raise fundamental questions. For one, they qualify the value of the model genomes sequenced so far. 'There isn't such a thing as the genome of a species', says Weigel. He adds: 'The insight that the DNA sequence of a single individual is by far not sufficient to understand the genetic potential of a species also fuels current efforts in human genetics.' Still, it is surprising that Arabidopsis has such a plastic genome. In contrast to the genome of humans or many crop plants such as corn, that of Arabidopsis is very much streamlined, and its size is less than a twentieth of that of humans or corn, even though it has about the same number of genes. In contrast to these other genomes, there are few repeats or seemingly irrelevant filler sequences. 'That even in a minimal genome every tenth gene is dispensable has been a great surprise', admits Weigel [emphases added].8

Among the 19 stands of Arabidopsis we find dramatic genetic differences. We observe genetic losses as well as genetic novelties. Although the dispensability of genes is easy to understand with respect to genetic redundancy, the observed novelties are much harder to conceive of, unless we accept that the observed novelties are not novelties at all but genetic tools that have resided in the genome since the day Arabidopsis was created. The genetic novelties may simply reflect environmental constraints that have helped preserve these genetic tools. There is indeed no such thing as the genome of a species, because what we observe today are rearranged and adapted genomes that were all derived from an original genome that contained all the genetic tools we find scattered throughout the population. The great surprise is only a great surprise with respect to the Darwinian paradigm. With a pluripotent Arabidopsis genome in mind, the data are not surprising at all. They are in accord with what we might expect from the perspective of a rapid (re)population of the earth. Modern Arabidopsis genomes look as if they were derived from much larger genomes containing an excess of genetic elements, both coding and non-coding (repetitive) sequences, that can easily be lost, shuffled or duplicated. The dispensable genes outlined above can be understood as genetic redundancies originally present in the baranome that over time slowly but steadily fell apart in the 19 individual stands, because the environment did not select for them. The study strongly suggests that isolated stands of plants originated as a result of loss of genetic redundancies, and duplication and rearrangement of genetic elements. The dispensability of 10% of the genes of Arabidopsis could have been predicted, because most of the genes still present in individual genomes are redundant.9 In my opinion, these observations strongly favour the baranome hypothesis.
The law of natural preservation
Genetic redundancy, dispensable genes and disintegrating genomes are scientific novelties revealed to us by modern biology. How can we understand all this? Darwinians hypothesize that life evolved from simple unicellular microbes to multicellular organisms via a gradual build-up of biological information, the driving force supposedly being natural selection. According to the biological data, however, there is no gradual accumulation of information; biology originated with a big bang. Sponges, worms, plants and man all have approximately the same genetic content, so the number of genes does not seem to be related to the complexity of organisms.10 In addition, the complex organisms we observe today were not derived from a single or a few simple organisms, but must have derived from a global community of organisms.11 The observations of modern biology pose so many untenable hurdles for naturalistic philosophy that it would be better to simply leave Darwinism for what it is: a set of falsified 19th-century hypotheses that do not and cannot explain the origin of species.

The way to understand variation and speciation is through disintegration and rearrangement of primordial baranomes created with an excess of genetic elements. Baranomes initially contained all the mechanisms required to quickly respond and adapt to changing environments. They provided organisms with the tools needed to invade many distinct niches, and were ideal for the swift colonization of all corners of the world. Baranomes were multifunctional genomes which can be compared to a Swiss army knife. A Swiss army knife contains many tools which are not immediately necessary in a particular environment; some of them are extremely handy in the mountains, others in the woods, and still others are made for opening bottles and cans or for making a fire. Depending on where you are, you may require different sets of tools. Similarly, depending on where the organism lives, it demands different functions (i.e. protein-coding genes and their protein products) from its genome. The environment then determines what part of the non-essential genome is under constraint, and it is only this part that will be conserved. In other words, the law of natural preservation, conventionally called natural selection, determines the differentiation of the pluripotent genome.12

Figure 1. Phylogeny of modern Flaveria species demonstrates independent losses of the C3 and C4 photosystems from the baranome of Yellowtops species (Flaveria bn). Some have either the C3 or the C4 photosystem, others have both C3 and C4 (or parts thereof). Isolated species are in the process of losing redundant parts of the Flaveria baranome. (Adapted from Kutschera and Niklas14.)
C3 and C4 plants
From the creation paradigm, we might expect to find more than one carbon-fixation system: for example, a system that functions optimally in warm, tropical regions, as well as systems that operate at sub-arctic temperatures. We find that plants do indeed have two photosystems for fixing carbon; they are known as C3 and C4. The optimum temperature for carbon fixation in C3 plants is between 15 and 20°C, whereas the C4 plants have an optimum around 30-40°C.13 Today many plants are either C3 plants or C4 plants, but we also find plants that have both C3 and C4. There is a clear indication of redundancy of the two photosystems, because many plants have only one of the two systems operable, either C3 or C4, plus remnants of the other system. For instance, in modern Yellowtops (Flaveria spp.) we not only see functional C3, C4 or the combination of C3 plus C4 photosystems, but we also observe C4 remnants in C3 plants.14 The presence of remnants of one of the systems qualifies as evidence for a baranome containing both photosystems, and indicates that the C4 system is not stringently preserved when the C3 system is also present (figure 1). The two frontloaded photosystems ensure a rapid colonization of both high and low altitudes, and hot and cold environments. In the tropics, the C4 system, which functions optimally at high temperatures, should be active, whereas the C3 system is redundant. Here, the 'hot' system would be under permanent environmental constraint and be conserved, while the genetic elements comprising the 'cold' system would rapidly disintegrate due to the accumulation of debilitating mutations. A genetic program designed for tropical regions does not make sense in arctic regions, and vice versa. It is the organism's environment or habitat that determines whether genetic elements are useful or not. Conforming to the baranome hypothesis, the habitat determines genetic redundancy. There is no biological reason why unused, habitat-induced redundancies should be preserved. The law of natural preservation tells us that unused genes will rapidly degrade.

What baranomes contain


Baranomes are information carriers. They were frontloaded with three classes of DNA elements: essential, non-essential and redundant. When essential elements mutate so as to change the amino acid sequences, the information carrier as a whole is immediately subject to a severe reproductive disadvantage. In the worst case the mutation is incompatible with life, and mutated essential DNA elements will not be present in the gene pool of the next generation. Essential DNA elements can be defined as biological information that is unable to evolve. Non-essential genes are genes that are allowed to mutate and may thus contribute to allelic variation. As they produce non-lethal phenotypes, they contribute to the variation observed in populations. Classic Mendelian genetics is largely due to variation in non-essential genes. Variation in non-essential genes is what geneticists call alleles. Recessive Mendelian traits can usually be attributed to dysfunctional non-essential genetic elements, in particular elements that determine the expression of morphogenesis programs, including those that determine length and shape (the morphometry) of the organism. To induce the recessive trait the disrupted (or inactivated) alleles must be inherited from both parents, because an active wild-type gene usually compensates for an inactivated gene. In Mendel's jargon, this compensation is known as dominance.

The third class of frontloaded genetic elements consists of correct, fully functional genes that underlie genetic redundancy. They make up a special class of non-essential genes and have only recently been discovered. That is because their existence cannot be deduced from genetic experiments: they do not contribute to a detectable phenotype. Their peculiarity is that redundant genes may be completely lost from the genome without any effect on reproductive success. That redundant genetic elements make up a major part of the genome of all organisms became evident when biologists interested in gene function developed gene-knockout strategies, with the remarkable observation that many knockouts do not have a phenotype.4 Genetic redundancy is an intrinsic property of pluripotent baranomes. It should be noted, however, that the environment also plays a crucial role in determining whether a genetic algorithm is redundant, non-essential or essential. The pathway for vitamin C synthesis, for instance, is a diet-induced genetic redundancy which is inactive in humans, four primates, guinea pigs and fruit-eating bats as a result of two debilitating mutations.15 The law of natural preservation often dictates the course of the development of baranomes. In addition, baranomes initially contained variation-inducing genetic elements (VIGEs) that helped to induce rapid duplications and rearrangements of genetic information. The modern genomes of all organisms are virtually littered with VIGEs (which are usually referred to as remnants of retroviruses: LINEs, SINEs, Alus, transposons, insertion sequences, etc.) and, due to their ability to duplicate and move genetic material, they facilitate and induce variation in genomes.16
Speciation from baranomes
Variation in reproducing populations is mostly due to position effects of VIGEs. That is because the presence of VIGEs in or near genes determines the activity of those genes and hence their expression.17 Variation is the result of a change in gene expression. In addition, VIGEs that function as chromosome swappers may also help us understand reproductive barriers. A reproductive barrier between organisms is in fact another term for speciation, the formation of novel species. Species is meant here in the sense of Ernst Mayr's species concept, which includes intrinsic reproductive isolation.18 Indeed, the gene-swapping mechanism present in primordial pluripotent genomes also allowed for intrinsic reproductive isolation. If we want to understand how chromosome-swapping VIGEs are involved in speciation, we first have to look into some details of sexual reproduction.

In all cells of sexually reproducing organisms the chromosomes are present as homologous pairs. One is inherited from the father and the other from the mother. The arrangement of homologous chromosomes allows them to easily pair up: each parental chromosome recognizes its counterpart and they readily align. The alignment is necessary for the formation of gametes during meiosis, where the two sets of parental chromosomes are reduced to one set. Differences in chromosome pattern impede the pairing of chromosomes at meiosis, resulting in hybrid sterility. Chromosomal rearrangements may be one of the most common forms of reproductive isolation, allowing rapid adaptive radiation of multipurpose genomes without the need for geographic isolation or natural selection. The activity of chromosome-swapping VIGEs may thus have produced reproductive barriers and hence facilitated speciation.

If it is true that chromosomal order determines whether organisms are able to reproduce, speciation can theoretically be reversed by chromosomal adjustments. In other words, we should be able to produce viable offspring from two reproductively isolated species just by rearranging their chromosomes. This may sound like an untestable hypothesis, but experimental evidence demonstrates that it is indeed possible to 'unspeciate' distinct, reproductively isolated species by chromosomal adjustments. Using Mayr's species definition, yeasts of the genus Saccharomyces comprise six well-defined species, including the well-known baker's yeast.19 The Saccharomyces species will readily mate with one another, indicating that they stem from one single baranome (figure 2), but pairings between distinct species produce sterile hybrids. Three of the six species are characterized by a specific genome rearrangement known as reciprocal chromosomal translocation. Reciprocal chromosome translocation occurs when the arms of two distinct chromosomes are exchanged. Analysis of the six species revealed that translocations between the chromosomes do not correlate with the group's sequence-based phylogeny, a finding that has been interpreted to mean that translocations do not drive the process of speciation. However, a study carried out by the Institute of Food Research in Norwich, United Kingdom, showed that the chromosomal rearrangements in Saccharomyces do indeed induce reproductive
isolation between these organisms.19 The reported experiments were designed to engineer the genome of Saccharomyces cerevisiae (baker's yeast) so as to make it collinear with that of Saccharomyces mikatae, which normally differs from baker's yeast by one or two translocations. The results showed that the constructed strains with imposed genomic collinearity allowed the generation of hybrids that produced a large proportion of viable spores. Viable spores were also obtained in crosses between wild-type baker's yeast and the naturally collinear species Saccharomyces paradoxus, but not in crosses between species with non-collinear chromosomes.
Figure 2. Left panel: Adaptive radiation from one single pluripotent baranome. The figure shows a hypothetical model for the radiation of the Saccharomyces bn into the six Saccharomyces species we observe today. Initially, the uncommitted pluripotent baranome radiated in all possible directions. Due to intrinsic mechanisms, variation was constantly generated, but this slowed down over time because of the redundant character of the variation-inducing genetic elements (which were easily lost). Speciation may occur when a reproductive barrier is thrown up, for instance as the result of chromosomal rearrangements. The genetic elements that facilitate variation are specified in the genome, and there is no need for the millions of years that are required for Darwinian evolution. This is clear from the long-running (20 years) evolutionary experiments which show that the major adaptive changes occurred during the first 2 years.35 Right panel: Hypothetical time courses for the total amount of information in a baranome (black line) and the number of species derived from that baranome (red line). Over time there is a tendency to lose biological information with an increase in the number of species.

This is empirical proof that a reproductive barrier between species can be reversed just by reconfiguration of their chromosomes. In addition to reciprocal chromosomal translocations, many small-scale genomic rearrangements, involving the amplification and transposition of VIGEs, may cause reproductive isolation. VIGEs are thus basic to understanding variation and speciation of baranomes. Modern biology demonstrates that although the six species of Saccharomyces yeasts are all derived from one single baranome, their individual karyotypes20 determine whether they can interbreed and leave offspring.

That the karyotype is an important determinant of reproductive isolation is also observed in deer. Eight species of Asian deer of the genus Muntiacus inhabit an area spreading from the high mountains of the Himalayas to the lowland forests of Laos and Cambodia. Their karyotypes differ dramatically; the chromosome number varies from a low of only three pairs to a high of 23.21 The muntjac species demonstrate that hybrids between individuals that differ substantially by chromosomal reorganization of otherwise identical genetic material will invariably be sterile. The sterility of muntjac hybrids is exclusively due to the inability of the chromosomes to pair. The chromosomes of distinct species simply cannot form pairs, and formation of viable reproductive cells is impossible. The karyotype accounts for reproductive isolation, and the baranome hypothesis leaves room for speciation events through adaptive radiation.
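The role the text gives to karyotype collinearity can be caricatured in a few lines of code. In the deliberately crude Python sketch below, a chromosome is just a tuple of arm labels, a reciprocal translocation swaps arms between two chromosomes, and a hybrid is scored fertile only if every chromosome of one parent finds an identically built partner in the other. None of the names model real yeast or deer genetics; the point is only to make the pairing argument explicit.

def can_pair(karyotype_a, karyotype_b):
    """Hybrid fertility in this toy requires arm-for-arm identical chromosome sets."""
    return sorted(karyotype_a) == sorted(karyotype_b)

wild_type    = [("1L", "1R"), ("2L", "2R"), ("3L", "3R")]
translocated = [("1L", "2R"), ("2L", "1R"), ("3L", "3R")]  # arms of chromosomes 1 and 2 swapped

print(can_pair(wild_type, wild_type))     # True: chromosomes pair, hybrids leave offspring
print(can_pair(wild_type, translocated))  # False: pairing fails, the hybrid is sterile

# 'Unspeciating' in this toy amounts to engineering the translocation back:
restored = [("1L", "1R"), ("2L", "2R"), ("3L", "3R")]
print(can_pair(wild_type, restored))      # True again once collinearity is restored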
Identification of baranomes
How do we identify whether organisms descended from one primordial multipurpose genome? Darwinians claim a continuum between the genomes of distinct species and view all modern species as transition stages, so this question is of no particular interest to them. For micro-organisms this may well be true. Between bacteria, the exchange of biological information is common, and for this purpose they possess elaborate mechanisms to facilitate the uptake of foreign DNA from the environment. Still, over 5,000 distinct bacteria have been scientifically described, indicating distinctive borders between bacterial species.22 Likewise, the biological facts show that in higher organisms there are distinctive borders between genomes, borders determined by reproductive barriers. For instance, humans and chimps both have comparable genomic content, but very distinctive karyotypes, so the species cannot reproduce with each other. Therefore, the question raised above is not easy to answer. Because genomes tend to continuously lose unused genetic information over time, genomic content may not be suitable for identifying common descent from the same primordial baranome.

A first indication that two distinct species have descended from the same baranome is their ability to mate. The offspring does not have to be fertile; neither does it have to be viable at birth. Zygote formation is a significant indication that the organisms were derived from the same baranome. The best tool for baranome identification currently available, however, may be indicator genes. Indicator genes are essential genes with a highly specific marker. In the human baranome (Homo bn) we indeed observe indicator genes, such as FOXP223 and HAR1F.24 Both genes are also present in primates, but in humans they have highly specific characteristics not found in primates, indicating that human genomes stem from a distinct baranome. Specific characteristics typify humans. A comparative analysis of indicator genes in primates is sufficient to discriminate between the human and chimpanzee (Pan bn) baranomes, or to decide whether ancient bones belong to the human baranome. Recent research shows that indicator genes may indeed be a promising tool for baranome detection. Analyses of ancient Neandertal DNA revealed typical human FOXP2 characteristics.25 This observation is compelling evidence that both modern humans and Neandertals originate from one and the same baranome. Further research is required to develop a full range of baranome indicator genes for other organisms.
Darwin revisited
From the baranome hypothesis we can begin to understand how Africa's Rift Valley lakes became populated with hundreds of species of cichlids within a mere few thousand years. We can also understand the origin of dozens of (sub)species of woodpeckers, crows, finches, ducks and deer. And we begin to see how wings could develop many times over in stick insects.26 We also understand why two distinct sex systems operate in Japanese wrinkled frogs (Rana rugosa),27 and why Dictyostelium has genetic programs for both sexual and asexual reproduction.28 And we begin to see why ancient trilobites radiated so rapidly.29 The required genetic programs (Dictyostelium's sexual reproduction program comprises over 2,000 genes!) did not have to evolve step-by-step under the guidance of natural selection. Rather, these programs were a dormant, frontloaded part of the baranome and only required a wake-up call. If Darwin had had the knowledge of 21st-century biology, I believe his primary conclusion would be similar to what I propose: limited common descent through adaptive radiation from pluripotent and undifferentiated baranomes. The limits to common descent are determined by the elements that have been frontloaded into that baranome. Natural varieties of sexually reproducing organisms can be established by means of differential reproductive success, but reversal to wild types follows as soon as selective constraints are relieved and hybridization between previously isolated populations occurs. Hybridization is, in fact, nothing but reversal to a more original multipurpose genome; the wild type is the more stable (robust) form of the baranome because it contains more redundancies. Reversal to the wild type was a known principle in Darwin's day, but Darwin dismissed the obvious and invented his own naturalistic biology: 'To admit this view [of unstable created species] is, as it seems to me, to reject a real for an unreal, or at least for an unknown, cause.' Darwin rejected the baranome for theological, or at best for philosophical, reasons. Why would a flexible, highly adaptable, pluripotent genome present in primordial creatures make the work of the designer mere mockery and deception? A pluripotent genome with an intrinsic propensity to rapidly respond to changing situations elegantly explains the co-adaptations of organic beings to each other and to their physical conditions. Organisms that cannot adapt or, in other words, organisms that lose their evolvability are bound to become extinct. The baranome hypothesis with frontloaded VIGEs is sufficient to explain what Darwin observed, and there is no need to invoke a gradual, selection-mediated evolution from microbe to man, which is non-existent and not in accord with scientific observations anyway. In every generation, VIGE activity generates novel genetic contexts for pre-existing information and hence gives rise to novel variation. VIGEs were an intrinsic property of baranomes, and they are the source of variation and adaptive radiation. It must be emphasized that, because all the elements that induce variation are already in the genome, there is no need for the millions of years required for Darwinian evolution.
Conclusion and perspective

The findings of modern biology show that life is quite different from that predicted by the evolutionary paradigm. Although the evolutionary paradigm assumes an increase of genetic information over time, the scientific data show that an excess of biological information is present even in the simplest life forms, and that we instead observe genetic losses. A straightforward conclusion therefore should be that life on Earth thrived due to frontloaded baranomes: pluripotent, undifferentiated genomes with an intrinsic ability for rapid adaptation and speciation. Baranomes are genomes that contained an excess of genes and variation-inducing genetic elements, and the law of natural preservation shaped individual populations of genomes according to what part of the baranome was used in a particular environment.

With so many genomes sequenced and an ever increasing knowledge of molecular biology, we will find more and more evidence to support the baranome hypothesis. We will increasingly recognize traces and hints of frontloaded information still present in the genomes of modern species. We may expect that the genomes of the descendants of the same multipurpose genome independently lost redundant genetic elements. We may expect to find impoverished genomes, and also reproductively isolated populations at different latitudes that are highly distinct with respect to their genomic content. We may even be able to piece together the genomic content of the original multipurpose genome of these species simply by adding up all the unique genetic elements present in the entire population. Finally, it will be possible to detect indicator genes, such as FOXP2, which may become genetic tools for establishing the borders between distinct baranomes. Frontloaded baranomes are an important tool to help us understand biology. I believe there is grandeur in this view of life, where the Great Omnipotent Designer chose to breathe life into a limited number of undifferentiated, uncommitted, pluripotent baranomes; and from these baranomes all of the earth was covered with an almost endless variety of the most beautiful and wonderful creatures.31
Karyotype rearrangements
In 1970, Neil Todd developed the karyotypic fission hypothesis (KFH)32 to correlate the physical appearance of chromosomes with the evolutionary history of mammals. Todd postulated wholesale fission of all medio-centric chromosomes. Todd's fast-track, single-event genome rearrangement is still the most parsimonious theory to account for mammalian karyotypes and potentially explains rapid speciation events. Todd's hypothesis was rejected mainly because it postulated something opposing the dominant Darwinian paradigm.

In 1999, Robin Kolnicki revived Todd's KFH. Although her kinetochore reproduction
hypothesis33 was largely theoretical, each step had a known cellular or molecular mechanism. During DNA replication, just
before meiotic synapsis and sister chromatid segregation, the formation of an extra kinetochore on all chromosomes is
facilitated. The kinetochore is the organizing centre that holds the sister chromatids together during meiosis and is
composed mainly of repetitive DNA sequences. The freshly added kinetochores do not disrupt the distribution of
chromosomes to daughter cells during meiosis because tension-sensitive checkpoints operate to prevent errors in
chromosome segregation. The result is a new cell with twice the number of telocentric chromosomes.10 The duplication of the kinetochores on many chromosomes at the same time is highly unlikely in a naturalistic model, but the telocentric chromosomes of rhinoceros, rock wallaby and many other species are physical evidence that their genomes were formed instantly.

I postulate that the genomes, as we observe them today, are the result of thousands of years of rearrangements
(fission, fusion and duplications) brought about by specific variation-inducing genetic elements (VIGEs). Initially, well-controlled rearrangements may have been facilitated by these elements, but over time the control over regulated genome
rearrangement deteriorated. VIGEs may be the genetic basis to help us understand wholesale genomic rearrangements from
pluripotent baranomes.

In order to rapidly occupy novel niches, a mechanism or ability to create reproductive barriers may
have been intrinsic to baranomes. The ability to adapt, including speciation events, is merely due to neutral rearrangements
of chromosomes, and the VIGEs involved may easily become inactive because of the permanent accumulation of debilitating
mutations. The remnants of VIGEs can still be found in contemporary genomes; they are known as (retro)(trans)posons,
LINEs, SINEs, Alu insertion sequences, etc. Some VIGEs may have started a life of their own and now jump around more or
less uncontrolled.
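As a purely illustrative footnote to the fission scenario above (not taken from Todd's or Kolnicki's papers), the chromosome-count arithmetic can be sketched in a few lines of Python: every metacentric chromosome that acquires a second kinetochore splits into two telocentric chromosomes, so the whole karyotype can change in a single generation.

    # Toy model of karyotypic fission (illustrative only, hypothetical karyotype).
    def fission(karyotype):
        """Split every metacentric chromosome into two telocentric ones."""
        result = []
        for chromosome in karyotype:
            if chromosome == 'metacentric':
                result.extend(['telocentric', 'telocentric'])
            else:
                result.append(chromosome)
        return result

    before = ['metacentric'] * 20 + ['telocentric'] * 4   # hypothetical 2n = 24
    after = fission(before)
    print(len(before), len(after))   # 24 -> 44: the count jumps in one event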

The design of life, part 3: an introduction to variation-inducing genetic elements


by Peer Terborg
The inheritance of traits is determined by genes: long stretches of DNA that are passed down from generation to generation.
Usually, genes consist of a coding part and a non-coding regulatory part. The coding part of the gene determines the
functional output, whereas the non-coding portion contains switches and units that determine when, where and how much of
the functional output should be generated. Point-mutations in the coding part are predominantly neutral or slightly
detrimental genetic noise that accumulates in the genome, whereas point-mutations in the regulatory part of DNA units can
induce variation with respect to the amount of output. Previously, in part 2, I argued that created kinds were frontloaded with
baranomes: that is, pluripotent genomes with an ability to induce variation from within. The output of (morpho)genetic
algorithms present in the baranome can readily be modulated by variation-inducing genetic elements (VIGEs). VIGEs are
frontloaded genetic elements normally referred to as endogenous retroviruses, insertion sequences, LINEs, SINEs, microsatellites, transposons, and the like. In the present report, these transposable and repetitive DNA
sequences are redefined as VIGEs, which solves the RNA virus paradox.
The (morpho)genetic algorithms were designed in such a way that VIGEs easily integrated into them and became a part of them, hence making the programs explicit.
The variation that Darwin saw in pigeons can be explained with the activation or deactivation of existing genetic sequences for feather production in different parts of the body. This gives no basis for asserting that pigeons could change into something which is not a pigeon.

In order to fight off invading bugs and parasites, higher organisms have an elaborate mechanism that induces variation in immunological defence systems. One particular type of immune cell (B cells) produces defence proteins known as immunoglobulins. Immunoglobulins are very sticky; they bind to intruders as biological tags and mark them as alien. Other cells of the immune system then recognize the intruder, and a destruction cascade is activated. To have a tag available for every possible alien intruder, millions of B cells have their own highly specific gene for immunoglobulin production. In the genome there is only limited storage space for biological information, so how can there be millions of genes? Well, there aren't.

Immunoglobulin genes are assembled from several pre-existing DNA sequences that can be independently put together.
The part of the immunoglobulin that does the alien recognition contains several domains which are each highly variable.
Every single B cell forms a unique immunoglobulin gene by picking from several short pre-existing DNA sequences. We also
observe that later generations of immunoglobulins are more specific than the earlier generations, in the sense that they bind
more tightly to invading microorganisms. Binding affinity to an invader is equivalent to recognition of that invader. And the
better the immune system is able to recognize an intruder, the better it is able to clear it. The increased specificity is due to
somatic mutations deliberately introduced in the genes of the immunoglobulins. A mechanism to rapidly induce mutations in
immunoglobulin genes is present in the B cell genome. This mechanism ensures that the recognition pattern specified by
the genes becomes increasingly specific for the intruder. This ability to recognize and defeat all potential microorganisms is
characteristic of the immune systems of higher organisms, including humans. The genomes contain all the necessary
biological information required to induce variation from within. A flexible genome is required to effectively ward off diseases
and parasitic infections. B cells don't wait for mutations to happen; they generate the necessary mutations themselves.
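A short, hedged sketch may help make the combinatorial point concrete. The segment counts below are rough, commonly cited figures for the human heavy-chain and kappa light-chain loci (they are not taken from this article), and junctional diversity and somatic hypermutation are ignored; the sketch simply shows that a modest library of pre-existing segments yields an enormous repertoire.

    # Illustrative combinatorial count for antibody diversity (rough figures only).
    heavy_V, heavy_D, heavy_J = 40, 25, 6      # approximate functional segment counts
    light_V, light_J = 40, 5                   # kappa light chain, approximate

    heavy_combinations = heavy_V * heavy_D * heavy_J      # 6,000 heavy-chain genes
    light_combinations = light_V * light_J                # 200 light-chain genes
    paired_antibodies = heavy_combinations * light_combinations

    print(heavy_combinations, light_combinations, paired_antibodies)
    # 6000 200 1200000 -- over a million distinct binding sites from a few
    # hundred pre-existing segments, before somatic mutation adds further variation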
Darwin revisited
Previously, in part 2,1 I argued that organisms are equipped with flexible, highly adaptable, pluripotent, multipurpose
genomes. Organisms are able to conquer the world through adaptive radiation of baranomes. But how do baranomes
unleash information? Do organisms have to wait for selectable mutations to occur in order to rapidly invade and occupy
novel ecological niches? Or were the baranomes of created kinds equipped with mechanisms to rapidly induce mutations,
similar to the variation generated by B cells? Let's turn to Darwin's The Origin of Species, where we will find some clues.
Darwin wrote quite extensively on variation, and in particular on the variation of feather patterns in pigeons:
Box 1. Common names of some well-known variation-inducing genetic elements (VIGEs) in prokaryotes (bacteria) and eukaryotes (yeast, plants, insects and mammals).

Some facts in regard to the colouring of pigeons well deserve consideration. The rock-pigeon is of a slaty-blue, and has a white rump (the Indian sub-species, C. intermedia of Strickland, having it bluish); the tail has a terminal dark bar, with the bases of the outer feathers externally edged with white; the wings have two black bars; some semi-domestic breeds and some apparently truly wild breeds have, besides the two black bars, the wings chequered with black. These several marks do not occur together in any other species of the whole family. Now, in
every one of the domestic breeds, taking thoroughly well-bred birds, all the above marks, even to the white edging of the
outer tail-feathers, sometimes concur perfectly developed. Moreover, when two birds belonging to two distinct breeds are
crossed, neither of which is blue or has any of the above specified marks, the mongrel offspring are very apt suddenly to
acquire these characters; for instance, I crossed some uniformly white fantails with some uniformly black barbs, and they
produced mottled brown and black birds; these I again crossed together, and one grandchild of the pure white fantail and
pure black barb was of as beautiful a blue colour, with the white rump, double black wing-bar, and barred and white-edged
tail-feathers, as any wild rock pigeon! We can understand these facts, on the well-known principle of reversion to the
ancestral characters, if all the domestic breeds have descended from the rock-pigeon.2

Darwin argues, and correctly so, that all domestic pigeon breeds have descended from the rock-pigeon. He even knew, as demonstrated above, how to breed the rock-pigeon from several distinct pigeon races following a breeding pattern. Darwin describes a breeding algorithm for pigeons, to obtain the ancestor of all pigeons! But does he also describe an algorithm for breeding turkeys from
pigeons? No. Darwin knew of no such algorithm. If he had found an algorithm for breeding ducks or magpies from
pigeon genomes, he would have had solid evidence in favour of his proposal On The Origin of Species Through the
Preservation of Favoured Races. His breeding experiments led him to discover the principle of reversion to ancestral
characters, but contrary to common Darwinian wisdom, it is also the falsifying observation to his proposal for the origin of
species. The observation that pigeons bring forth pigeons, and nothing else but pigeons, is not exactly the evidence needed
to argue for the common descent of all birds. On the contrary! Darwins breeding experiments demonstrated that a pigeon is
a pigeon is a pigeon. Characteristics and traits within single species of pigeons may vary tremendously, but he always
started and ended with pigeons. Breeding experiments have always shown, without exception, that novel and distinct bird
species do not arise through artificial selection. Even Darwin argues that there is no doubt that all varieties of ducks and
rabbits have descended from the common wild duck and rabbit.3 From the variation Darwin observed in wild and
domesticated populations, it does not follow that rabbits and ducks have some hypothetical common ancestor in a fuzzy
distant past. Darwin observed inborn, innate variation that already existed in the genomes of the pigeons and it only had to
be activated or expressed.

From the excerpt above, we may even get an impression of how it works. A genetic algorithm for making feathers (a feather program) is part of the pigeon's genome and is present in every single cell. The feather program
is present in billions of pigeon cells, but it is NOT active in all those cells. Feathers are only formed when the program is
activated. The feather program is silent in cells where it should normally not operate. Activation of the feather program in the
wrong cells may often be incompatible with life, but sometimes it may produce pigeons with (reversed) feathers on the feet.
The program may be derepressed or activated through a mechanism that operates in the pigeon's genome. Whether
feathers appear on the feet or on the head, and whether they appear normal or reversed is merely a matter of activation and
regulation of the feather program. But Darwin didn't know about silent genomic programs or how they could become active. He didn't know about gene regulation and molecular switches. Darwin did not know anything about genes and genomes.
Analogous variation
The idea that Darwin had been working on for over two decades prior to the publication of Origin, his idée fixe, was how
organic change (i.e. variation) present in populations might explain how novel species came into being. Unchanging, stable
species is not what Darwin had in mind. He pondered the riddles of variation; he thought about laws and principles
associated with the process of variation and believed he could disclose them by the study of the formation of new breeds.
Drawing from what he knew about pigeon breeding and equine varieties, Darwin describes some of his ideas about the
laws of variation in chapter five of Origin:

Distinct species present analogous variations; and a variety of one species often
assumes some of the characters of an allied species, or reverts to some of the characters of an early progenitor. These
propositions will be most readily understood by looking to our domestic races. The most distinct breeds of pigeons, in
countries most widely apart, present sub-varieties with reversed feathers on the head and feathers on the feet, characters
not possessed by the aboriginal rock-pigeon; these then are analogous variations in two or more distinct races.4

Darwin describes that the exact same traits can appear in distinct breeds of pigeons and, importantly, these traits
appeared independently in countries most widely apart. If several breeds arrive with the same characteristics
independently, it is unlikely they do so because of chance. Rather, the pigeon genomes may activate or derepress the same
feather program independently. The effect is that distinct breeds in countries most widely apart acquire the same
characteristics. Over and over the same traits appear in separated populations of organisms as the result of mutations from
within. Animal breeders like exuberant patterns and rarities; that is exactly what they look for when selecting. Aberrant traits
that are normally under stringent negative selection, as might be the case for the pigeons' reversed feathers, may readily
become visible as soon as the selective pressure is relieved; that is, when organisms are reared and fed in the protective
environment of captivity. Darwin called the phenomenon of independent acquisition of the same traits analogous variation. It
is a common phenomenon well known to breeders, and Darwin easily found more examples of analogous variation:

The
frequent presence of fourteen or even sixteen tail-feathers in the pouter, may be considered as a variation representing the
normal structure of another race, the fantail. I presume that no one will doubt that all such analogous variations are due to
the several races of the pigeon having inherited from a common parent the same constitution and tendency to variation,
when acted on by similar unknown influences. In the vegetable kingdom we have a case of analogous variation, in the
enlarged stems, or roots as commonly called, of the Swedish turnip and Ruta baga [sic] plants which several botanists rank
as varieties produced by cultivation from a common parent: if this be not so, the case will then be one of analogous variation
in two so-called distinct species; and to these a third may be added, namely, the common turnip. According to the ordinary
view of each species having been independently created, we should have to attribute this similarity in the enlarged stems of
these three plants, not to the vera causa of community of descent, and a consequent tendency to vary in a like manner, but
to three separate yet closely related acts of creation.5

Analogous variation originates in the genome. Through rearrangement
and/or transposition of DNA elements, previously silent (cryptic) traits can be activated. The underlying molecular
mechanism can't be merely random; if it were, then Darwin, and other breeders, would not have observed the expression of the same traits independently of each other. A more contemporary translation of analogous variation would be non-random (or non-stochastic) variation, and it implies some sort of mechanism.
Reversions
In the excerpt above, Darwin also describes what he calls reversions. By this term he meant traits that are present in
ancestors, then disappear in first generation offspring, and then reappear in subsequent generations. Darwin acknowledged
that unknown laws of inheritance must exist, but still he talks about the proportion of blood. Reversions are easily explained
as traits present on separate chromosomes, and the inheritance of such traits is best understood from Gregor Mendels
inheritance laws. Through Mendels discovery of the genetic laws that underlie the inheritance of traits associated with
chromosome segregation (a hallmark of sexual reproduction), Mendel gave us a quantum theory of inheritance. He found
that traits are always inherited in well-defined and predictable proportions, and do not just come and go. Darwin's reversions are traits that reappear in later generations due to the inheritance of the same genes (alleles) from both parents.5 Darwin didn't know about Mendel's laws of inheritance, nor did he know how variation is generated in genomes. What Darwin described in Origin, however, is that variation in offspring is a rule of biology. What Darwin described
in isolated species (whether domesticated breeds or island-bound birds) was the result of a burst of abundant speciation
resulting from multipurpose genomes. Variant breeds of pigeons are the phenotypes of a rearranged multipurpose pigeon
genome. The Galápagos finches (with their distinct beaks and body sizes) are the phenotypes of a rearranged multipurpose finch genome. Where does the variation stem from in populations of Galápagos finches?

Darwin was well aware of the profound lack of knowledge on the origin of variation, and did not exclude mechanisms or laws to drive biological variation:

I
have hitherto sometimes spoken as if the variations so common and multiform in organic beings under domestication, and in
a lesser degree in those in a state of nature had been due to chance. This, of course, is a wholly incorrect expression, but it
serves to acknowledge plainly our ignorance of the cause of each particular variation.6

Since Darwin's days, almost all
corners of the living cell have been explored and our biological knowledge has expanded greatly. Through a vast library of
data generated by new research in biology, we now have the answers to many questions of a biological nature that had
puzzled Darwin. We may also have the answer to the cause of each particular variation, although we may not be aware of
it (yet). That is not because it is hidden among billions of other books and hard to find. No, it is because of the Darwinian paradigm. The mechanism(s) that drive biological variation have been elucidated but are not yet recognized as such.

One
of the findings of the new biology was that the DNA of most (if not all) organisms contains jumping genetic elements. The
mainstream opinion is that these elements are the remnants of ancient invasions of RNA viruses. RNA viruses are a class of
viruses that use RNA molecule(s) for information storage. Some of them, such as influenza and HIV, pose an increasing
threat to human health. Are virus invasions responsible for all the beautiful intricate complexity of organic beings? Is a virus
a creator? Most likely it is not. Otherwise, why would we pump billions of dollars into research to fight off viruses?
Could it be that mainstream science is mistaken?
The RNA virus paradox
Here is one good reason for believing that mainstream science is indeed mistaken: the RNA virus paradox. It has been
proposed that these RNA viruses have a long evolutionary history, appearing with, or perhaps before, the first cellular life
forms.7 Molecular genetic analyses have demonstrated that genomes, including those of humans and primates, are riddled
with endogenous retroviruses (ERVs), which are currently explained as the remnants of ancient RNA virus-invasions. RNA
virus origin can be estimated using homologous genes found in both ERVs and modern RNA virus families. By using the
best estimates for rates of evolutionary change (i.e. nucleotide substitution) and assuming an approximate molecular
clock,8,9 the families of RNA viruses found today could only have appeared very recently, probably not more than about
50,000 years ago.10 These data imply that present-day RNA viruses may have originated much more recently than our own
species. The implication of a recent origin of RNA viruses and the presence of genomic ERVs poses an apparent paradox
that has to be resolved. I will argue that, in order to resolve the paradox, we should abandon the mainstream idea that ERVs are remnants of ancient RNA virus invasions.

Solving the RNA virus paradox can only be accomplished by asking questions. First,
we have to ask ourselves, What do scientists mean when they refer to genetic elements as endogenous
retroviruses (ERVs)? In addition, we have to ask, How do ERVs behave, and what, if any, are their functions? ERVs have
been extensively studied in microorganisms, such as baker's yeast (Saccharomyces cerevisiae) and the common gut
bacterium Escherichia coli. Most of our knowledge on the mechanisms of transposition of ERVs comes from those two
organisms. In yeast, the ERV known as Ty is flanked by long terminal repeats and specifies two genes, gag and pol, which
are similar to genes found in free operating RNA viruses. This is the main argument why scientists believe RNA viruses and
ERVs are evolutionarily closely related. The long terminal repeats enable the ERV to insert into the host's DNA. The transposition and integration is a stringently regulated process and seems to be target- or site-specific.11,12 During the
transpositions of an ERV, the host's RNA polymerase II makes an RNA template, which is polyadenylated to become messenger RNA. The gag and pol mRNAs are translated and cleaved into several individual proteins. The gag gene
specifies a polyprotein that is cleaved into three proteins, which form a capsid-like structure surrounding the ERV's RNA. We may ask here: why is a capsid involved? It should be noted that single-stranded RNA molecules are very sticky nucleotide
polymers and the capsid may prevent the ERV from sticking at wrong places. The capsid may also be required to direct the
ERV to the right spots in the genome. The pol polyprotein is cleaved into four enzymes: protease, reverse transcriptase,
RNase and integrase. Protease cleaves the polyproteins into the individual proteins and then the RNA and proteins are
packed into a retrovirus-like particle. Reverse transcriptase forms a single-stranded DNA molecule from the ERV RNA
template, whereas RNase removes the RNA. The DNA is then circularized and the complementary DNA strand is
synthesized to create a double-stranded, circular copy of the ERV, which is then integrated into a new site in the host's genomic DNA by integrase activity. This intricate mechanism for transposition of ERVs seems to be irreducibly complex
(and thus a sign of intelligent design) since all ERVs and RNA viruses use the same or similar genetic components.
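For readers who want to see the shape of the molecular-clock reasoning behind the roughly 50,000-year figure cited above, here is a deliberately simplified sketch. The formula t = d / (2r) and the numbers used below are illustrative only and are not taken from the cited papers, whose analyses are considerably more sophisticated.

    # Back-of-the-envelope molecular clock (illustrative numbers only).
    def divergence_time(observed_difference, rate_per_site_per_year):
        """Estimate time to a common ancestor assuming a constant clock:
        two lineages accumulate differences at twice the per-lineage rate."""
        return observed_difference / (2.0 * rate_per_site_per_year)

    # Measured short-term substitution rates for RNA viruses are commonly quoted
    # around 1e-3 to 1e-4 per site per year; the rate below is a hypothetical,
    # far slower long-term rate, chosen only to make the arithmetic transparent.
    d = 0.5        # hypothetical fraction of sites differing between two lineages
    r = 5e-6       # hypothetical long-term substitution rate per site per year
    print(divergence_time(d, r))   # 50000.0 years

Using the much faster measured rates in place of the hypothetical one would push the inferred origin even closer to the present, which is the point the cited estimate turns on.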
Variation-inducing genetic elements (VIGEs)
What can the function, if any, of ERVs be? If we follow the mainstream opinion, ERVs integrated into the genomes a very
long time ago as viral infections. Currently, ERVs are not particularly helpful. They merely hop around in the genome as
selfish genetic elements that serve no function in particular. They are mainly upsetting the genome. Long ago, however,
RNA viruses are alleged to have significantly contributed to evolution by helping to shape the genome. It's hard to imagine this story being true, and not only because of the RNA virus paradox. Modern viruses usually do not integrate into the DNA of
the germ-line cells; that is, the genes of an RNA virus don't usually become a part of the heritable material of the infected
host. If we obey the uniformitarian principle, we are allowed to argue: What currently doesn't happen didn't happen a long
time ago, either. To answer the question raised above, we must start finding out more about some biological characteristics
of a less complicated jumping genetic element, the so-called insertion-sequence (IS) element. IS elements are DNA
transposons abundantly present in the genomes of bacteria. IS elements share an important characteristic with ERVs:
transposition. Genome shuffling takes place in bacteria so frequently that we can hardly speak of a specific gene order. The
shuffling of pre-existing genetic elements may unleash cryptic information instantly as the result of position effects. Shuffling
seems to be an important mechanism to generate variation. But what is the mechanism for genome shuffling? The answer
to this question comes unexpectedly from evolutionary experiments, in which genetic diversity (evolutionary change) was
determined between reproducing populations of E. coli. During the breeding experiment, which ran for two decades, it was
observed that the number and location of IS (insertion sequence) elements dramatically changed in evolving populations,
whereas point mutations were not abundant.13 After 10,000 generations of bacteria, the genomic changes were mostly due to
duplication and transposition of IS elements. A straightforward conclusion would thus be that jumping genetic elements,
such as the IS elements, were designed to deliberately generate variation, variation that might be useful to the organism. In
2004, Lenski, one of the co-authors of the studies, demonstrated that the IS elements indeed generate fitness-increasing
mutations.14 In E. coli bacteria, IS elements activate cryptic, or silent, catabolic operons: a set of genetic programs for food
digestion. It has been reported that IS element transposition overcomes reproductive stress situations by activating cryptic
operons, so that the organism can switch to another source of food. IS elements do so in a regulated manner, transposing at
a higher rate in starving cells than in growing cells. In at least one case, IS elements activated a cryptic operon during
starvation only if the substrate for that operon was present in the environment.15 It is clear that in Lenski's experiments, IS
elements did not evolve overnight. Rather, the IS elements were already resident in the genome of the original strain. During the two
decades of breeding, the IS elements duplicated and jumped from location to location. There was ample opportunity to
shuffle genes and regulatory sequences, and plenty of time for the IS elements to integrate into genes or to simply redirect
regulatory patterns of gene expression. Microorganisms may thus induce variation simply by shuffling the order of genes and putting old genes in new contexts: variation through position effects that can be inherited and propagated in time. It's
hardly an exaggeration to state that jumping genetic elements specified by the bacterium's genome generated the new
phenotypes.

Transposition of IS elements is mostly characterized by local hopping, meaning that novel insertions are usually
in the proximity of the previous insertion and may be a more-or-less random phenomenon; the site of integration isn't sequence-dependent. Bacteria have a restricted set of genes and they divide almost indefinitely. Therefore, sequence-dependent insertion and stringent regulation of transposition may not be required for IS-induced reshuffling of bacterial
genomes; in a population of billions of microorganisms all possible chromosomal rearrangements may occur due to
stochastic processes. In higher organisms the order of genes in the chromosomes is more important, but there is no
reason to exclude jumping genetic elements as a factor affecting the expression of genetic programs through position
effects. Transposable elements may therefore be a class of variation-inducing genetic elements (VIGEs) in higher
organisms. Indeed, ERVs, LINEs and SINEs resemble IS elements in bacteria in that they are able to transpose. In fact,
these elements may be responsible for a large part of the variability observed in higher organisms and may even be
responsible for adaptive phenotypes. The genomic transposition of VIGEs is not just a random process. As observed
for Ty elements in yeast, integration of all VIGEs may originally have been designed to be site- or sequence-specific. It should
be noted that VIGEs might qualify as redundant genetic elements, of which the control over translocation may have
deteriorated over time.
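A toy simulation, purely illustrative and not based on any measured transposition rates, shows how copy-and-paste transposition with local hopping gradually reshuffles a genome and hands existing genes new neighbours, and hence new regulatory contexts.

    import random

    # Toy genome: genes G0..G9 with two IS-like elements ('IS') interspersed.
    genome = ['G0', 'G1', 'IS', 'G2', 'G3', 'G4', 'G5', 'IS', 'G6', 'G7', 'G8', 'G9']

    def transpose(genome, copy_probability=0.3, hop_range=3):
        """Each IS element may paste a copy of itself at a nearby position.
        Target positions are approximate once earlier copies have shifted
        the list; this is only a cartoon of 'local hopping'."""
        result = list(genome)
        originals = [i for i, element in enumerate(genome) if element == 'IS']
        for position in originals:
            if random.random() < copy_probability:
                target = max(0, min(len(result),
                                    position + random.randint(-hop_range, hop_range)))
                result.insert(target, 'IS')
        return result

    random.seed(1)
    for generation in range(5):
        genome = transpose(genome)
    print(genome)              # IS copies accumulate; genes acquire new neighbours
    print(genome.count('IS'))  # copy number grows over the generations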
VIGEs in humans
Mobile genetic elements make up a considerable part of the eukaryotic genome and have the ability to integrate into the
genome at a new site within their cell of origin. Mobile genetic elements of several classes make up more than one third of
the human genome.

Human endogenous retroviruses (ERVs) are, as with yeast ERVs, first transcribed into RNA molecules
as if they were genuine coding genes. Each RNA is then transformed into a double-stranded RNA-DNA hybrid through the action of reverse transcriptase, an enzyme specified by the retrotransposon itself. The hybrid molecule is then inserted back into the genome at an entirely different location. The result of this copy-paste mechanism is two identical copies at different locations in the genome. More than 300,000 sequences that classify as ERVs have been found in the human genome, which is about 8% of the entire human DNA.16

Figure 1. Variation-inducing genetic elements (VIGEs) are found throughout all biological domains, ranging from bacteria to mammals. In yeast, insects and mammals we observe similar designs. (Homologous sequences are indicated by the same colour.)

Long terminal repeat retrotransposons (LTR retrotransposons) are
transcribed into RNA and then reverse transcribed into a RNA-DNA hybrid and reinserted into the genome. LTRs and
retroviruses are very similar in structure. Both contain gag and pol genes (figure 1), which encode a viral particle coat
(GAG), reverse transcriptase (RT), ribonuclease H (RH) and integrase (IN). These genes provide proteins for the conversion
of RNA into complementary DNA and facilitate insertion into the genome. Examples of LTR retrotransposons are human
endogenous retroviruses (HERVs). Unlike RNA retroviruses, LTR retrotransposons lack envelope proteins that facilitate
movements between cells.

Non-LTR retrotransposons, such as long interspersed elements (LINEs), are long stretches
(4,0006,000 nucleotides) of reverse transcribed RNA molecules. LINEs have two open reading frames: one encoding an
endonuclease and reverse transcriptase, the other a nucleic acid binding protein (figure 1). There are approximately
900,000 LINEs in the human genome, i.e. about 21% of the entire human DNA. LINEs are found in the human genome in
very high copy numbers (up to 250,000).17

Short interspersed elements (SINEs) constitute another class of VIGEs that may
use an RNA intermediate for transposition. SINEs do not specify their own reverse transcriptase and therefore they
are retroposons by definition. They may be mobilized for transposition by using the enzymatic activity of LINEs. About one
million SINEs make up another 11% of the human genome. They are found in all higher organisms, including plants, insects
and mammals. The most common SINEs in humans are Alu elements. Alu elements are usually around 300 nucleotides
long, and are made up of repeating units of only three nucleotides. Some Alu elements secondarily acquired the genes
necessary to hop around in the genome, probably through recombination with LINEs. Others simply duplicate or delete by
means of unequal crossovers during cell divisions. More than one million copies of Alu elements, often interspersed with
each other, are found in the human genome, mostly in the non-coding sections. Many Alu-like elements, however, have
been found in the introns of genes; others have been observed between genes in the part responsible for gene regulation
and still others are located within the coding part of genes. In this way SINEs affect the expression of genes and induce
variation. Alu elements are often mediators of unequal homologous recombinations and duplications.18
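The copy numbers and genome fractions quoted above can be cross-checked with simple arithmetic. The average element lengths assumed below are rough values (most LINE copies are heavily 5'-truncated, far shorter than the full-length element), chosen only to show that the quoted percentages are mutually consistent for a roughly 3.2-gigabase genome.

    # Rough consistency check of the repeat-content figures (illustrative averages).
    GENOME_SIZE = 3.2e9          # human genome, base pairs (approximate)

    classes = {
        # name: (approximate copy number, assumed average length in bp)
        'LINEs': (900_000, 750),
        'SINEs (mostly Alu)': (1_100_000, 300),
        'ERVs / LTR elements': (300_000, 850),
    }

    for name, (copies, avg_length) in classes.items():
        fraction = copies * avg_length / GENOME_SIZE
        print(f'{name}: about {fraction:.0%} of the genome')
    # prints roughly 21%, 10% and 8%, close to the figures quoted in the text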
Figure 2. Schematic view of the central role VIGEs may play in generating variation, adaptations and speciation events. Lower part: VIGEs may directly modulate the output of (morpho)genetic algorithms due to position effects. Upper part: VIGEs that are located on different chromosomes may be the result of speciation events, because their homologous sequences facilitate chromosomal translocations and other major karyotype rearrangements.

Repetitive triplet sequences (RTSs) present in the coding regions of proteins are a class of VIGEs that cannot actively transpose. RTSs are usually found as an intrinsic part of the coding region of proteins. For instance, RTSs can be formed by a tract of glycine (GGC), proline (CCG), or alanine (GCC) codons. Usually RTSs form a loop in the messenger (m)RNA that provides a docking site for chaperone molecules or proteins involved in the mRNA translation. RTSs may increase or decrease in length through slippery DNA polymerases during DNA replication.
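Because an RTS grows or shrinks by whole repeat units, its length behaves like a reversible random walk over successive replications. The sketch below is purely illustrative, with an invented slippage probability; the point is only that a lost unit can later be regained, unlike an ordinary point mutation.

    import random

    def replicate(repeat_count, slip_probability=0.05):
        """One round of replication: the tract may gain or lose a single repeat unit."""
        if random.random() < slip_probability:
            repeat_count += random.choice([-1, +1])
        return max(1, repeat_count)     # keep at least one copy of the unit

    random.seed(0)
    tract = 12                          # e.g. 12 copies of a GGC (glycine) codon
    history = [tract]
    for _ in range(200):                # 200 successive replications
        tract = replicate(tract)
        history.append(tract)

    print(min(history), max(history), history[-1])
    # the tract length wanders up and down; a lost unit can be regained later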
Conclusions and outlook
Now that we have redefined ERVs as a specific class of VIGEs, which
were present in the genomes from the day they were created, it is not
difficult to see how RNA viruses came into being. RNA viruses have
emerged from VIGEs. ERVs, LINEs and SINEs are the genetic
ancestors of RNA viruses. Darwinists are wrong in promoting ERVs as remnants of invasions of RNA viruses; it is the other
way around. In my opinion, this view is supported by several recent observations. RNA viruses contain functional genetic
elements that help them to reproduce like a molecular parasite. Usually, an RNA virus contains only a handful of genes.
Human Immunodeficiency virus (HIV), the agent that causes AIDS, contains only eight or nine genes. Where did these
genes come from? An RNA world? From space? The most parsimonious answer is: the RNA viruses got their genes from
their hosts.

The Rous sarcoma virus (RSV), which has the ability to cause tumours, has only four genes: gag, pol, env and src. In
addition, the virus is flanked by a set of repeat sequences that facilitate integration and promote
replication. Gag, pol and env are genes commonly present in ERVs. The src gene of RSV is a modified host-derived src gene that normally functions as a tyrosine kinase, a molecular regulator that can be switched on and off in order
to control cell proliferation. In the virus, the regulator has been reduced to an on-switch only that induces uncontrolled cell
proliferation. The src gene is not necessary for the survival of RSV, and RSV particles can be isolated that have only
the gag, pol and env genes. These have perfectly normal life cycles, but do not cause tumours in their host. It is clear the
virus picked up the src gene from the host. Why wouldn't the whole vector be derived from the host? VIGEs may easily pick
up genes or parts thereof as the result of an accidental polymerase II read-through. This will increase the genetic content of
the VIGE because the gene located next to the VIGE will also be incorporated. An improper excision of VIGEs may also
include extra genetic information. Imagine for instance HERV-K, a well-known human-specific endogenous retrovirus,
transposing itself to a location in the genome where it sits next to the src gene. If in the next round of transposition a part of the src gene were accidentally added to the genes of HERV-K, it would already have transformed into a fully formed RSV (see figure
3). It can be demonstrated that most RNA viruses are built of genetic information directly related to that of their hosts.
Figure 3. RNA viruses originate from VIGEs through the uptake of host genes. In the controlled and regulated context of the host DNA, genes and VIGEs are harmless. A combination of a few genes integrated in VIGEs may start an uncontrolled replication of VIGEs. In this way, VIGEs may take up genes that serve to form the virus envelope (to wrap up the RNA molecule derived from the VIGE) and genes that enable them to leave and re-enter host cells. Once VIGEs become full-blown shuttle vectors between hosts, they act as virulent, devastating and uncontrolled replicators. Hence, harmless VIGEs may degenerate into molecular parasites in a similar way to how normally harmless cells turn into tumours once they lose the power to control cell replication. VIGEs are at the basis of RNA viruses, not the other way around. The scheme outlined here shows how the Rous sarcoma virus (RSV) may have formed from a VIGE that integrated the env gene and part of the src gene (a proto-oncogene; for details see text).

The outer membranes of influenza viruses, for instance, are built of hemagglutinin and
neuraminidase molecules. Neuraminidase is a protein that can also be found in the genomes of higher host organisms,
where it serves the function to modify glycopeptides and oligosaccharides. In humans, neuraminidase deficiency leads to
neurodegenerative lysosomal storage disorders: sialidosis and galactosialidosis.19 Even so-called orphan genes, genes that are only found in viruses, can usually be found in the host genomes. Where? In VIGEs!

To become a shuttle vector between
organisms, all that is required is to have the right tools to penetrate and evade the defences of the host cell. HIV, for
instance, acquired part of the gene of the host's defence system (the gp120 core) that binds to the human beta-chemokine receptor CCR5.20 These observations make it plausible that all RNA viruses have their origin in the genomes of living cells
through recombination of host DNA elements (genes, promoters, enhancers). Every now and then such an unfortunate
recombination produces a molecular replicator: it is the birth of a new virus. Once the virus escapes the genome and
acquires a way to re-enter cells, it has become a fully formed infectious agent. It has long been known that bacteria use
genes acquired from bacteriophages, i.e. bacterial viruses that insert their DNA temporarily or even permanently into the genome of their host, to gain reproductive advantage in a particular environment. Indeed, work reaching back decades has
shown that prophage (the integrated virus) genes are responsible for producing the primary toxins associated with diseases
such as diphtheria, scarlet fever, food poisoning, botulism and cholera. Diseases are secondary entropy-facilitated
phenomena. Virologists usually explain the evolution of viruses as recombination: that is, a mixing of pre-existing viruses, a
reshuffling and recombination of genes.21 In bacteria, viruses may therefore be recombined from plasmids carrying survival
genes and/or transposable genetic elements, such as IS elements.
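The read-through capture scenario sketched above for HERV-K and src can be pictured schematically. The element and gene names follow the text, but the data structure and the capture step below are only a cartoon of the proposed mechanism, not a model of real sequence biochemistry.

    # Schematic of a VIGE capturing part of a neighbouring host gene by read-through.
    genome = ['geneA', 'HERV-K', 'src', 'geneB']     # element sitting next to src

    def transpose_with_readthrough(genome, element, readthrough=False):
        """Copy the element to a new site; with read-through, part of the
        downstream neighbour is co-copied into the new insertion."""
        i = genome.index(element)
        cargo = [element]
        if readthrough and i + 1 < len(genome):
            cargo.append('part-of-' + genome[i + 1])   # neighbouring sequence captured
        return genome + cargo                          # paste the copy at a new site

    print(transpose_with_readthrough(genome, 'HERV-K', readthrough=True))
    # ['geneA', 'HERV-K', 'src', 'geneB', 'HERV-K', 'part-of-src']
    # the new copy carries host-derived sequence, as proposed for RSV's src gene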
Discussion
Where did all the big, small and intermediate noses come from? Why are people tall, short, fat or slim? What makes
morphogenetic programs explicit? The answer may be VIGEs. It may turn out that the created kinds were designed with
baranomes that had an ability to induce variation from within. This radical view implies that the baranome of man may have
been designed to contain only one morphogenetic algorithm for making a nose. But the program was implicit. The program
was designed in such a way that a VIGE easily integrated into it, becoming a part of it, hence making the program explicit. Most inheritable variation we observe within the human population may be due to VIGEs: elements that affect
morphogenetic and other programs of baranomes. It should be noted that a huge part of the genomic sequence consists of redundant adaptors, spacers, duplicators, etc., which can be removed from the genome without major effects on
reproductive success (fitness). In bacteria, VIGEs have been coined IS elements; in plants they are known as transposons;
and in animals, they are called ERVs, LINEs, SINEs, and microsatellites. What these elements are particularly good at is
inducing genomic variation. It is the copy number of VIGEs and their position in the genome that determine gene expression
and the phenotype of the organism. Therefore, these transposable and repetitive elements should be renamed after their
function: variation-inducing genetic elements. VIGEs explain the variations Darwin referred to as due to chance.

I will
address the details of a few specific classes of VIGEs and argue why modern genomes are literally riddled with VIGEs in a
future article. With the realization that RNA viruses have emerged from VIGEs, the RNA virus paradox is solved. For many
mainstream scientists this solution will be bothersome, because VIGEs were frontloaded elements of the baranomes of created kinds, and that implies a young age for their common ancestor and a recent origin for all life.
The design of life, part 4: variation-inducing genetic elements and their function
by Peer Terborg
Endogenous retroviruses (ERVs) are claimed to be the selfish remnants of ancient RNA viruses that invaded the cells of
organisms millions of years ago and now merely free-ride the genome in order to be replicated. This selfish gene thinking
still dominates the public scene, but well-informed biologists know that the view among researchers is rapidly changing.
Increasingly, ancient RNA viruses and their remnants are being thought of as having played (and still playing) a significant role in
protein evolution, gene structure, and transcriptional regulation. As argued in part 3 of this series of articles, ERVs may be
the executors of genetic variation, and qualify as specifically designed variation-inducing genetic elements (VIGEs)
responsible for variation in higher organisms. VIGEs induce variation by duplication and transposition, and may even rearrange
chromosomes. This extraordinary claim requires extraordinary scientific support, which is present throughout this paper. In
addition, the VIGE hypothesis may be a framework to understand the origin of diseases and explain rapid speciation events
through facilitated chromosome swapping.
The idea that mobile genetic elements are involved in creating variation is not new. Barbara McClintock, who discovered the
first mobile genetic elements in maize, was also the first to recognize the true nature of such jumping genetic elements. In
1956, she suggested that transposons (as she coined them) function as molecular switches that could help determine when
nearby genes turn on and off. Her key insight was that all living systems have mechanisms available to restructure and
repair the chromosomes. When it was discovered that more than half of the human genome consists of (remnants of)
mobile elements, McClintock's ideas were revived and further developed by Roy Britten and Eric Davidson.1 It is only
recently that we have begun to understand the power of VIGEs (variation-inducing genetic elements) as genetic regulators
and switches. A team of investigators led by Haussler recently provided direct evidence that even when a short
interspersed nucleotide element (SINE) lands at some distance from a gene, it can take on a regulatory role with powerful
regulatory functions.2

Haussler and his colleagues then looked at a particular example: a copy of the ultra-conserved
element that is near a gene called Islet 1 (ISL1). ISL1 produces a protein that helps control the growth and differentiation of
motor neurons. In the laboratory of Edward Rubin at the University of California, Berkeley, postdoctoral fellow Nadav Ahituv
combined the human version of the LF-SINE sequence with a reporter gene that would produce an easily recognizable
protein if the LF-SINE were serving as its on-off switch. He then injected the resulting DNA into the nuclei of fertilized mouse
eggs. Eleven days later, he examined the mouse embryos to see whether and where the reporter gene was switched on.
Sure enough, the gene was active in the embryos' developing nervous systems, as would be expected if the LF-SINE copy were regulating the activity of ISL1.3

This excerpt shows that some functions of SINEs are easily uncovered because they
are directly affecting the expression of a particular gene. However, most functions of SINEs may not be as easily detected as the one described above, because they can integrate in gene deserts (regions of the genome where the chromosomes are devoid of any recognizable protein-coding genes) or they may only subtly affect the expression of morphogenetic programs.
Gene expression patterns largely determine how cells behave and determine the morphology of organisms. VIGEs
integrated in such genetic programs will change expression patterns of genes that will result in different cellular behaviour
and morphology. Whether the ultimate effect on the phenotype of the organism can be predicted, however, remains to be
established. This is largely due to the fact that we still do not know what morphogenetic algorithms look like. Of course,
biologists have argued that evolution and development are determined by homeobox (HOX) genes, but HOX genes are
merely executors of developmental (or morphogenetic) programs; they are not the programs themselves.

In another study by the same group, thousands of short identical DNA sequences that are scattered throughout the human genome were
analyzed. Many of those sequences were located in gene deserts, which are in fact so clogged with regulatory DNA
elements that they have recently been renamed regulatory jungles. But what do they regulate? The answer could be
morphogenesis. Most of the short DNA elements cluster near genes that play a decisive role during an organisms first
weeks after conception. The elements help to orchestrate an intricate choreography of when-and-where developmental
genes are switched on and off as the organism lays out its body plan. These elements may provide a sort of blueprint for
how to build the animal. The exact mechanism as to how such sequences may function as a plan to build an animal is not
entirely clear, but the DNA elements are particularly abundant near genes that help cells to stick together. That stickiness is
important in an organisms early life phase because these genes help cells to migrate to the right location and to form into
organs and tissues of the correct shape. The 10,402 short DNA sequences studied by Bejerano are derived from
transposable genetic elements: retrotransposons that duplicate themselves and hop around the genome. Apparently,
transposable genetic elements are not what they have been mistakenly thought to be: mess makers. Indeed, the view that
transposable elements are just bad stuff is rapidly changing. In an interview with Science Daily, Bejerano says: We used to think they were mostly messing things up. Here is a case where they are actually useful.4

The genome is literally littered with
thousands of transposable elements. The word is that when ancient retroviruses slipped bits of their DNA into the primate
genome millions of years ago, they successfully preserved their own genetic legacy.5 It is hard to imagine that they all have
functions, but their presence could certainly determine or fine-tune the output of nearby genes. In this way they may create
subtle, but novel, variation. Bejerano and Hausslers research has already identified a handful of transposons that serve as
regulatory elements, but it is not clear how common the phenomenon might be. The 2007 study showed that the
phenomenon may be a general one: Now we've shown that transposons may be a major vehicle for evolutionary novelty.4

The new findings indeed show that, in many cases, transposable elements function as regulators of gene output,
but major vehicles for evolution from microbe to man they are not. The transposition of jumping genetic elements may
certainly affect gene expression patterns, but it does not follow that they produce new genetic information. Considering the
biological data, it seems reasonable that transposable elements are present in the genome to deliberately induce biological
variation. Transposable elements thus qualify as variation-inducing genetic elements (VIGEs), and by leaving copies, they
make sure the new variation is heritable. The transposable elements present in regulatory jungles do not produce new
biological information, but they induce variation in the genetic algorithms and may underlie rapid adaptive radiation from
uncommitted pluripotent genomes. The regulatory jungles may provide an active reservoir of VIGEs that put existing genes
in new regulatory environments.
Regulated activity of VIGEs
The chromosome of the E. coli strain K12 includes three cryptic operons (linear genetic programs for metabolizing three alternative sugars): one for cellobiose, one for arbutin and one for salicin. The organization of those
operons is like a normal substrate-induced bacterial operon; but the operons themselves are abnormal in that they are
cryptic (silent) in wild-type strains. Even in the presence of alternative sugars the operons are not activated, which indicates
that these bacteria don't readily use alternative sugars. Unused cryptic operons are redundant genetic programs that are not
observed by natural selection:

As cryptic genes are not expressed to make any positive contribution to the fitness of the
organism, it is expected that they would eventually be lost due to the accumulation of inactivating mutations. Cryptic
genes would thus be expected to be rare in natural populations. This, however, is not the case. Over 90% of natural isolates
of E. coli carry cryptic genes for the utilization of beta-glucoside sugars. These cryptic operons can all be activated by IS
[insertion-sequence] elements, and when so activated allow E. coli to utilize beta-glucoside sugars as sole carbon and energy sources.6

The excerpt shows that operons are kept inactive by repressors; that is, proteins that sit on the DNA of the
operon to ward off the nanomachines responsible for gene expression. Operons will only be active in bacteria that don't have a functional gene coding for the repressors. Disrupting the repressor gene releases the cryptic programs. That's where
the VIGEs come in. The transposition and integration of an IS element into the silencer elements is the mutational event that
activates the cryptic operon. Usually, the lack of an appropriate carbon and energy source triggers transposition of IS
elements. The transposition of IS elements appears to be regulated by starvation, and the integration in the repressor gene
is not utterly random. For instance, position 472 in the ebgR gene in the ebg operon of E. coli is a hotspot for integration of
IS elements, but only under starvation conditions. VIGEs may thus accumulate and integrate at well-defined positions in the
genome; this indicates a site-specific mechanism.

In the fruit fly, some non-LTR retrotransposons
integrate at very specific sites, but some others have been shown to integrate more or less at random. The specificity is
determined by endonucleases, enzymes that cut the DNA.7 Assuming VIGEs are part of a designed genome, we must
expect that their transposition and activity can be controlled and regulated. To avoid deleterious effects on the host and
retrotransposon, we may expect that the activity of VIGEs is regulated both by retrotransposon-and host-encoded factors.
Indeed, the mechanism of transposition seems to be dictated by the species in which the VIGEs operate. Recent research
has shown that in zebra fish the transposable element known as NLR integrant usually carries a few extra nucleotides at the
far end of the sequence, but it is not expressed in human cells.8 This observation would argue for the involvement of host
specific protein machinery in transposition, one more argument for the design origin of VIGEs.

From the design perspective,
we may expect that the activity of VIGEs used to be a tightly controlled process. This is because the genomes in which they
operate also specify control factors: retroviral restriction factors. The restriction factors are proteins with the ability to bind to
retroviral capsid proteins and target them for degradation. Several restriction factors have been identified, including Fv1,
Trim5-alpha and Trim5-CypA.9 These factors share the common property of containing sequences that promote self-association: that is, they can assemble themselves. This fact, together with the observation that the restriction factors are
encoded by unrelated genes, is clear evidence of purposeful design. Retroviral restriction factors play an important role in
innate immunity against invading RNA viruses. For instance, Trim5-alpha binds directly to the incoming retroviral capsid core
and targets its premature disassembly or destruction.10 In addition, some integrated VIGEs show evolutionary-tree
deviations, indicating a sequence-specific integration/excision mechanism. For instance, Alu HS6 is present in human,
gorilla and orangutan, but not in chimpanzee (see figure 1). This highly peculiar observation prompted the investigators to
consider the possibility of the specific excision of this Alu element from the chimpanzee's genome.11 Precise excision implies precise integration.

Figure 1. The Alu HS6 insertion sites in human, chimpanzee, gorilla, orangutan and owl monkey. Note the complete
absence in chimpanzee and owl monkey of any evidence for an extraction site. This suggests a highly specific mechanism
for integration and/or extraction. Alternatively, the sequences are a molecular falsification of the common descent of
primates.
Biologists specializing in synthetic biology at the Johns Hopkins University have built, from scratch, a LINE1-based retrotransposon: a genetic element capable of jumping around in the mouse genome. The man-made retrotransposon was designed to be
a far more effective jumper than natural retrotransposons; indeed, it inserts itself into many more places in the
genome.12,13 Why don't all LINEs jump so effectively? The scientists who constructed the synthetic LINE changed the
regulator sites used in transposition. Native LINE1 elements are relatively inactive in mice when they are introduced into the
mouse genome as transgenes. The synthetic LINE1-based element, ORFeus, contains two synonymously recoded ORFs
relative to mouse L1 and is far more active. This indicates that the integration and excision of native LINE1 elements are
controlled and regulated by an as-yet unknown mechanism.

VIGEs qualify as redundant genetic elements that can simply be
erased from the genome without fitness effects. As long as VIGEs do not upset critical genomic functions and do not affect
reproductive success of the carrier, they are selectively neutral. Therefore, not only VIGEs, but also the mechanisms by
which they integrate, may readily wither and degrade due to accumulation of debilitating mutations. The control over
integration and activity we observe today may be less stringent compared to how it was originally designed. The originally
fine-tuned control for excision and transposition may have deteriorated over time and what is left today are more or less free
moving elements that may predominantly cause havoc when they integrate in the wrong location. It is easy to understand
how, for instance, endonucleases became less specific through mutations. This view may also explain why VIGEs are often
found associated with heritable diseases. As long as VIGE activity and integration do not significantly affect the fitness of the
organisms in which they operate, they are free to copy and paste themselves along the genome. Indeed, inactivating VIGEs
have been observed in genes not immediately required for reproduction. The GULO gene, which qualifies as a redundant
gene in populations with high vitamin C intake, has been hit several times by VIGEs and this may have contributed to
pseudogenization of GULO in humans.14

Over time, VIGEs may have become increasingly detrimental to the host's genome.
That is because information that regulates the integration and activity of VIGEs is subject to mutation. Some VIGEs have
been associated with susceptibility or resistance to diseases. In asthma, increased susceptibility appears to be associated
with microsatellite DNA instability (a term used for copy-number differences in repetitive DNA sequences).15 Psoriasis is also
associated with HERV expression.16 It should be clear that deregulated and uncontrolled VIGEs cause havoc when they integrate into and disrupt functional parts of genes.

From the vantage point of design, VIGE transpositions would make sense
during meiosis, which is the process leading to the formation of gametes. Controlled activity of VIGEs during meiosis may be
responsible for variation that can be passed on to the offspring. Although information is scant, it has been shown in
fungi17 and plants18 that VIGEs become active during meiosis and even have mechanisms to silence deleterious bystander effects, such as deleterious point mutations.17 This shows that transposable elements function to induce genetic variation,
providing the flexibility for populations to adapt successfully to environmental challenges. In chimpanzees, for instance, it
has been documented that large blocks of compound repetitive DNA, which have demonstrated retrotransposon function,
induce and prolong the bouquet stage in meiotic prophase and affect chiasma formation.19 This may seem like a mouthful, but it merely means that these repetitive genetic elements facilitate sister-chromosome exchanges when reproductive cells
(sperm and eggs) are being generated. Mammalian VIGEs, in particular Alu sequences, have the ability to induce genetic
recombination and duplications and contribute to chromosomal rearrangements, and they may account for the major part of
variation observed in humans. The methylation pattern of Alu sequences possibly determines activity and/or serves as a marker for genomic imprinting, or for maintaining differences between male and female meiosis.21
VIGEs and the human family
When short triplet repeat units are present in the coding part of a gene, they may even have functional consequences.
There is evidence that repeat units in the Runx2 gene formed the bent snout of the Bull Terrier in a few generations.22 Likewise, in mice and dogs, having five or six toes is determined by a repeat unit in the Alx4 gene.23 These novel phenotypes can form almost overnight, i.e. within one generation. Repetitive coding triplets that can be gained or lost
provide another mechanism to generate (instant) variation. It should be noted that this mechanism leads to reversible
genetic change, because a lost repetitive unit can readily be added back through duplication of a preexisting one, and vice
versa. Therefore, the RTS mechanism may explain seasonal changes in beak size observed for Galapagos finches,
adaptive phenotypes in Australian snakes, and the evolution of cichlid varieties in African lakes. (A small illustrative sketch of such reversible triplet-repeat changes is given below.)
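As a minimal sketch of why whole-triplet gains and losses are in-frame and reversible; the repeat unit, flanking codons and copy numbers below are hypothetical, not the actual Runx2 or Alx4 sequences:

# Hedged sketch: gaining or losing a whole codon (triplet) repeat keeps the reading frame intact,
# so the change is in-frame and readily reversible. Sequences are hypothetical.
def codon_count(cds):
    """Number of codons encoded; whole-triplet changes keep len(cds) divisible by 3."""
    assert len(cds) % 3 == 0, "frameshift: not a whole number of codons"
    return len(cds) // 3

REPEAT = "CAG"                                  # hypothetical repeat unit
allele_short = "ATG" + REPEAT * 5 + "TAA"       # five copies of the repeat
allele_long = "ATG" + REPEAT * 6 + "TAA"        # six copies: one unit gained by duplication

print(codon_count(allele_short), codon_count(allele_long))   # 7 8
# Losing the extra CAG restores the shorter allele, so the variation is reversible.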
If we accept the idea of deliberately designed VIGEs, we may also expect these elements to have played an important role in determining the
variety of human phenotypes. In other words, human races are the result of the activity of VIGEs! Biologists used to think
that our genomes all had the same basic structure: the same number of genes, in roughly the same order, with a few minor
differences in the sequence of DNA bases. Now, technologies that compare whole human genomes are revealing that this
picture is far from complete. Michael Wigler at Cold Spring Harbor Laboratory provided the first evidence that human
genomes are strikingly variable: his group showed marked differences in the copy number of protein-coding
genes.24 Apparently, some people have more copies of certain genes than others, and large-scale copy number polymorphisms (CNPs) (about 100 kilobases and greater) contribute substantially to genomic variation between individuals.25 In addition, people not only carry different copy numbers of parts of their DNA, they also have varying numbers of deletions, insertions and other major rearrangements in their genomes.

In 2005, Evan Eichler of the University of Washington reported 297 locations in the
genome where different individuals have different forms of major structural variations. At these spots some carry a major
deletion, for example, or an extra hundred bases of DNA. Differences between individuals were found in the protein-coding
genes; structural differences were also observed between individual genomes.26 From these and other studies we now know
that every one of us shares only about 99% of our DNA with all the other people on Earth. 27 The difference is due to
repetitive sequences that easily amplify or delete parts from the genome. With this, we have discovered another class of
VIGEs. The highly variable repetitive sequences also explain why genetic screening methods are so reliable nowadays: they
detect copy-number differences and hence are capable of discriminating between the DNA of a father and his son. Yes,
fathers and sons apparently differ at the level of VIGEs! (A toy sketch of how repeat copy-number counts can distinguish individuals is given below.)
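A toy sketch of the underlying idea, assuming a single hypothetical locus and invented repeat counts; real screening compares many loci with dedicated assays:

# Toy sketch (hypothetical locus and copy numbers): tests compare how many times a short
# repeat unit occurs at a locus; repeat numbers vary between individuals, so even a father
# and his son can be told apart.
def longest_repeat_run(sequence, unit):
    """Longest run of consecutive copies of `unit` in `sequence`."""
    best = run = 0
    i = 0
    while i + len(unit) <= len(sequence):
        if sequence[i:i + len(unit)] == unit:
            run += 1
            best = max(best, run)
            i += len(unit)
        else:
            run = 0
            i += 1
    return best

father = "GG" + "GATA" * 9 + "CC"    # nine copies of GATA at this invented locus
son = "GG" + "GATA" * 11 + "CC"      # eleven copies: a copy-number difference

print(longest_repeat_run(father, "GATA"), longest_repeat_run(son, "GATA"))   # 9 11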
A comparison of Asian and Caucasian people showed that 25% of more than 4,000 protein-coding genes had significantly different expression patterns. Some gene expression levels differed
as much as twofold.28 The researchers commented that these findings support the idea that there are genetically
determined characteristics that tend to be clustered in different ethnic groups. Some genes are simply not expressed at all, or are simply not present in some genomes. For instance, the gene UGT2B17 is deleted more often in Asians than in Caucasians, and its mean expression level is more than 20 times greater in Caucasians than in Asians. How
can such big differences be explained? Of course, single nucleotide polymorphisms (SNPs, i.e. point mutations) in regulatory sequences could affect gene regulation patterns. It is not clear, however, whether the SNPs themselves regulate gene expression or whether they merely travel together with other DNA that is the actual regulator. We may also expect VIGEs to be
responsible for differences observed between human races.

VIGEs and chromosome 2


Human chromosome 2 looks as if it is the product of the fusion of two chromosomes that we find in chimpanzees as chromosomes 12 and 13. Therefore, some Darwinists take human chromosome 2 as the ultimate evidence for common
descent with chimpanzees. We know that a fusion of two ancestral chromosomes would have produced human
chromosome 2 with two centromeres. Currently, human chromosome 2 has only one centromere, so there must be
molecular evidence for remnants of the other. In 1982, Yunis and Prakash studied the putative fusion site of chromosome 2
with a technique known as fluorescence in situ hybridization (FISH) and reported signs of the expected centromere.29 In 1991, another study also reported signs of the centromere.30 In 2005, after the complete sequencing of human chromosome 2, we would have expected full proof of the ancestor's centromere. However, even after intense scrutiny there are still only signs of the centromere. If signs of the centromere were already observed in 1982, why could they not be proved in the 2005 sequence analysis? Apparently, the site mutated at such high speed that it is no longer recognizable as a centromere:

'During the
formation of human chromosome 2, one of the two centromeres became inactivated (2q21, which corresponds to the
centromere of chromosome 13) and the centromeric structure quickly deteriorated.'31

Why would it quickly deteriorate? Why would this region deteriorate faster than a neutral one? Close scrutiny in 2005 showed the region that has been interpreted as the ancestor's centromere to be built from sequences present in 10 additional human chromosomes (1, 7, 9, 10, 13, 14, 15,
18, 21 and 22) as well as a variety of other genetic repeat elements that were already in place before the fusion
occurred.31 The sequences interpreted as the ancient centromere are merely repetitive sequences and may actually qualify as (deregulated) VIGEs.

The chimpanzee and human genome projects demonstrated that the fusion did not result in loss of protein-coding genes. Instead, the human locus contains approximately 150,000 additional base pairs not found in chimpanzee chromosomes 12 and 13 (now also known as 2A and 2B). This is remarkable: why would a fusion result
in more DNA? We would rather have expected the opposite: the fusion would have left the fused product with less DNA,
since loss of DNA sequences is easily explained. The fact that humans have a unique 150 kb intervening sequence
indicates it may have been deliberately planned (or designed) into the human genome. It could also be proposed that the
150 kb DNA sequence demarcating the fusion site may have served as a particular kind of VIGE, an adaptor sequence for
bringing the chromosomes together and facilitating the fusion in humans.

Another remarkable observation is that in the fusion region we find an inactivated cobalamin synthetase (CBWD) gene.32 Cobalamin synthetase is a protein that, in its
active form, has the ability to synthesize vitamin B12 (a crucial cofactor in the biosynthesis of nucleotides, the building
blocks of DNA and RNA molecules). Deficiency during pregnancy and/or early childhood results in severe neurological
defects because of impaired development of the brain. The Darwinian assumption is that the cobalamin synthetase gene
was donated by bacteria a long time ago and afterwards it was inactivated. Nowadays, humans must rely on
microorganisms in the colon as well as dietary intake (a substantial part coming from meat and milk products) for their
vitamin B12 supply. It is also noteworthy that humans have several copies of inactivated cobalamin-synthetase-like genes
on a number of locations in the genome, whereas chimpanzees only have one inactivated cobalamin synthetase gene. That
the fusion must have occurred after man and chimp split is evident from the fact that the fusion is unique to humans: 'Because the fused chromosome is unique to humans and is fixed, the fusion must have occurred after the human-chimpanzee split, but before modern humans spread around the world, that is, between 6 and 1 million years ago.'32

The molecular analyses show we are more unique than we ever thought we were, and this is in complete accordance with
creation. The fusion of the two human chromosomes may have been the result of an intricate rearrangement or activation of repetitive genetic elements after the Fall (as part of, or executors of, the curse following the Fall), and it inactivated the cobalamin synthetase gene. The inactivation of the gene may have reduced people's longevity in a similar way to the inactivation of the GULO gene, which is crucial to vitamin C synthesis.14 Understanding the molecular
properties of human chromosome 2 is no longer problematic if we simply accept that humans, like the great apes, were
originally created with 48 chromosomes. Two of them fused to form chromosome 2 when mankind went through a severe
bottleneck.33 And, as argued above, the fusion was mediated by VIGEs (see figure 2).
Figure 2. Putative mechanism for how human chromosome 2 formed through the fusion of two ancestral chromosomes, p2 and q2, which are similar to chimpanzee chromosomes 12 and 13. Like the great apes, the human baranome may originally have contained 48 chromosomes. A) Independent transposition events may have led to the integration of a relatively small variation-inducing genetic element (VIGE). B) Extended duplication events of the VIGE may have resulted in rapid expansion of the region in both p2 and q2, preparing it to become an adapter sequence required for fusion. C) The expanded homologous regions align and facilitate the fusion of the chromosomes. The fusion region (2q21) and other parts of the modern human genome still show the remnants of this catastrophic event, which occurred only in humans: the cobalamin synthetase gene was inactivated, and several inactive copies, which are not found in the chimpanzee, are scattered throughout the genome. Speculative note: before the great Flood, and probably shortly after, a balance of both 48- and 46-chromosome karyotypes may have been present in the human family. This may explain the two extreme cranial morphologies present in the human fossil record. The Homo erectus/Neandertal humans may have had a karyotype of 48 chromosomes (non-fused p2 and q2), whereas the other humans had 46 (fused p2 and q2).
The upside-down world
The p53 protein is a mammalian transcription factor that functions as the main switch controlling whether cells divide or go
into apoptosis (programmed cell death, which is sometimes required for severely damaged cells that may become tumours).

Scientists have long wondered how p53 gained the ability to turn on and off more than 1200 genes related to cell division,
DNA repair and programmed cell death. Without the p53 control system organisms would not function: all life would have died as bulky tumours.

Biologists at the University of California now claim that ancient retroviruses helped p53 to become an
important master gene regulator in primates.34 An RNA virus invaded the genome of our common ancestor, jumped into
hundreds of new positions throughout the human genome and spread numerous copies of repetitive DNA sequences that
allowed p53 to regulate many other genes, the team contends. Studies such as these prompted Darwinians to change their
minds about jumping genetic elements. In other words, a randomly hopping ERV provided the human genome with carefully
regulated decision-making machinery. The idea is beyond reasonable belief. Darwinists tend to mix things up. What really happened in the human genome is a read-through by RNA polymerase II of a VIGE that was located next to a gene that already contained a binding site for p53. Or maybe the VIGE was excised improperly, taking with it a bit of a flanking gene containing the p53 binding site. Next, the modified VIGE amplified, transposed, amplified and so on. That explains this family of transposons. A similar
story can be told for the syncytin gene, which encodes a protein of the mammalian placenta that helps the fertilized egg to
become embedded in the uterus wall. Since syncytin has also been found on a transposable element, 35 mammals are
alleged to have obtained the gene from an RNA virus that infected a mammalian ancestor millions of years ago. It is more
likely, however, that syncytin was captured by a VIGE.

In bacteria it is often observed that genes conveying a specific advantageous character are transmitted via plasmids. Plasmids often contain genes for alternative metabolic routes or genes that provide resistance to antibiotics, and they replicate independently of the host's genome. Plasmids easily shuttle between microorganisms via a DNA uptake process known as transformation (or horizontal gene transfer). The
uptake of plasmids is regulated and controlled, and is DNA sequence dependent. The result of DNA transformations is rapid
adaptation to, for instance, antibiotics. Likewise, viruses replicate independently from the genomic DNA, leaving many
copies and easily transferring from one organism to another. Viruses are not plasmids, although some viruses may have a
similar function in higher organisms as do plasmids in bacteria: they may be able to aid in rapid adaptations to changing
environments. It has been observed that a virus can indeed transfer an adaptive phenotype. A virus present in the fungus Curvularia protuberata can induce heat resistance in tropical panic grass (Dichanthelium lanuginosum), allowing both organisms to grow at high soil temperatures in Yellowstone National Park. This shows that viruses still provide strategies for rapid adaptation:

'Fungal isolates cured of the virus are unable to confer heat tolerance, but heat tolerance is
restored after the virus is reintroduced. The virus-infected fungus confers heat tolerance not only to its native monocot host
but also to a eudicot host, which suggests that the underlying mechanism involves pathways conserved between these two
groups of plants.'36

In fruit flies, wing pigmentation depends on a gene known as yellow. The gene exists in the genome of all
individual fruit flies, but in some it is not active. By analysing the genetic origin of the spots on fruit fly wings, researchers
have discovered a molecular mechanism that explains how new patterns of pigmentation can emerge. The secret appears
to be specific genetic elements that orchestrate where proteins are used in the construction of an insect's body. These segments do not code for proteins, but rather regulate the nearby gene that specifies the pigmentation. As such, these
regulatory DNA segments qualify as VIGEs. The researchers transferred the regulatory DNA segment from a spotted
species (Drosophila biarmipes) into another species not expressing the spot (D. melanogaster), and attached the regulatory
region to a gene for a fluorescent protein. They found that the fluorescent gene was expressed in the spot-free species in
exactly the same patterns as the yellow gene is expressed in the spotted species. By comparing several spotted and
spot-free species, the scientists established that mutation of a regulatory DNA segment led to the expression of the spotted
trait. They discovered that in the species with spotted wings this regulatory segment has multiple binding sites for a protein
that then activates the yellow gene. Spotless species do not have multiple binding sites. 37 The multiplicity of regulatory DNA
segments may argue for an amplification mechanism or targeted integration of the regulatory sequence. That explains why
the same pattern of pigmentation can emerge independently in distantly related species (Darwin's 'analogous variation'). The
observed shuttle function of viruses leads me to pose an intriguing question: Were endogenous retroviruses originally
designed to serve as shuttle-vectors to deliver messages from the soma to the germ-line? If yes, then it would put
Lamarckian evolution in an entirely new perspective.
Discussion
The findings of the new biology demonstrate that mainstream scientists are wrong regarding the idea that transposable
elements are the selfish remnants of ancient invasions by RNA viruses. Instead, RNA viruses originate from transposable
elements that were designed as variation-inducing genetic elements (VIGEs). Created kinds were deliberately frontloaded
with several types of controlled and regulated transposable elements to allow them to rapidly invade and adapt to all corners
and crevices of the earth. Due to the redundant character of VIGEs, their controlled regulation may have readily deteriorated
and some of them may now merely cause havoc. The VIGE hypothesis provides elegant explanations for several biological
observations that may otherwise be difficult to interpret within the creationist framework, including the origin of diseases
(RNA viruses) and chromosome rearrangements. The VIGE hypothesis may be a framework for extended creationist
research programs. Some intriguing questions can already be raised.
Were VIGEs intentionally designed to cause diseases? No, they were not. It is conceivable that the transposition and
integration of VIGEs are not entirely random. VIGEs may originally have been present in the baranome as controlled and regulated elements, activated upon intrinsic or external triggers. To induce variation in offspring, triggers for the transposition of VIGEs could be released during meiosis, when the reproductive cells are being produced. The emergence of RNA viruses from VIGEs may be a result of the Fall, when we were cut off from the regenerating healing
power of the designer.
Why are some VIGEs located at exactly the same position in primates and humans? Each original baranome must have had a limited number of VIGEs, some of which we still find at the same location in distinct species. In distinct baranomes, VIGEs may have been located at exactly the same positions (the T-zero location), which then explains why some VIGEs, such as ERVs, can be found in the same location in, for instance, primates and humans. In addition, sequence-dependent integration of VIGEs may also contribute to this observation.
How could Bdelloid rotifers, a group of strictly asexually reproducing aquatic invertebrates, rapidly form novel species? Asexual production of progeny, as observed in Bdelloids, is found in over one half of all eukaryotic phyla and is likely to contribute to adaptive changes, as suggested by recent evidence from both animals and plants.38 The Bdelloids may have been derived from pluripotent baranomes containing numerous DNA transposons and retroelements, including active LTR retrotransposons containing gag-, pol-, and env-like open reading frames.39 These elements are able to reshuffle the
genomes and facilitate instant variation and speciation.
Do we also observe remnants of DNA viruses in mammalian genomes? If not, this supports my idea that RNA viruses emerged from VIGEs, and implies that DNA viruses have a different origin; probably, as with the Mimivirus,40 they originated from degenerated bacteria.
Why was a class of VIGEs designed with information for protein capsids? The capsid may have been acquired from the host's genome, or it may have been designed to prevent the RNA molecules from attaching themselves to, or finding, integration sites. A very speculative idea may be that these VIGEs were designed to shuttle information from the soma to the germ-line. One thing is clear, however: creation researchers have loads of work to do.
And then there was life
by Gordon Howard
What is the difference between figure 1 and figure 2? Both are patterns of light and dark. Both are arrangements of the same 12 particular shapes in the same groupings. Both exhibit a complexity of arrangement. The probability of either arrangement arising by chance is similar. Neither arrangement has been
produced by any action of the properties of the material they appear on.

But there is a world of difference between the two,
and that difference is equivalent to the difference between the imagined primordial soup1 of non-living chemicals, and a
living cell. This is because a living cell is not a random collection of chemicals, but an incredibly complex machine controlled
by information stored in a computer-like program.

The essential difference between the figures is simply that figure 2 carries information while figure 1 does not. This difference has nothing to do with the material the figures are made of. That is, the difference cannot be detected by physical means. It is immaterial, existing only in the reader's mind, and then only if the reader speaks English. That is, only if the reader understands the inherent code.

Could the arrangement of figure 2 arise by
chance? Yes, but then it would not necessarily carry information. Consider a set of letters randomly selected that made the
pattern I LOVE YOU. It would not actually be carrying the information we might like, because the letter I (for example)
would be just a letter like any other. It would not represent anything, such as the concept of a particular person. There would
be no sender (because it is random), no intended recipient, no code, and therefore no meaning; it would just be a pattern of shapes no more significant than any other.

Figure 2 carries information only if the pattern of shapes conforms to an agreed
code; that is, if it is specified by a set of rules, such as the rules of the English language, and represents the concept of
something not physically present. Furthermore, it only carries information if that code can be interpreted by another party or
process, through some decoding machinery in a recipient. In other words, the pattern needs to be filtered through a set of
rules which can then be used to put the information into action. Only then does it become meaningful, because meaning
does not arise from the arrangement, but from the interpretation, or decoding, of the arrangement. That is what happened
when you decoded the pattern in figure 2. (A small illustrative sketch of this 'no shared code, no message' point is given below.)
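A small sketch of the point that a pattern becomes a message only when sender and recipient share the code; the symbol-to-word table below is invented purely for illustration:

# Hedged sketch: the same pattern of symbols decodes to a message only under an agreed code.
SHARED_CODE = {"^": "I", "o": "LOVE", "#": "YOU"}   # an invented code table

def decode(symbols, code):
    """Translate a symbol pattern using a code table; without the code there is no meaning."""
    return " ".join(code.get(s, "?") for s in symbols)

pattern = ["^", "o", "#"]
print(decode(pattern, SHARED_CODE))   # I LOVE YOU  (meaning comes from the shared code)
print(decode(pattern, {}))            # ? ? ?       (same pattern, no code, no message)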
While a required arrangement (such as figure 2) might arise by chance,2 its rules of interpretation cannot, since the rules for coding and decoding are likewise non-material, an abstraction, and therefore can
only be formulated and understood by an intelligence. 3 Neither can the specification for arriving at the particular
arrangement in the first place arise by chance, again because the rules for the specificity (the language that determines the arrangement of the letters) cannot arise from any property of matter. Thus these rules are also the work of intelligence, or mind. Information, therefore, cannot arise from inanimate matter by chance.

However, a living organism requires information
to function. This is because a living organism requires carefully specified materials and processes, not only for itself, but for
its replication. In fact, reproduction is part of the definition of a living thing. Replication assumes instructions for the process
of building the replicant from scratch, all the while maintaining a functioning organism, and thus needs s