
Of particular interest in our study of the universe is the existence of Great Walls and the Great Attractor. These are thought to be extremely long, yet thin, alignments of galaxies, clusters of galaxies and superclusters, assembled by large quantities of dark matter, and they might represent a geometric structure congruent with the infalling strings from the superverse into the black hole we are within. They also represent topological defects left over from the early universe, early
connections still maintained, perhaps reflected in ancient black hole wormhole tunnel
entanglements that still link distant parts of the universe which were once adjacent (and still are in
other layers.) This reflects the universe's existence as a quantum particle, subject to quantum
nonlocality. They expand in length as the universe expands, but remain narrow in width and
height. The parallel string-like timelines of the Hopf Fibration universe we are within may also
represent different states of the Universal string we reside upon (time proceeds lengthwise down
the string and space-time is any given point upon it.) The different timeline fibers of the Hopf
Fibration are actually all one universal string that loops around from big bounce to big bounce
(see image.) This is also reflected in the one-dimensional Möbius strip, which is our universe at its most basic level, expanding to a torus and later to a hypersphere (the 4th dimension being time.) The twist of the Möbius strip and the centers of the torus and the hypersphere are where the big bang
occurred. Since time is the fourth dimension of the hypersphere or glome, we exist on its three
dimensional surface, all equidistant from the center, which is where the big bang and bounces
occur. It's quite interesting and no accident that the volume of a torus and the surface area of the
hypersphere are the same, just like how the Hopf Fibration looks very much like a torus. This
should give you an idea of why our universe can exist in all these shapes all across its timelines
(remember that all times exist simultaneously.) As mentioned in Origin 12, this is the region
which flips from black hole to white hole as polarity reverses between the different members of
the quadverse and we are either expanding or contracting, and either accumulating dark energy
or dark matter in a higher ratio to the other. The reason the Great Wall and Great Attractor were
assembled so quickly is precisely because the universe is cyclical and information for their
assemblage isn't lost, but maintained in the cosmic DNA of the universe, which was imparted
upon it by the superverse. This is why it is a fractal representative of the superverse and the
omniverse, as this cosmic DNA gets passed down to all baby universes/quadverses. The other
reason why this happened with such rapidity is that dark matter, only influenced by gravity, can
proceed at faster than c speeds, and therefore experiences negative time (it is actually matter
from the antiverse, just like our matter is their dark matter and experiences negative time there).
The transfer and conversion of matter to dark matter in both directions is what ultimately causes
the balanced cyclical nature of the quadverse, as this transference occurs between the universe
and the antiverse, and between the mirrorverse and the antimirrorverse. This brings to bear the Aharonov
Effect, and the future postselects the past on the macro level, just as is the case on the quantum
level. Bolstering this idea is the recent possible discovery of the sterile neutrino, which also
experiences time in both directions, because the only force which impacts it is gravity, which
exists in all dimensions, in a looping form (just like time), unlike the other forces, which are limited
strings.
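
As a quick check of the torus/hypersphere claim above: the volume of a solid torus with tube radius r and central radius R is 2π²Rr², while the three-dimensional "surface" volume of a glome (3-sphere) of radius r is 2π²r³, so the two expressions coincide when R = r. A minimal Python sketch (the function and variable names are mine, chosen purely for illustration):

```python
import math

def torus_volume(R, r):
    """Volume of a solid torus: V = 2 * pi^2 * R * r^2."""
    return 2 * math.pi**2 * R * r**2

def glome_surface_volume(r):
    """3-volume of the boundary of a 4-ball (the glome, or 3-sphere): S = 2 * pi^2 * r^3."""
    return 2 * math.pi**2 * r**3

r = 1.0
print(torus_volume(R=r, r=r))      # 19.7392...
print(glome_surface_volume(r))     # 19.7392...  -- identical when R == r
```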

If, as mentioned in Origin 13, our universe evolved from 1D to 2D to 3D and will one day reach
4D, the question then becomes what will be the nature of this fourth dimension? Will it be an
internally manufactured dimension, like the other three spatial dimensions, or will it be
hyperspace, from the omniverse? I believe it will be the latter. Why? Because 3D space is
stable, and once our universe expands to the extent needed to create the fourth dimension, it will
be so cold that it will reach the below-absolute-zero temperatures needed to interface with the
omniverse. This, in turn, will cause explosive inflation (inflation phase 2), which will expand the
universe within the omniverse, until it reaches the Cauchy Horizon of the black hole we reside
within, and then it will bounce back just as explosively, and contraction will begin. In inflation
phase 2, no alternate timelines will be manufactured, unlike the first inflation, because when the
universe was 2D there was no gravitation. In 3D space, we have gravitation, so that will keep
new emergent timelines from forming. The situation will be the reverse for the antiverse; when
we go from 3D to 4D space and reach inflation phase 2, hit the Cauchy Horizon and contract
towards our next big bounce with converging timelines, it will be big bouncing and reaching
inflation phase 1 and creating the parallel timelines in 2+1, just prior to phase transitioning to 3+1
(the "dark energy" era.) As it relates to the individual members of the quadverse, both the
universe and the mirrorverse are always in sync, while the antiverse and antimirrorverse are
always in sync also. The reason for this four-way balance is that our universe was generated with a small imbalance favoring matter over antimatter and regular matter over mirror matter; the other components have similar imbalances, but going in the other direction.

The relationship between gravity and time is quite interesting. Besides both looping around and maintaining the cyclical and balanced nature of the universe and quadverse (and omniverse, for that matter), gravity/dark energy is not only responsible for the creation of the third dimension over entropy/time..... the rate of time also responds to the presence of gravitation. During the 1D
and 2D phases of the universe, when gravity waves do not exist, time proceeds very slowly
(maybe remains static), until the universe cools sufficiently to phase transition to 3D, inflation
occurs with the polarity flip at the central black hole / white hole and the dark energy to dark
matter ratio increases, parallel time lines are created and gravity emerges. In 3D and 4D time
proceeds much more rapidly, since gravity is now present (but separated among the various
parallel timelines) and the universe expands ever more quickly in the dark energy era and even
faster with explosive expansion (inflation phase 2) during the phase change to 4D space. At this
time, to an outside observer, the universe would finally become visible as a one-dimensional line
expanding from the center towards the event horizon of the parent black hole from both ends.
Before this happened, it was only a point particle (although, since all times exist together,
perhaps it will always appear as a line frozen in the imaginary time dimension of the omniverse--
all the parallel timelines will be superimposed upon each other, in the fashion of quantum
superposition-- as will the other members of the quadverse be visible as single dimensional
lines.) Besides the black hole and superverse properties mentioned in Origin 13 which impact the
physical properties of the baby universe/quadverse, the actual size of the parent black hole and
thus the width of the Cauchy Horizon, will also impact the length of each oscillation. Time proceeds
even more quickly when the universe rebounds off the Cauchy Horizon from the parent black hole
(the macro version of the strong force), as the gravitational waves become much more
concentrated as they bounce off of it and time proceeds rapidly as the universe starts to contract,
the polarity at the center black hole / white hole flips, the dark matter to dark energy ratio
increases, until finally the phase transition back to 3D occurs as the universe heats up and keeps
contracting with more dark matter, the time lines merge and gravity increases even more with
deflation until we reach the 1D/2D wall and time slows to a crawl (or stops), gravity disappears
and the universe big bounces at 10 Planck lengths and the cycle starts all over again. In the
mirrorverse, this is synchronized, while it occurs in reverse in the antiverse and the
antimirrorverse, where time itself is reversed. Note also, that gravity/time/dark matter/dark energy
is conserved in the quadverse (and the omniverse in general) as an increase in rate or quantity in
one component results in a decrease in one of the other components of the quadverse (actually
it's 2 vs 2) since, after all, the universe and mirrorverse experience time in reverse from the
antiverse and the antimirrorverse. The structure of this quadverse in the omniverse isn't actually
four lines, it's a double double helix (thus my cosmic DNA reference earlier-- yet another example
of fractality!) Gravity and EM create the twists and turns to produce this structure. Consider the
four dimensions to each be base pairs of the cosmic DNA (for a total of 8 D, 6+2), connected to
each other through the central black hole / white hole which keeps reversing polarity at different
phases of the cycle. These wormhole connections are a cosmic fractal representation of the
chemical bonds between the base pairs of the DNA double helix..... we actually have two double helices, with the universe and mirrorverse in sync, as are the antiverse and the antimirrorverse,
which all exist within the cosmically fractal 4+1 omniverse (this is exactly why our universe
reaches a limit of 4+1 in its own dimensions before starting to contract.)

Hey that new science discovery might be technicolor!

I love technicolor, so needless to say I'm enthusiastic about this if it proves to be correct. I remember mentioning it way back in Origin 1 last year, as I hoped it would replace the Higgs.... hopefully, this is the first step towards supplanting the Higgs Boson.
Well, chapter one was written about a year ago, and in it I mention a theory called technicolor, from which I theorized that our dimensions and mass emerged... instead of from the Higgs Boson, which is what conventional physics has assumed.... I just think technicolor makes much more sense and is a much more elegant theory, and it is a structure of reality based on color theory.

Well, basically I analogized the three primary colors to the three spatial dimensions and time to the background.... and then you can also construct three negative spatial dimensions, which are represented by the complementary primary colors, and a similar complementary time dimension.
Red, green and blue (RGB) are the additive primaries; the complementary subtractive primaries are cyan, magenta and yellow. Black and white represent time and complementary time. The complementary dimensions make up the antiverse and the antimirrorverse.
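
The additive/subtractive pairing above can be spelled out concretely; this is just ordinary color arithmetic, offered as a small illustrative sketch and not as part of the physics argument itself:

```python
# Complement of an 8-bit RGB color: subtract each channel from white (255, 255, 255).
ADDITIVE_PRIMARIES = {"red": (255, 0, 0), "green": (0, 255, 0), "blue": (0, 0, 255)}

def complement(rgb):
    return tuple(255 - channel for channel in rgb)

for name, rgb in ADDITIVE_PRIMARIES.items():
    print(f"{name:5s} -> {complement(rgb)}")
# red   -> (0, 255, 255)   cyan
# green -> (255, 0, 255)   magenta
# blue  -> (255, 255, 0)   yellow
# Likewise white (255, 255, 255) and black (0, 0, 0) are each other's complements.
```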

It just occurred to me how the three dimensions of space are so similar to the three primary colors and how time could be similar to the background upon which it was built. Note that in QCD, the color charge effect becomes nil outside of the particle..... thus, anyone in the superverse would not see
our dimensions (they have their own dimensions that arise from their own cosmic color charge),
but on the inside, we are subject to them and perceive them as dimensions.

Well, there are two possibilities.

One is just two dimensions, which would be a particle and its complement; if you've seen a color wheel, you know it's the color opposite to it on the wheel. The other possibility is 3 particles, in which case you have the primary colors: each color represents one third of the charge of the particle, and the colors correspond to color charge. You have to imagine it as a rubber band.... within the rubber band they can move freely, but once they reach the edge and start trying to get out, the rubber band becomes tight and pushes them back in.

It is how I also picture the universe.... with gravity taking the place of this force on a universal scale and the dimensions taking the place of color charge. Once the universe expands to the Cauchy Horizon, it "bounces back." Notice the fractal representation of quantum lattice and spin
networks-- this shows that cosmic DNA replicates itself in the baby universe and thus they are
made in the image of the omniverse itself.
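
The rubber-band picture of color confinement above has a standard quantitative stand-in: a Cornell-type potential, with a Coulomb-like attraction at short range plus a linearly rising term that makes separation ever more costly. The sketch below uses illustrative values (αs ≈ 0.3, string tension ≈ 0.9 GeV/fm) and is only meant to show the shape of the confining force, not any of the universal-scale claims made here.

```python
HBARC = 0.1973      # GeV * fm (conversion factor)
ALPHA_S = 0.3       # illustrative strong coupling
SIGMA = 0.9         # illustrative string tension, GeV per fm

def cornell_potential(r_fm):
    """Cornell-type quark-antiquark potential: short-range attraction plus a
    linearly rising 'rubber band' term that enforces confinement."""
    return -(4.0 / 3.0) * ALPHA_S * HBARC / r_fm + SIGMA * r_fm

for r in (0.1, 0.5, 1.0, 2.0):
    print(f"r = {r:3.1f} fm   V = {cornell_potential(r):+6.2f} GeV")
# The linear term dominates at large r, so pulling the quarks apart costs ever
# more energy -- the quantitative version of the tightening rubber band.
```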

This also works with Calabi-Yau manifolds, which are six dimensional, as each manifold would be
constructed of the three additive primary spatial dimensions plus the three subtractive primary
spatial dimensions.

This is the first step towards a gravity-strong force unification, to match electro-weak unification....
so instead of 4 forces, we'd have 2 x 2 (just like the quadverse arrangement..... more fractality!)

Universes with additional dimensions can be created based on this framework by adding in resonances, i.e. higher- or lower-energy versions that exist on different energy levels.... like
the electron, muon and tauon, for example. Universes with the same dimensions, like parallel
timeverses and mirrorverses can also exist on different energy levels.

The renowned Kip Thorne has created a solution to the geometry of two colliding black holes
which looks suspiciously like the Hopf Fibration and E8, as well as the toroidal model of the
universe. It is therefore theorized that baby universes can be created when two black holes
collide and merge, resulting in a warping of space and time that creates the shape of the baby
universe that lies within. The physical properties and laws of the universe are determined by the
spin, charge, cosmic color charge (aka dimensions, which may also arise from infalling quark
gluon plasma and cosmic strings) and other properties of the parent black holes as well as the
results of the collision. Both parent black holes provide cosmic DNA which goes into producing
the baby universe. The fractal representation of this also resembles the structure of large
galaxies, therefore it is also theorized that these baby universes are produced at the core of these
galaxies where the supermassive black hole exists, during the quasar-active stage of its life cycle.
As mentioned above, this also shows how quantum lattice and spin networks are replicated
throughout the omniverse, as cosmic DNA and gravity mold not only the structures within
universes, but the omniverse itself.

The interesting thing about the quantum mirror analogy is it reminds me of the "Funhouse
Mirrors" analogy I used in describing parallel time universes-- basically, they are multiple images
of the same thing, distorted by various gravitational effects. But they are really reflections of the
same universe in superpositional states with itself.

Physicists discover new way to visualize warped space and time


April 11th, 2011 in Physics / General Physics


Two doughnut-shaped vortexes ejected by a pulsating black hole. Also shown at the center are
two red and two blue vortex lines attached to the hole, which will be ejected as a third doughnut-
shaped vortex in the next pulsation. Credit: The Caltech/Cornell SXS Collaboration

(PhysOrg.com) -- When black holes slam into each other, the surrounding space and time surge
and undulate like a heaving sea during a storm. This warping of space and time is so complicated
that physicists haven't been able to understand the details of what goes on -- until now.

"We've found ways to visualize warped space-time like never before," says Kip Thorne, Feynman
Professor of Theoretical Physics, Emeritus, at the California Institute of Technology (Caltech).

By combining theory with computer simulations, Thorne and his colleagues at Caltech, Cornell
University, and the National Institute for Theoretical Physics in South Africa have developed
conceptual tools they've dubbed tendex lines and vortex lines.

Using these tools, they have discovered that black-hole collisions can produce vortex lines that
form a doughnut-shaped pattern, flying away from the merged black hole like smoke rings. The
researchers also found that these bundles of vortex lines—called vortexes—can spiral out of the
black hole like water from a rotating sprinkler.

The researchers explain tendex and vortex lines—and their implications for black holes—in a
paper that's published online on April 11 in the journal Physical Review Letters.

These are two spiral-shaped vortexes (yellow) of whirling space sticking out of a black hole, and
the vortex lines (red curves) that form the vortexes. Credit: The Caltech/Cornell SXS
Collaboration
Tendex and vortex lines describe the gravitational forces caused by warped space-time. They are
analogous to the electric and magnetic field lines that describe electric and magnetic forces.
Tendex lines describe the stretching force that warped space-time exerts on everything it
encounters. "Tendex lines sticking out of the moon raise the tides on the earth's oceans," says
David Nichols, the Caltech graduate student who coined the term "tendex." The stretching force
of these lines would rip apart an astronaut who falls into a black hole.
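
The Newtonian limit of the stretching that tendex lines describe is the ordinary tidal acceleration, Δa ≈ 2GMr/d³ across an object of radius r at distance d from a mass M. A quick order-of-magnitude check for the Moon-raised tides mentioned above, using standard rounded values (this is only an illustrative estimate, not the authors' relativistic calculation):

```python
G = 6.674e-11          # m^3 kg^-1 s^-2
M_MOON = 7.35e22       # kg
D_EARTH_MOON = 3.84e8  # m, mean Earth-Moon distance
R_EARTH = 6.37e6       # m

# Differential (tidal) acceleration across one Earth radius: delta_a ~ 2*G*M*r / d^3
delta_a = 2 * G * M_MOON * R_EARTH / D_EARTH_MOON**3
print(f"{delta_a:.1e} m/s^2")   # ~1e-6 m/s^2 -- the tiny stretch that raises the ocean tides
```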

Vortex lines, on the other hand, describe the twisting of space. If an astronaut's body is aligned
with a vortex line, she gets wrung like a wet towel.

When many tendex lines are bunched together, they create a region of strong stretching called a
tendex. Similarly, a bundle of vortex lines creates a whirling region of space called a vortex.
"Anything that falls into a vortex gets spun around and around," says Dr. Robert Owen of Cornell
University, the lead author of the paper.

Tendex and vortex lines provide a powerful new way to understand black holes, gravity, and the
nature of the universe. "Using these tools, we can now make much better sense of the
tremendous amount of data that's produced in our computer simulations," says Dr. Mark Scheel,
a senior researcher at Caltech and leader of the team's simulation work.

Using computer simulations, the researchers have discovered that two spinning black holes
crashing into each other produce several vortexes and several tendexes. If the collision is head-
on, the merged hole ejects vortexes as doughnut-shaped regions of whirling space, and it ejects
tendexes as doughnut-shaped regions of stretching. But if the black holes spiral in toward each
other before merging, their vortexes and tendexes spiral out of the merged hole. In either case—
doughnut or spiral—the outward-moving vortexes and tendexes become gravitational waves—the
kinds of waves that the Caltech-led Laser Interferometer Gravitational-Wave Observatory (LIGO)
seeks to detect.

"With these tendexes and vortexes, we may be able to much more easily predict the waveforms
of the gravitational waves that LIGO is searching for," says Yanbei Chen, associate professor of
physics at Caltech and the leader of the team's theoretical efforts.

Additionally, tendexes and vortexes have allowed the researchers to solve the mystery behind the
gravitational kick of a merged black hole at the center of a galaxy. In 2007, a team at the
University of Texas in Brownsville, led by Professor Manuela Campanelli, used computer
simulations to discover that colliding black holes can produce a directed burst of gravitational
waves that causes the merged black hole to recoil—like a rifle firing a bullet. The recoil is so
strong that it can throw the merged hole out of its galaxy. But nobody understood how this
directed burst of gravitational waves is produced.

Now, equipped with their new tools, Thorne's team has found the answer. On one side of the
black hole, the gravitational waves from the spiraling vortexes add together with the waves from
the spiraling tendexes. On the other side, the vortex and tendex waves cancel each other out.
The result is a burst of waves in one direction, causing the merged hole to recoil.

"Though we've developed these tools for black-hole collisions, they can be applied wherever
space-time is warped," says Dr. Geoffrey Lovelace, a member of the team from Cornell. "For
instance, I expect that people will apply vortex and tendex lines to cosmology, to black holes
ripping stars apart, and to the singularities that live inside black holes. They'll become standard
tools throughout general relativity."

The team is already preparing multiple follow-up papers with new results. "I've never before
coauthored a paper where essentially everything is new," says Thorne, who has authored
hundreds of articles. "But that's the case here."

More information: Physical Review Letters paper: "Frame-dragging vortexes and tidal tendexes
attached to colliding black holes: Visualizing the curvature of spacetime"
Provided by California Institute of Technology

"Physicists discover new way to visualize warped space and time." April 11th, 2011.
http://www.physorg.com/news/2011-04-physicists-visualize-warped-space.html

Atom and its quantum mirror image


April 5, 2011 By Florian Aigner


Towards the mirror or away from the mirror? Physicists create atoms in quantum superposition
states.

A team of physicists experimentally produces quantum-superpositions, simply using a mirror.


Standing in front of a mirror, we can easily tell apart ourselves from our mirror image. The mirror
does not affect our motion in any way. For quantum particles, this is much more complicated. In a
spectacular experiment in the labs of Heidelberg University, a group of physicists from Heidelberg University, together with colleagues at TU Munich and TU Vienna, extended a
gedanken experiment by Einstein and managed to blur the distinction between a particle and its
mirror image. The results of this experiment have now been published in the journal Nature
Physics.

Emitted Light, Recoiling Atom

When an atom emits light (i.e. a photon) into a particular direction, it recoils in the opposite
direction. If the photon is measured, the motion of the atom is known too. The scientists placed atoms very close to a mirror. In this case, there are two possible paths for any photon travelling to the observer: it could have been emitted directly in the direction of the observer, or it could have travelled in the opposite direction and then been reflected in the mirror. If there is no way of distinguishing between these two scenarios, the motion of the atom is not determined; the atom moves in a superposition of both paths.

“If the distance between the atom and the mirror is very small, it is physically impossible to
distinguish between these two paths,” Jiri Tomkovic, PhD student at Heidelberg explains. The
particle and its mirror image cannot be clearly separated any more. The atom moves towards the
mirror and away from the mirror at the same time. This may sound paradoxical, and it is certainly impossible in classical physics for macroscopic objects, but in quantum physics such superpositions are a well-known phenomenon. “This uncertainty about the state of the atom does not mean that the measurement lacks precision”, Jörg Schmiedmayer (TU Vienna) emphasizes. “It is a fundamental property of quantum physics: The particle is in both of the two possible states simultaneously, it is in a superposition.” In the experiment the two motional states of the atom – one moving towards the mirror and the other moving away from the mirror – are then combined using Bragg diffraction from a grating made of laser light. By observing interference, it can be directly shown that the atom has indeed been traveling both paths at once.
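
For scale, the recoil the experiment relies on is tiny: a single emitted photon changes the atom's velocity by v = h/(λm), and the two superposed motional states (towards and away from the mirror) differ by roughly twice that. The numbers below are purely illustrative assumptions, since the article does not give the atomic species or wavelength; rubidium-87 and its 780 nm line are used as a stand-in:

```python
H = 6.626e-34      # Planck constant, J*s
AMU = 1.661e-27    # atomic mass unit, kg

wavelength = 780e-9   # m  (assumed: rubidium D2 line, for illustration only)
mass = 87 * AMU       # kg (assumed: 87Rb)

recoil_velocity = H / (wavelength * mass)    # photon momentum h/lambda divided by atomic mass
print(f"{recoil_velocity * 1e3:.1f} mm/s")   # ~5.9 mm/s per emitted photon
```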
On Different Paths at the Same Time

This is reminiscent of the famous double-slit experiment, in which a particle hits a plate with two
slits and passes through both slits simultaneously, due to its wave-like quantum mechanical
properties. Einstein already discussed that this can only be possible if there is no way to determine which path the particle actually chose, not even by precise measurements of any tiny recoil of the double-slit plate itself. As soon as there is even a theoretically possible way of determining the path of the particle, the quantum superposition breaks down. “In our case, the photons play a role similar to the double slit”, Markus Oberthaler (Heidelberg University) explains. “If the light can, in principle, tell us about the motion of the atom, then the motion is unambiguously determined. Only when it is fundamentally undecidable can the atom be in a superposition state, combining both possibilities.” And this fundamental undecidability is
guaranteed by the mirror which takes up the photon momentum.

Quantum Effect – Using Only a Mirror

Probing under which conditions such quantum-superpositions can be created has become very
important in quantum physics. Jörg Schmiedmayer and Markus Oberthaler came up with the idea
for this experiment already a few years ago. “The fascinating thing about this experiment”, the
scientists say, “is the possibility of creating a quantum superposition state, using only a mirror,
without any external fields.” In a very simple and natural way the distinction between the particle
and its mirror image becomes blurred, without complicated operations carried out by the
experimenter.

Provided by Vienna University of Technology

http://www.physorg.com/news/2011-04-atom-quantum-mirror-image.html

http://www.newscient...true&print=true



Mystery signal at Fermilab hints at 'technicolour' force

* 19:46 07 April 2011 by Amanda Gefter



Hints of new physics at the Tevatron (Image: Fermilab)


The physics world is buzzing with news of an unexpected sighting at Fermilab's Tevatron collider
in Illinois – a glimpse of an unidentified particle that, should it prove to be real, will radically alter
physicists' prevailing ideas about how nature works and how particles get their mass.

The candidate particle may not belong to the standard model of particle physics, physicists' best
theory for how particles and forces interact. Instead, some say it might be the first hint of a new
force of nature, called technicolour, which would resolve some problems with the standard model
but would leave others unanswered.

The observation was made by Fermilab's CDF experiment, which smashes together protons and
antiprotons 2 million times every second. The data, collected over a span of eight years, looks at
collisions that produce a W boson, the carrier of the weak nuclear force, and a pair of jets of
subatomic particles called quarks.

Physicists predicted that the number of these events – producing a W boson and a pair of jets –
would fall off as the mass of the jet pair increased. But the CDF data showed something strange
(see graph): a bump in the number of events when the mass of the jet pair was about 145 GeV.
Just a fluke?

That suggests that the additional jet pairs were produced by a new particle weighing about 145
GeV. "We expected to see a smooth shape that decreases for increasing values of the mass,"
says CDF team member Pierluigi Catastini of Harvard University in Cambridge, Massachusetts.
"Instead we observe an excess of events concentrated in one region, and it seems to be a bump
– the typical signature of a particle."

Intriguing as it sounds, there is a 1 in 1000 chance that the bump is simply a statistical fluke.
Those odds make it a so-called three-sigma result, falling short of the gold standard for a
discovery – five sigma, or a 1 in a million chance of error. "I've seen three-sigma effects come
and go," says Kenneth Lane of Boston University in Massachusetts. Still, physicists are 99.9 per
cent sure it is not a fluke, so they are understandably anxious to pin down the particle's identity.
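
The "three sigma" and "five sigma" figures quoted above correspond to one-sided Gaussian tail probabilities; the article's "1 in 1000" and "1 in a million" are rounded versions of them. A small sketch of the conversion:

```python
from scipy.stats import norm

for n_sigma in (3, 5):
    p = norm.sf(n_sigma)   # one-sided tail probability of an n-sigma fluctuation
    print(f"{n_sigma} sigma: p = {p:.1e}  (about 1 in {1 / p:,.0f})")
# 3 sigma: ~1.3e-3, roughly 1 in 740        -- the "1 in 1000" quoted above
# 5 sigma: ~2.9e-7, roughly 1 in 3.5 million -- the usual discovery threshold
```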

Most agree that the mysterious particle is not the long-sought Higgs boson, believed by many to
endow particles with mass. "It's definitely not a Higgs-like object," says Rob Roser, a CDF
spokesperson at Fermilab. If it were, the bump in the data would be 300 times smaller. What's
more, a Higgs particle should most often decay into bottom quarks, which do not seem to make
an appearance in the Fermilab data.
Fifth force

"There's no version of a Higgs in any model that I know of where the production rate would be
this large," says Lane. "It has to be something else." And Lane is confident that he knows exactly
what it is.

Just over 20 years ago, Lane, along with Fermilab physicist Estia Eichten, predicted that
experiments would see just such a signal. Lane and Eichten were working on a theory known as
technicolour, which proposes the existence of a fifth fundamental force in addition to the four
already known: gravity, electromagnetism, and the strong and weak nuclear forces. Technicolour
is very similar to the strong force, which binds quarks together inside protons and neutrons, only it
operates at much higher energies. It is also able to give particles their mass – rendering the
Higgs boson unnecessary.
The new force comes with a zoo of new particles. Lane and Eichten's model predicted that a
technicolour particle called a technirho would often decay into a W boson and another particle
called a technipion.

In a new paper, Lane, Eichten and Fermilab physicist Adam Martin suggest that a technipion with
a mass of about 160 GeV could be the mysterious particle producing the two jets. "If this is real, I
think people will give up on the idea of looking for the Higgs and begin exploring this rich world of
new particles," Lane says.
Future tests

But if technicolour is correct, it would not be able to resolve all the questions left unanswered by
the standard model. For example, physicists believe that at the high energies found in the early
universe, the fundamental forces of nature were unified into a single superforce. Supersymmetry,
physicists' leading contender for a theory beyond the standard model, paves a way for the forces
to unite at high energies, but technicolour does not.

Figuring out which theory – if either – is right means combing through more heaps of data to
determine if the new signal is real. Budget constraints mean the Tevatron will shut down this year,
but fortunately the CDF team, which made the find, is already "sitting on almost twice the data
that went into this analysis", says Roser. "Over the coming months we will redo the analysis with
double the data."

Meanwhile, DZero, Fermilab's other detector, will analyse its own data to provide independent
corroboration or refutation of the bump. And at CERN's Large Hadron Collider near Geneva,
Switzerland, physicists will soon collect enough data to perform their own search. In their paper,
Lane and his colleagues suggest ways to look for other techniparticles.

"I haven't been sleeping very well for the past six months," says Lane, who found out about the
bump long before the team went public with the result. "If this is what we think it is, it's a whole
new world beyond quarks and leptons. It'll be great! And if it's not, it's not."
Journal reference: arxiv.org/abs/1104.0699

Invariant Mass Distribution of Jet Pairs Produced in Association with a W boson in ppbar
Collisions at sqrt(s) = 1.96 TeV
CDF Collaboration, T. Aaltonen, et al
(Submitted on 4 Apr 2011)
We report a study of the invariant mass distribution of jet pairs produced in association with a W
boson using data collected with the CDF detector which correspond to an integrated luminosity of
4.3 fb^-1. The observed distribution has an excess in the 120-160 GeV/c^2 mass range which is
not described by current theoretical predictions within the statistical and systematic uncertainties.
In this letter we report studies of the properties of this excess.
Comments: 8 pages, 2 figures
Subjects: High Energy Physics - Experiment (hep-ex)
Report number: FERMILAB-PUB-11-164-E
Cite as: arXiv:1104.0699v1 [hep-ex]

Submission history
From: Alberto Annovi
[v1] Mon, 4 Apr 2011 22:08:31 GMT (119kb,D)

http://en.wikipedia....icolor_(physics)

Technicolor theories are models of physics beyond the standard model that address electroweak
symmetry breaking, the mechanism through which elementary particles acquire masses. Early
technicolor theories were modelled on quantum chromodynamics (QCD), the "color" theory of the
strong nuclear force, which inspired their name.

Instead of introducing elementary Higgs bosons, technicolor models hide electroweak symmetry
and generate masses for the W and Z bosons through the dynamics of new gauge interactions.
Although asymptotically free at very high energies, these interactions must become strong and
confining (and hence unobservable) at lower energies that have been experimentally probed.
This dynamical approach is natural and avoids the hierarchy problem of the Standard Model.[1]

In order to produce quark and lepton masses, technicolor has to be "extended" by additional
gauge interactions. Particularly when modelled on QCD, extended technicolor is challenged by
experimental constraints on flavor-changing neutral current and precision electroweak
measurements. It is not known what the extended technicolor dynamics is.

Much technicolor research focuses on exploring strongly-interacting gauge theories other than
QCD, in order to evade some of these challenges. A particularly active framework is "walking"
technicolor, which exhibits nearly-conformal behavior caused by an infrared fixed point with
strength just above that necessary for spontaneous chiral symmetry breaking. Whether walking
can occur and lead to agreement with precision electroweak measurements is being studied
through non-perturbative lattice simulations.[2]

Experiments at the Large Hadron Collider are expected to discover the mechanism responsible
for electroweak symmetry breaking, and will be critical for determining whether the technicolor
framework provides the correct description of nature.


Introduction

The mechanism for the breaking of electroweak gauge symmetry in the Standard Model of
elementary particle interactions remains unknown. The breaking must be spontaneous, meaning
that the underlying theory manifests the symmetry exactly (the gauge-boson fields are massless
in the equations of motion), but the solutions (the ground state and the excited states) do not. In
particular, the physical W and Z gauge bosons become massive. This phenomenon, in which the
W and Z bosons also acquire an extra polarization state, is called the "Higgs mechanism".
Despite the precise agreement of the electroweak theory with experiment at energies accessible
so far, the necessary ingredients for the symmetry breaking remain hidden, yet to be revealed at
higher energies.

The simplest mechanism of electroweak symmetry breaking introduces a single complex field and
predicts the existence of the Higgs boson. Typically, the Higgs boson is "unnatural" in the sense
that quantum mechanical fluctuations produce corrections to its mass that lift it to such high
values that it cannot play the role for which it was introduced. Unless the Standard Model breaks
down at energies less than a few TeV, the Higgs mass can be kept small only by a delicate fine-
tuning of parameters.

Technicolor avoids this problem by hypothesizing a new gauge interaction coupled to new
massless fermions. This interaction is asymptotically free at very high energies and becomes
strong and confining as the energy decreases to the electroweak scale of roughly 250 GeV.
These strong forces spontaneously break the massless fermions' chiral symmetries, some of
which are weakly gauged as part of the Standard Model. This is the dynamical version of the
Higgs mechanism. The electroweak gauge symmetry is thus broken, producing masses for the W
and Z bosons.

The new strong interaction leads to a host of new composite, short-lived particles at energies
accessible at the Large Hadron Collider (LHC). This framework is natural because there are no
elementary Higgs bosons and, hence, no fine-tuning of parameters. Quark and lepton masses
also break the electroweak gauge symmetries, so they, too, must arise spontaneously. A
mechanism for incorporating this feature is known as extended technicolor. Technicolor and
extended technicolor face a number of phenomenological challenges. Some of them can be
addressed within a class of theories known as walking technicolor.
Early technicolor

Technicolor is the name given to the theory of electroweak symmetry breaking by new strong
gauge-interactions whose characteristic energy scale ΛTC is the weak scale itself, ΛTC ≅ FEW ≡ 246 GeV. The guiding principle of technicolor is "naturalness": basic physical phenomena should not require fine-tuning of the parameters in the Lagrangian that describes them. What constitutes fine-tuning is to some extent a subjective matter, but a theory with elementary scalar particles typically is very finely tuned (unless it is supersymmetric). The quadratic divergence in the scalar's mass requires adjustments of a part in (Mbare/Mphysical)^2, where Mbare is the cutoff of the theory, the energy scale at which the theory changes in some essential way. In the standard electroweak model with Mbare ∼ 10^15 GeV (the grand-unification mass scale), and with the Higgs boson mass Mphysical = 100–500 GeV, the mass is tuned to at least a part in 10^25.
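
A back-of-the-envelope check of the tuning quoted above, using the cutoff and Higgs-mass range given in the text:

```python
M_BARE = 1e15                      # GeV, grand-unification cutoff quoted above
for m_physical in (100.0, 500.0):  # GeV, the Higgs mass range quoted above
    tuning = (M_BARE / m_physical) ** 2
    print(f"M_physical = {m_physical:5.0f} GeV -> cancellation to one part in {tuning:.0e}")
# Roughly 4e24 to 1e26, i.e. of order the "part in 10^25" stated in the text.
```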

By contrast, a natural theory of electroweak symmetry breaking is an asymptotically-free gauge theory with fermions as the only matter fields. The technicolor gauge
group GTC is often assumed to be SU(NTC). Based on analogy with quantum
chromodynamics (QCD), it is assumed that there are one or more doublets of
massless Dirac "technifermions" transforming vectorially under the same complex
representation of GTC, TiL,R = (Ui,Di)L,R, i = 1,2, … ,Nf/2. Thus, there is a chiral
symmetry of these fermions, e.g., SU(Nf)L ⊗ SU(Nf)R, if they all transform according to the same complex representation of GTC. Continuing the analogy with QCD, the
running gauge coupling αTC(μ) triggers spontaneous chiral symmetry breaking, the
technifermions acquire a dynamical mass, and a number of massless Goldstone
bosons result. If the technifermions transform under [SU(2) ⊗ U(1)]EW as left-handed
doublets and right-handed singlets, three linear combinations of these Goldstone
bosons couple to three of the electroweak gauge currents.

In 1973 Jackiw and Johnson[3] and Cornwall and Norton[4] studied the possibility that
a (non-vectorial) gauge interaction of fermions can break itself; i.e., is strong enough
to form a Goldstone boson coupled to the gauge current. Using Abelian gauge
models, they showed that, if such a Goldstone boson is formed, it is "eaten" by the
Higgs mechanism, becoming the longitudinal component of the now massive gauge
boson. Technically, the polarization function Π(p2) appearing in the gauge boson
propagator, Δμν = (pμ pν/p2 − gμν)/[p2(1 − g2 Π(p2))], develops a pole at p2 = 0 with
residue F2, the square of the Goldstone boson's decay constant, and the gauge
boson acquires mass M ≅ g F. In 1973, Weinstein[5] showed that composite
Goldstone bosons whose constituent fermions transform in the “standard” way under
SU(2) ⊗ U(1) generate the weak boson masses MW ≅ g FEW/2 and MZ ≅ MW/cos θW.

This standard-model relation is achieved with elementary Higgs bosons in


electroweak doublets; it is verified experimentally to better than 1%. Here, g and g′
are SU(2) and U(1) gauge couplings and tanθW = g′/g defines the weak mixing angle.

The important idea of a new strong gauge interaction of massless fermions at the
electroweak scale FEW driving the spontaneous breakdown of its global chiral
symmetry, of which an SU(2) ⊗ U(1) subgroup is weakly gauged, was first proposed
in 1979 by S. Weinberg[6] and L. Susskind.[7] This "technicolor" mechanism is
natural in that no fine-tuning of parameters is necessary.
Extended technicolor

Elementary Higgs bosons perform another important task. In the Standard Model,
quarks and leptons are necessarily massless because they transform under SU(2) ⊗
U(1) as left-handed doublets and right-handed singlets. The Higgs doublet couples to
these fermions. When it develops its vacuum expectation value, it transmits this
electroweak breaking to the quarks and leptons, giving them their observed masses.
(In general, electroweak-eigenstate fermions are not mass eigenstates, so this
process also induces the mixing matrices observed in charged-current weak
interactions.)

In technicolor, something else must generate the quark and lepton masses. The only
natural possibility, one avoiding the introduction of elementary scalars, is to enlarge
GTC to allow technifermions to couple to quarks and leptons. This coupling is induced
by gauge bosons of the enlarged group. The picture, then, is that there is a large
"extended technicolor" (ETC) gauge group GETC ⊃ GTC in which technifermions,
quarks, and leptons live in the same representations. At one or more high scales
ΛETC, GETC is broken down to GTC, and quarks and leptons emerge as the TC-singlet
fermions. When αTC(μ) becomes strong at scale ΛTC ≅ FEW, the fermionic
condensate forms. (The condensate is the vacuum expectation value of the
technifermion bilinear T̄T. The estimate here is based on naive dimensional analysis of
the quark condensate in QCD, expected to be correct as an order of magnitude.)
Then, the transitions can proceed through the technifermion's dynamical mass by
the emission and reabsorption of ETC bosons whose masses METC ≅ gETC ΛETC are
much greater than ΛTC. The quarks and leptons develop masses given approximately
by

Here, ⟨T̄T⟩ETC is the technifermion condensate renormalized at the ETC boson mass scale,

where γm(μ) is the anomalous dimension of the technifermion bilinear at the scale μ.
The second estimate in Eq. (2) depends on the assumption that, as happens in QCD,
αTC(μ) becomes weak not far above ΛTC, so that the anomalous dimension γm of T̄T is
small there. Extended technicolor was introduced in 1979 by Dimopoulos and
Susskind,[8] and by Eichten and Lane.[9] For a quark of mass mq ≅ 1 GeV, and with
ΛTC ≅ 250 GeV, one estimates ΛETC ≅ 15 TeV. Therefore, assuming that gETC ≳ 1, METC will be at least this large.
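
A rough numerical check of the quoted numbers, under the naive-dimensional-analysis assumption (mine, not spelled out in the text) that the condensate is about 4π ΛTC³ and that mq ≈ ⟨T̄T⟩/ΛETC²:

```python
import math

LAMBDA_TC = 250.0       # GeV
LAMBDA_ETC = 15_000.0   # GeV

condensate = 4 * math.pi * LAMBDA_TC**3   # GeV^3, naive-dimensional-analysis estimate (assumption)
m_q = condensate / LAMBDA_ETC**2          # GeV
print(f"m_q ~ {m_q:.2f} GeV")             # ~0.9 GeV, consistent with the quoted mq ~ 1 GeV
```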

In addition to the ETC proposal for quark and lepton masses, Eichten and Lane
observed that the size of the ETC representations required to generate all quark and
lepton masses suggests that there will be more than one electroweak doublet of
technifermions.[9] If so, there will be more (spontaneously broken) chiral symmetries
and therefore more Goldstone bosons than are eaten by the Higgs mechanism. These
must acquire mass by virtue of the fact that the extra chiral symmetries are also
explicitly broken, by the standard-model interactions and the ETC interactions. These
"pseudo-Goldstone bosons" are called technipions, πT. An application of Dashen's
theorem[10] gives for the ETC contribution to their mass

The second approximation in Eq. (4) assumes that . For FEW ≅ ΛTC ≅ 250 GeV and
ΛETC ≅ 15 TeV, this contribution to MπT is about 50 GeV. Since ETC interactions
generate the quark and lepton masses and the coupling of technipions to quark and lepton pairs, one expects the couplings to be Higgs-like; i.e., roughly proportional to the masses of the quarks and leptons. This means that technipions are expected to decay to the heaviest quark and lepton pairs allowed.

Perhaps the most important restriction on the ETC framework for quark mass
generation is that ETC interactions are likely to induce flavor-changing neutral
current processes such as μ → e γ, KL → μ e, and |Δ S| = 2 and |Δ B| = 2 interactions
that induce K0–K̄0 and B0–B̄0 mixing.[9] The reason is that the algebra of the ETC currents involved in mass generation implies ETC currents which, when written in terms of fermion mass eigenstates, have no reason to conserve flavor. The strongest constraint comes from requiring that ETC interactions mediating K0–K̄0 mixing contribute less than the Standard Model. This implies an effective ΛETC greater than 1000 TeV. The actual ΛETC may be reduced somewhat if CKM-like mixing angle factors are present. If these interactions are CP-violating, as they well may be, the constraint from the ε-parameter is that the effective ΛETC > 10^4 TeV. Such huge ETC mass
scales imply tiny quark and lepton masses and ETC contributions to MπT of at most a
few GeV, in conflict with LEP searches for πT at the Z0.

Extended technicolor is a very ambitious proposal, requiring that quark and lepton
masses and mixing angles arise from experimentally accessible interactions. If there
exists a successful model, it would not only predict the masses and mixings of quarks
and leptons (and technipions), it would explain why there are three families of each:
they are the ones that fit into the ETC representations of q, ℓ and T. It should not be
surprising that the construction of a successful model has proven to be very difficult.
Walking technicolor

Since quark and lepton masses are proportional to the bilinear technifermion
condensate divided by the ETC mass scale squared, their tiny values can be avoided
if the condensate is enhanced above the weak-αTC estimate in Eq. (2).

During the 1980s, several dynamical mechanisms were advanced to do this. In 1981
Holdom suggested that, if αTC(μ) evolves to a nontrivial fixed point in the ultraviolet, with a large positive anomalous dimension γm for the bilinear T̄T, realistic quark and lepton masses could arise with ΛETC large enough to suppress ETC-induced mixing.
[11] However, no example of a nontrivial ultraviolet fixed point in a four-dimensional
gauge theory has been constructed. In 1985 Holdom analyzed a technicolor theory in
which a “slowly varying” αTC(μ) was envisioned.[12] His focus was to separate the
chiral breaking and confinement scales, but he also noted that such a theory could enhance the technifermion condensate and thus allow the ETC scale to be raised. In 1986 Akiba and Yanagida also
considered enhancing quark and lepton masses, by simply assuming that αTC is
constant and strong all the way up to the ETC scale.[13] In the same year Yamawaki,
Bando and Matumoto again imagined an ultraviolet fixed point in a non-
asymptotically free theory to enhance the technifermion condensate.[14]

In 1986 Appelquist, Karabali and Wijewardhana discussed the enhancement of fermion masses in an asymptotically free technicolor theory with a slowly running, or
“walking”, gauge coupling.[15] The slowness arose from the screening effect of a
large number of technifermions, with the analysis carried out through two-loop
perturbation theory. In 1987 Appelquist and Wijewardhana explored this walking
scenario further.[16] They took the analysis to three loops, noted that the walking
can lead to a power law enhancement of the technifermion condensate, and
estimated the resultant quark, lepton, and technipion masses. The condensate
enhancement arises because the associated technifermion mass decreases slowly,
roughly linearly, as a function of its renormalization scale. This corresponds to the
condensate anomalous dimension γm in Eq. (3) approaching unity (see below).[17]

In the 1990s, the idea emerged more clearly that walking is naturally described by
asymptotically free gauge theories dominated in the infrared by an approximate
fixed point. Unlike the speculative proposal of ultraviolet fixed points, fixed points in
the infrared are known to exist in asymptotically free theories, arising at two loops in the beta function provided that the fermion count Nf is large enough. This has been known since the first two-loop computation in 1974 by Caswell.[18] If Nf is close to the value at which asymptotic freedom is lost, the resultant infrared fixed point is weak and reliably accessible in perturbation theory. This weak-
coupling limit was explored by Banks and Zaks in 1982.[19]
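
The two-loop infrared fixed point referred to here can be located explicitly. For SU(N) with Nf fundamental Dirac flavors the standard two-loop coefficients are b0 = 11N/3 − 2Nf/3 and b1 = 34N²/3 − (13N/3 − 1/N)Nf, and the fixed-point coupling is α* = −4πb0/b1 whenever b0 > 0 and b1 < 0. A short sketch (two-loop and scheme-dependent, so the numbers are only indicative):

```python
import math

def two_loop_ir_fixed_point(n_colors, n_flavors):
    """Banks-Zaks two-loop fixed point alpha* = -4*pi*b0/b1 for SU(N) with
    n_flavors fundamental Dirac fermions; returns None if no such zero exists."""
    b0 = 11.0 * n_colors / 3.0 - 2.0 * n_flavors / 3.0
    b1 = 34.0 * n_colors**2 / 3.0 - (13.0 * n_colors / 3.0 - 1.0 / n_colors) * n_flavors
    if b0 <= 0 or b1 >= 0:
        return None
    return -4.0 * math.pi * b0 / b1

for nf in (8, 10, 12, 16):
    alpha_star = two_loop_ir_fixed_point(3, nf)
    print(f"SU(3), Nf = {nf:2d}: alpha* = {alpha_star}")
# Nf = 8 gives None (b1 > 0); the fixed point appears above Nf ~ 8, is strong for
# Nf ~ 10-12 (alpha* ~ 0.75 at Nf = 12), and weakens toward Nf = 16.5, where
# asymptotic freedom is lost -- the pattern described in the text.
```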
The fixed-point coupling αIR becomes stronger as Nf is reduced from that value. Below some critical value Nfc the coupling becomes strong enough (> αχ SB) to break spontaneously the massless technifermions' chiral symmetry. Since the analysis must typically go beyond two-loop perturbation theory, the definition of the running coupling αTC(μ), its fixed point value αIR, and the strength αχ SB necessary for chiral symmetry breaking depend on the particular renormalization scheme adopted. For Nf just below Nfc, the evolution of αTC(μ) is governed by the infrared fixed point and it will evolve slowly (walk) for a range of momenta above the breaking scale ΛTC. To overcome the suppression of the masses of first- and second-generation quarks involved in mixing, this range must extend almost to their ETC scale. Cohen and Georgi argued that γm = 1 is the signal of spontaneous chiral symmetry breaking, i.e., that γm(αχ SB) = 1.[17] Therefore, in the walking-αTC region, γm ≅ 1 and, from Eqs. (2) and (3), the light quark masses are enhanced approximately by METC/ΛTC.

The idea that αTC(μ) walks for a large range of momenta when αIR lies just above αχ
SB was suggested by Lane and Ramana.[20] They made an explicit model, discussed
the walking that ensued, and used it in their discussion of walking technicolor
phenomenology at hadron colliders. This idea was developed in some detail by
Appelquist, Terning and Wijewardhana.[21] Combining a perturbative computation of
the infrared fixed point with an approximation of αχ SB based on the Schwinger-
Dyson equation, they estimated the critical value Nfc and explored the resultant
electroweak physics. Since the 1990s, most discussions of walking technicolor are in
the framework of theories assumed to be dominated in the infrared by an
approximate fixed point. Various models have been explored, some with the
technifermions in the fundamental representation of the gauge group and some
employing higher representations.[22][23][24]

The possibility that the technicolor condensate can be enhanced beyond that discussed in the walking literature has also been considered recently by Luty and Okui under the name "conformal technicolor".[25] They envision an infrared-stable fixed point, but with a very large anomalous dimension for the operator T̄T. It remains
to be seen whether this can be realized, for example, in the class of theories
currently being examined using lattice techniques.
Top quark mass

The walking enhancement described above may be insufficient to generate the measured top quark mass, even for an ETC scale as low as a few TeV. However, this
problem could be addressed if the effective four-technifermion coupling resulting
from ETC gauge boson exchange is strong and tuned just above a critical value.[26]
The analysis of this strong-ETC possibility is that of a Nambu–Jona–Lasinio model with
an additional (technicolor) gauge interaction. The technifermion masses are small
compared to the ETC scale (the cutoff on the effective theory), but nearly constant
out to this scale, leading to a large top quark mass. No fully realistic ETC theory for
all quark masses has yet been developed incorporating these ideas. A related study
was carried out by Miransky and Yamawaki.[27] A problem with this approach is that
it involves some degree of parameter fine-tuning, in conflict with technicolor’s
guiding principle of naturalness.

Finally, it should be noted that there is a large body of closely related work in which
ETC does not generate mt. These are the top quark condensate,[28] topcolor and
top-color-assisted technicolor models,[29] in which new strong interactions are
ascribed to the top quark and other third-generation fermions. As with the strong-ETC
scenario described above, all these proposals involve a considerable degree of fine-
tuning of gauge couplings.
Minimal Walking Models

In 2004 Francesco Sannino and Kimmo Tuominen proposed technicolor models with
technifermions in higher-dimensional representations of the technicolor gauge group.
[23] They argued that these more "minimal" models required fewer flavors of
technifermions in order to exhibit walking behavior, making it easier to pass precision
electroweak tests.

For example, SU(2) and SU(3) gauge theories may exhibit walking with as few as two
Dirac flavors of fermions in the adjoint or two-index symmetric representation. In
contrast, at least eight flavors of fermions in the fundamental representation of SU(3)
(and possibly SU(2) as well) are required to reach the near-conformal regime.[24]

These results continue to be investigated by various methods, including lattice simulations discussed below, which have confirmed the near-conformal dynamics of
these minimal walking models. The first comprehensive effective Lagrangian for
minimal walking models, featuring a light composite Higgs, spin-one states, tree-level
unitarity, and consistency with phenomenological constraints was constructed in
2007 by Foadi, Frandsen, Ryttov and Sannino.[30]
Technicolor on the lattice

Lattice gauge theory is a non-perturbative method applicable to strongly-interacting technicolor theories, allowing first-principles exploration of walking and conformal
dynamics. In 2007, Catterall and Sannino used lattice gauge theory to study SU(2)
gauge theories with two flavors of Dirac fermions in the symmetric representation,
[31] finding evidence of conformality that has been confirmed by subsequent studies.
[32]

As of 2010, the situation for SU(3) gauge theory with fermions in the fundamental
representation is not as clear-cut. In 2007, Appelquist, Fleming and Neil reported
evidence that a non-trivial infrared fixed point develops in such theories when there
are twelve flavors, but not when there are eight.[33] While some subsequent studies
confirmed these results, others reported different conclusions, depending on the
lattice methods used, and there is not yet consensus.[34]

Further lattice studies exploring these issues, as well as considering the consequences of these theories for precision electroweak measurements, are
underway by several research groups.[35]
Technicolor phenomenology

Any framework for physics beyond the Standard Model must conform with precision
measurements of the electroweak parameters. Its consequences for physics at
existing and future high-energy hadron colliders, and for the dark matter of the
universe must also be explored.
Precision electroweak tests

In 1990, the phenomenological parameters S, T, and U were introduced by Peskin and Takeuchi to quantify contributions to electroweak radiative corrections from
physics beyond the Standard Model.[36] They have a simple relation to the
parameters of the electroweak chiral Lagrangian.[37][38] The Peskin-Takeuchi
analysis was based on the general formalism for weak radiative corrections
developed by Kennedy, Lynn, Peskin and Stuart,[39] and alternate formulations also
exist.[40]

The S, T, and U-parameters describe corrections to the electroweak gauge boson propagators from physics Beyond the Standard Model. They can be written in terms
of polarization functions of electroweak currents and their spectral representation as
follows:

where only new, beyond-standard-model physics is included. The quantities are calculated relative to a minimal Standard Model with some chosen reference mass of
the Higgs boson, taken to range from the experimental lower bound of 117 GeV to
1000 GeV where its width becomes very large.[41] For these parameters to describe
the dominant corrections to the Standard Model, the mass scale of the new physics
must be much greater than MW and MZ, and the coupling of quarks and leptons to
the new particles must be suppressed relative to their coupling to the gauge bosons.
This is the case with technicolor, so long as the lightest technivector mesons, ρT and
aT, are heavier than 200–300 GeV. The S-parameter is sensitive to all new physics at
the TeV scale, while T is a measure of weak-isospin breaking effects. The U-
parameter is generally not useful; most new-physics theories, including technicolor
theories, give negligible contributions to it.

The S and T-parameters are determined by a global fit to experimental data including
Z-pole data from LEP at CERN, top quark and W-mass measurements at Fermilab,
and measured levels of atomic parity violation. The resultant bounds on these
parameters are given in the Review of Particle Properties.[41] Assuming U = 0, the S
and T parameters are small and, in fact, consistent with zero:

where the central value corresponds to a Higgs mass of 117 GeV and the correction
to the central value when the Higgs mass is increased to 300 GeV is given in
parentheses. These values place tight restrictions on beyond-standard-model
theories—when the relevant corrections can be reliably computed.

The S parameter estimated in QCD-like technicolor theories is significantly greater
than the experimentally-allowed value.[36][40] The computation was done assuming
that the spectral integral for S is dominated by the lightest ρT and aT resonances, or
by scaling effective Lagrangian parameters from QCD. In walking technicolor,
however, the physics at the TeV scale and beyond must be quite different from that
of QCD-like theories. In particular, the vector and axial-vector spectral functions
cannot be dominated by just the lowest-lying resonances.[42] It is unknown whether
higher-energy contributions to the spectral integral for S are a tower of identifiable ρT and aT states or a
smooth continuum. It has been conjectured that ρT and aT partners could be more
nearly degenerate in walking theories (approximate parity doubling), reducing their
contribution to S.[43] Lattice calculations are underway or planned to test these
ideas and obtain reliable estimates of S in walking theories.[2][44]

The restriction on the T-parameter poses a problem for the generation of the top-
quark mass in the ETC framework. The enhancement from walking can allow the
associated ETC scale to be as large as a few TeV,[21] but—since the ETC interactions
must be strongly weak-isospin breaking to allow for the large top-bottom mass
splitting—the contribution to the T parameter,[45] as well as the rate for the decay
Z → b̄b,[46] could be too large.
Hadron collider phenomenology

Early studies generally assumed the existence of just one electroweak doublet of
technifermions, or one techni-family including one doublet each of color-triplet
techniquarks and color-singlet technileptons.[47] In the minimal, one-doublet model,
three Goldstone bosons (technipions, πT) have decay constant F = FEW = 246 GeV
and are eaten by the electroweak gauge bosons. The most accessible collider signal
is the production, through quark-antiquark annihilation in a hadron collider, of spin-one
technivectors ρT, and their subsequent decay into a pair of longitudinally-polarized
weak bosons, W_L W_L or W_L Z_L. At an
expected mass of 1.5–2.0 TeV and width of 300–400 GeV, such ρT's would be difficult
to discover at the LHC. A one-family model has a large number of physical
technipions, with F = FEW/√4 = 123 GeV.[48] There is a collection of correspondingly
lower-mass color-singlet and octet technivectors decaying into technipion pairs. The
πT's are expected to decay to the heaviest possible quark and lepton pairs. Despite
their lower masses, the ρT's are wider than in the minimal model and the
backgrounds to the πT decays are likely to be insurmountable at a hadron collider.

This picture changed with the advent of walking technicolor. A walking gauge
coupling occurs if α_χSB lies just below the IR fixed-point value α_IR, which requires
either a large number of electroweak doublets in the fundamental representation of
the gauge group or a few doublets in higher-dimensional TC representations.
[22][49] In the latter case, the constraints on ETC representations generally imply
other technifermions in the fundamental representation as well.[9][20] In either case,
there are technipions πT with decay constant F smaller than F_EW = 246 GeV. This
implies a correspondingly lower technicolor mass scale, so that the lightest
technivectors accessible at the LHC—ρT, ωT, aT (with I^G J^PC = 1^+ 1^−−, 0^− 1^−−,
1^− 1^++)—have masses well below a TeV. The class of theories with many
technifermions, and thus a small technipion decay constant F, is called low-scale technicolor.[50]

A second consequence of walking technicolor concerns the decays of the spin-one
technihadrons. Since technipion masses are driven by the technifermion condensate
(see Eq. (4)), walking enhances them much more than it does other technihadron
masses. Thus, it is very likely that the lightest technivector satisfies M_ρT < 2M_πT
and that the two- and three-πT decay channels of the light technivectors are
closed.[22] This further implies that these technivectors are very narrow. Their
most probable two-body channels are W_L πT, W_L W_L, γ πT and γ W_L. The coupling
of the lightest technivectors to W_L is proportional to F/F_EW.[51] Thus, all their
decay rates are suppressed by powers of F/F_EW or the fine-structure constant,
giving total widths of a few GeV (for ρT) to a few tenths of a GeV (for ωT and aT).

A more speculative consequence of walking technicolor is motivated by consideration
of its contribution to the S-parameter. As noted above, the usual assumptions made
to estimate STC are invalid in a walking theory. In particular, the spectral integrals
used to evaluate STC cannot be dominated by just the lowest-lying ρT and aT and, if
STC is to be small, the masses and weak-current couplings of the ρT and aT could be
more nearly equal than they are in QCD.

Low-scale technicolor phenomenology, including the possibility of a more parity-
doubled spectrum, has been developed into a set of rules and decay amplitudes.[51]
An April 2011 announcement of an excess in jet pairs produced in association with a
W boson measured at the Tevatron[52] has been interpreted by Eichten, Lane and
Martin as a possible signal of the technipion of low-scale technicolor.[53]

The general scheme of low-scale technicolor makes little sense if the limit on the ρT
mass is pushed past about 700 GeV. The LHC should be able to discover it or rule it
out. Searches there involving decays to technipions and thence to heavy quark jets
are hampered by backgrounds from top-quark pair production; its rate is 100 times
larger than that at the Tevatron. Consequently, the discovery of low-scale technicolor
at the LHC relies on all-leptonic final-state channels with favorable signal-to-background
ratios.[54]
Dark matter

Technicolor theories naturally contain dark matter candidates. Almost certainly,
models can be built in which the lowest-lying technibaryon, a technicolor-singlet
bound state of technifermions, is stable enough to survive the evolution of the
universe.[41][55] If the technicolor theory is low-scale, the baryon's mass should
be no more than 1–2 TeV. If not, it could be much heavier. The technibaryon must be
electrically neutral and satisfy constraints on its abundance. Given the limits on spin-
independent dark-matter-nucleon cross sections from dark-matter search
experiments for the masses of interest,[56] it may have to be electroweak neutral
(weak isospin I = 0) as well. These considerations suggest that the "old" technicolor
dark matter candidates may be difficult to produce at the LHC.

A different class of technicolor dark matter candidates light enough to be accessible
at the LHC was introduced by Francesco Sannino and his collaborators.[57] These
states are pseudo Goldstone bosons possessing a global charge that makes them
stable against decay.

Topcolor
From Wikipedia, the free encyclopedia

In theoretical physics, Topcolor is a model of dynamical electroweak symmetry
breaking in which the top quark and anti-top quark form a top quark condensate and
act effectively like the Higgs boson. This is analogous to the phenomenon of
superconductivity.

Topcolor naturally involves an extension of the standard model color gauge group to
a product group SU(3)xSU(3)xSU(3)x... One of the gauge groups contains the top and
bottom quarks, and has a sufficiently large coupling constant to cause the
condensate to form. The topcolor model thus anticipates the idea of dimensional
deconstruction and extra space dimensions, as well as the large mass of the top
quark. Topcolor, and its prediction of "topgluons," will be tested in coming
experiments at the Large Hadron Collider at CERN.

Topcolor rescues the Technicolor model from some of its difficulties in a scheme
dubbed "Topcolor-assisted Technicolor."

In particle physics, the top quark condensate theory is an alternative to the Standard
Model in which a fundamental scalar Higgs field is replaced by a composite field
composed of the top quark and its antiquark. These are bound by a four-fermion
interaction, analogous to Cooper pairs in a BCS superconductor and nucleons in the
Nambu-Jona-Lasinio model. The top quark condenses because its measured mass is
approximately 173 GeV (comparable to the electroweak scale), and so its Yukawa
coupling is of order unity, yielding the possibility of strong coupling dynamics.
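As a quick check of that last statement, using the standard relation between the top
mass and its Yukawa coupling, m_t = y_t v/√2 with v ≈ 246 GeV:

    y_t = \frac{\sqrt{2}\, m_t}{v} \approx \frac{\sqrt{2} \times 173\ \text{GeV}}{246\ \text{GeV}} \approx 0.99,

which is indeed of order one.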

http://en.wikipedia.org/wiki/Color_confinement
Color confinement
From Wikipedia, the free encyclopedia

The color force favors confinement because at a certain range it is more energetically
favorable to create a quark-antiquark pair than to continue to elongate the color flux
tube. This is analogous to the behavior of an elongated rubber band.

Color confinement, often simply called confinement, is the physics phenomenon that
color charged particles (such as quarks) cannot be isolated singularly, and therefore
cannot be directly observed.[1] Quarks, by default, clump together to form groups, or
hadrons. The two types of hadrons are the mesons (one quark, one antiquark) and
the baryons (three quarks). The constituent quarks in a group cannot be separated
from their parent hadron, and this is why quarks can never be studied or observed in
any more direct way than at a hadron level.[2]

Origin

The reasons for quark confinement are somewhat complicated; no analytic proof
exists that quantum chromodynamics should be confining, but intuitively,
confinement is due to the force-carrying gluons having color charge. As any two
electrically-charged particles separate, the electric fields between them diminish
quickly, allowing (for example) electrons to become unbound from atomic nuclei.
However, as two quarks separate, the gluon fields form narrow tubes (or strings) of
color charge, which tend to bring the quarks together as though they were some kind
of rubber band. This is quite different in behavior from electrical charge. Because of
this behavior, the color force experienced by the quarks, acting in the direction that
holds them together, remains constant regardless of their distance from each other.[3][4]

The color force between quarks is large, even on a macroscopic scale, being on the
order of 100,000 newtons.[citation needed] As discussed above, it is constant, and
does not decrease with increasing distance after a certain point has been passed.

When two quarks become separated, as happens in particle accelerator collisions, at
some point it is more energetically favorable for a new quark–antiquark pair to
spontaneously appear, than to allow the tube to extend further. As a result of this,
when quarks are produced in particle accelerators, instead of seeing the individual
quarks in detectors, scientists see "jets" of many color-neutral particles (mesons and
baryons), clustered together. This process is called hadronization, fragmentation, or
string breaking, and is one of the least understood processes in particle physics.

The confining phase is usually defined by the behavior of the action of the Wilson
loop, which is simply the path in spacetime traced out by a quark–antiquark pair
created at one point and annihilated at another point. In a non-confining theory, the
action of such a loop is proportional to its perimeter. However, in a confining theory,
the action of the loop is instead proportional to its area. Since the area will be
proportional to the separation of the quark–antiquark pair, free quarks are
suppressed. Mesons are allowed in such a picture, since a loop containing another
loop in the opposite direction will have only a small area between the two loops.
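Stated more explicitly (a standard way of writing the two behaviors; the string
tension σ and the perimeter coefficient μ are introduced here only for illustration):

    \langle W(C) \rangle \sim e^{-\mu\, P(C)} \quad \text{(perimeter law, non-confining)},
    \langle W(C) \rangle \sim e^{-\sigma\, A(C)} \quad \text{(area law, confining)}.

For a rectangular R × T loop, the area law corresponds to a static quark–antiquark
potential V(R) ≈ σR that grows linearly with separation, which is why isolated quarks
are suppressed while color-neutral combinations are not.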
Models exhibiting confinement

Besides QCD in 4D, another model which exhibits confinement is the Schwinger
model.[citation needed] Compact Abelian gauge theories also exhibit confinement in
2 and 3 spacetime dimensions.[citation needed] Confinement has recently been
found in elementary excitations of magnetic systems called spinons.[5]
See also
Quantum chromodynamics
Asymptotic freedom
Deconfining phase
Quantum mechanics
Particle physics
Fundamental force
Dual superconducting model

http://en.wikipedia.org/wiki/Dual_superconducting_model

In the theory of quantum chromodynamics, dual superconductor models attempt to
explain confinement of quarks in terms of an electromagnetic dual theory of
superconductivity.

In an electromagnetic dual theory the roles of electric and magnetic fields are
interchanged. The BCS theory of superconductivity explains superconductivity as the
result of the condensation of electric charges into Cooper pairs. In a dual superconductor
an analogous effect occurs through the condensation of magnetic charges (also
called magnetic monopoles). In ordinary electromagnetic theory, no monopoles have
been shown to exist. However, in quantum chromodynamics — the theory of colour
charge which explains the strong interaction between quarks — the colour charges
can be viewed as (non-abelian) analogues of electric charges and corresponding
magnetic monopoles are known to exist. Dual superconductor models posit that
condensation of these magnetic monopoles in a superconductive state explains
colour confinement — the phenomenon that only neutrally coloured bound states are
observed at low energies.

Qualitatively, confinement in dual superconductor models can be understood as a
result of the dual to the Meissner effect. The Meissner effect says that a
superconducting metal will try to expel magnetic field lines from its interior. If a
magnetic field is forced to run through the superconductor, the field lines are
compressed in magnetic flux tubes. In a dual superconductor the roles of magnetic
and electric fields are exchanged and the Meissner effect tries to expel electric field
lines. Quarks and antiquarks carry opposite colour charges, and for a quark–antiquark
pair 'electric' field lines run from the quark to the antiquark. If the quark–antiquark
pair are immersed in a dual superconductor, then the electric field lines get
compressed to a flux tube. The energy associated with the tube is proportional to its
length, and the potential energy of the quark–antiquark pair is proportional to their
separation. A quark–antiquark pair will therefore always remain bound regardless of
its separation, which explains why no unbound quarks are ever found.[note 1]

Dual superconductors are described by (a dual to) the Landau–Ginzburg model, which
is equivalent to the Abelian Higgs model.

The dual superconductor model is motivated by several observations in calculations
using lattice gauge theory. The model, however, also has some shortcomings. In
particular, although it confines coloured quarks, it fails to confine the colour of some
gluons, allowing coloured bound states at energies observable in particle colliders.

http://en.wikipedia.org/wiki/Lattice_gauge_theory

In physics, lattice gauge theory is the study of gauge theories on a spacetime that
has been discretized into a lattice. Gauge theories are important in particle physics,
and include the prevailing theories of elementary particles: quantum
electrodynamics, quantum chromodynamics (QCD) and the Standard Model. Non-
perturbative gauge theory calculations in continuous spacetime formally involve
evaluating an infinite-dimensional path integral, which is computationally intractable.
By working on a discrete spacetime, the path integral becomes finite-dimensional,
and can be evaluated by stochastic simulation techniques such as the Monte Carlo
method. When the size of the lattice is taken infinitely large and its sites
infinitesimally close to each other, the continuum gauge theory is intuitively expected
to be recovered. A mathematical proof of this fact is lacking.

Basics

In lattice gauge theory, the spacetime is Wick rotated into Euclidean space and
discretized into a lattice with sites separated by distance a and connected by links. In
the most commonly-considered cases, such as lattice QCD, fermion fields are defined
at lattice sites (which leads to fermion doubling), while the gauge fields are defined
on the links. That is, an element U of the compact Lie group G is assigned to each
link. Hence to simulate QCD, with Lie group SU(3), there is a 3×3 special unitary matrix
defined on each link. The link is assigned an orientation, with the inverse element corresponding
to the same link with the opposite orientation.
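As a concrete, purely illustrative sketch of this bookkeeping, the following Python/NumPy
fragment assigns a roughly Haar-random SU(3) matrix to every link of a small 4^4 lattice
and evaluates the trace of one plaquette. The array layout and helper names are choices
made here for illustration, not part of any standard lattice code.

    import numpy as np

    rng = np.random.default_rng(0)
    L, DIM, N = 4, 4, 3     # lattice extent, number of spacetime dimensions, SU(3)

    def random_sun(n):
        """Roughly Haar-random SU(n): QR-decompose a complex Gaussian matrix,
        fix the phases, then rescale so that the determinant is exactly 1."""
        z = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
        q, r = np.linalg.qr(z)
        d = np.diagonal(r)
        q = q * (d / np.abs(d))                     # make the decomposition unique -> Haar-distributed U(n)
        return q / np.linalg.det(q) ** (1.0 / n)    # divide by an n-th root of det -> det = 1

    # one group element per (site, direction): array of shape (L, L, L, L, DIM, N, N)
    U = np.empty((L,) * DIM + (DIM, N, N), dtype=complex)
    for site in np.ndindex((L,) * DIM):
        for mu in range(DIM):
            U[site + (mu,)] = random_sun(N)

    def shift(site, mu):
        """Nearest neighbour of `site` in direction mu, with periodic boundaries."""
        return tuple((c + (1 if i == mu else 0)) % L for i, c in enumerate(site))

    def plaquette(site, mu, nu):
        """(1/N) Re Tr of the 1x1 Wilson loop in the (mu, nu) plane at `site`."""
        Up = (U[site + (mu,)] @ U[shift(site, mu) + (nu,)]
              @ U[shift(site, nu) + (mu,)].conj().T @ U[site + (nu,)].conj().T)
        return np.trace(Up).real / N

    print(plaquette((0, 0, 0, 0), 0, 1))    # a number of modulus <= 1 for this random ("hot") start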
Yang–Mills action

The Yang–Mills action is written on the lattice using Wilson loops (named after Kenneth G.
Wilson), so that the limit formally reproduces the original continuum action.[1] Given a faithful
irreducible representation ρ of G, the lattice Yang-Mills action is the sum over all lattice
sites of the (real component of the) trace over the n links e1, ..., en in the Wilson
loop,
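schematically (with an overall normalization that varies between references, and with the
sum running over the chosen set of Wilson loops):

    S \;=\; \sum_{\text{Wilson loops}} -\,\mathrm{Re}\,\chi_\rho\!\bigl(U(e_1)\,U(e_2)\cdots U(e_n)\bigr).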

Here, χ is the character. If ρ is a real (or pseudoreal) representation, taking the real
component is redundant, because even if the orientation of a Wilson loop is flipped,
its contribution to the action remains unchanged.

There are many possible lattice Yang-Mills actions, depending on which Wilson loops
are used in the action. The simplest "Wilson action" uses only the 1×1 Wilson loop, and
differs from the continuum action by "lattice artifacts" proportional to the small lattice spacing a.
By using more complicated Wilson loops to construct "improved actions", lattice artifacts can be
reduced to be proportional to a2, making computations more accurate.
Measurements

Quantities such as particle masses are stochastically calculated using techniques such as the
Monte Carlo method. Gauge field configurations are generated with probabilities proportional to
e^(−βS), where S is the lattice action and β is related to the lattice spacing a. The
quantity of interest is calculated for each configuration, and averaged. Calculations
are often repeated at different lattice spacings a so that the result can be
extrapolated to the continuum, a → 0.

Such calculations are often extremely computationally intensive, and can require the
use of the largest available supercomputers. To reduce the computational burden,
the so-called quenched approximation can be used, in which the fermionic fields are
treated as non-dynamic "frozen" variables. While this was common in early lattice
QCD calculations, "dynamical" fermions are now standard.[2] These simulations
typically utilize algorithms based upon molecular dynamics or microcanonical
ensemble algorithms.[3][4]
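As a toy illustration of this e^(−βS) sampling (not lattice QCD itself, and not any particular
production code), the following Python script runs a Metropolis simulation of two-dimensional
compact U(1) lattice gauge theory with the Wilson plaquette action S = β Σ_p (1 − cos θ_p)
and measures the average plaquette; all parameter values are arbitrary choices.

    import numpy as np

    rng = np.random.default_rng(1)
    L, beta, sweeps = 16, 1.0, 200
    theta = np.zeros((L, L, 2))   # link angles theta[x, y, mu]; mu = 0 is the x-link, mu = 1 the y-link

    def plaq(x, y):
        """Plaquette angle based at (x, y), with periodic boundaries."""
        xp, yp = (x + 1) % L, (y + 1) % L
        return theta[x, y, 0] + theta[xp, y, 1] - theta[x, yp, 0] - theta[x, y, 1]

    def local_action(x, y, mu):
        """beta * sum of (1 - cos) over the two plaquettes containing link (x, y, mu)."""
        if mu == 0:
            plaqs = (plaq(x, y), plaq(x, (y - 1) % L))
        else:
            plaqs = (plaq(x, y), plaq((x - 1) % L, y))
        return beta * sum(1.0 - np.cos(p) for p in plaqs)

    for _ in range(sweeps):
        for x in range(L):
            for y in range(L):
                for mu in range(2):
                    old = theta[x, y, mu]
                    s_old = local_action(x, y, mu)
                    theta[x, y, mu] = old + rng.uniform(-1.0, 1.0)   # propose a new link angle
                    if rng.random() >= np.exp(-(local_action(x, y, mu) - s_old)):
                        theta[x, y, mu] = old                        # Metropolis reject: restore old link

    # average plaquette measured on the final configuration only (a real study would average many)
    avg = np.mean([np.cos(plaq(x, y)) for x in range(L) for y in range(L)])
    print(f"<cos(theta_p)> at beta = {beta}: {avg:.3f}")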
Other applications

Originally, solvable two-dimensional lattice gauge theories had already been
introduced in 1971 as models with interesting statistical properties by the theorist
Franz Wegner, who worked in the field of phase transitions.[5]

Lattice gauge theory has been shown to be exactly dual to spin foam models
provided that only 1×1 Wilson loops appear in the action.
See also
Hamiltonian lattice gauge theory
Lattice field theory
Lattice QCD
Quantum triviality

http://en.wikipedia.org/wiki/Lattice_QCD

Lattice QCD is a well-established non-perturbative approach to solving the quantum
chromodynamics (QCD) theory of quarks and gluons. It is a lattice gauge theory formulated on a
grid or lattice of points in space and time.

Analytic or perturbative solutions in low-energy QCD are hard or impossible due to the highly
nonlinear nature of the strong force. This formulation of QCD in discrete rather than continuous
spacetime naturally introduces a momentum cutoff of order 1/a, where a is the lattice
spacing, which regularizes the theory. As a result, lattice QCD is mathematically well-defined.
Most importantly, lattice QCD provides a framework for investigation of non-perturbative
phenomena such as confinement and quark-gluon plasma formation, which are intractable by
means of analytic field theories.
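For a rough sense of scale (using the standard conversion ħc ≈ 0.197 GeV·fm), a lattice
spacing of a = 0.1 fm corresponds to a cutoff of order

    1/a \;\approx\; \frac{0.197\ \text{GeV}\cdot\text{fm}}{0.1\ \text{fm}} \;\approx\; 2\ \text{GeV}.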

In lattice QCD, fields representing quarks are defined at lattice sites (which leads to fermion
doubling), while the gluon fields are defined on the links connecting neighboring sites. This
approximation approaches continuum QCD as the spacing between lattice sites is reduced to
zero. Because the computational cost of numerical simulations can increase dramatically as the
lattice spacing decreases, results are often extrapolated to a = 0 by repeated calculations at
different lattice spacings a that are large enough to be tractable.
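A minimal sketch of such an extrapolation, assuming the observable approaches the continuum
with leading O(a^2) artifacts; the numbers below are invented purely for illustration and are
not actual lattice results.

    import numpy as np

    a   = np.array([0.12, 0.09, 0.06])      # hypothetical lattice spacings (fm)
    obs = np.array([0.712, 0.694, 0.681])   # hypothetical values of some observable at those spacings

    slope, intercept = np.polyfit(a ** 2, obs, 1)   # straight-line fit: obs ≈ intercept + slope * a^2
    print(f"continuum (a -> 0) estimate: {intercept:.3f}")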

Numerical lattice QCD calculations using Monte Carlo methods can be extremely computationally
intensive, requiring the use of the largest available supercomputers. To reduce the computational
burden, the so-called quenched approximation can be used, in which the quark fields are treated
as non-dynamic "frozen" variables. While this was common in early lattice QCD calculations,
"dynamical" fermions are now standard.[1] These simulations typically utilize algorithms based
upon molecular dynamics or microcanonical ensemble algorithms.[2][3]

At present, lattice QCD is primarily applicable at low densities where the numerical sign problem
does not interfere with calculations. Lattice QCD predicts that confined quarks will be
released into a quark-gluon plasma around a temperature of 170 MeV. Monte Carlo methods are free from
the sign problem when applied to the case of QCD with gauge group SU(2) (QC2D).

Lattice QCD has already made successful contact with many experiments. For example the mass
of the proton has been determined theoretically with an error of less than 2 percent.[4]

Lattice QCD has also been used as a benchmark for high-performance computing, an approach
originally developed in the context of the IBM Blue Gene supercomputer.

Techniques
Monte-Carlo simulations

A frame from a Monte-Carlo simulation illustrating the typical four-dimensional structure of gluon-
field configurations used in describing the vacuum properties of QCD.

Monte-Carlo is a method to pseudo-randomly sample a large space of variables. The importance
sampling technique used to select the gauge configurations in the Monte-Carlo simulation
imposes the use of Euclidean time, by a Wick rotation of space-time.

In lattice Monte-Carlo simulations the aim is to calculate correlation functions. This is done by
explicitly calculating the action, using field configurations which are chosen according to the
distribution function, which depends on the action and the fields. Usually one starts with the
gauge-boson part and the gauge-fermion interaction part of the action to calculate the gauge
configurations, and then uses the simulated gauge configurations to calculate hadronic
propagators and correlation functions.
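As a schematic example of that last step (with fabricated numbers standing in for a measured
correlator, purely for illustration): a Euclidean two-point function falling off as
C(t) ≈ A e^(−mt) yields the mass through an "effective mass" log-ratio.

    import numpy as np

    rng = np.random.default_rng(2)
    t = np.arange(16)
    # invented correlator with mass 0.45 in lattice units, plus a little multiplicative noise
    C = 3.2 * np.exp(-0.45 * t) * (1.0 + 0.02 * rng.normal(size=t.size))

    m_eff = np.log(C[:-1] / C[1:])          # effective mass m_eff(t) = ln[ C(t) / C(t+1) ]
    print("effective-mass plateau estimate:", m_eff[4:12].mean())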
Fermions on the lattice

Lattice QCD is a way to solve the theory exactly from first principles, without any assumptions, to
the desired precision. However, in practice the calculation power is limited, which requires a
smart use of the available resources. One needs to choose an action which gives the best
physical description of the system, with minimum errors, using the available computational power.
The limited computer resources force one to use physical constants which are different from their
true physical values:
The lattice discretization means a finite lattice spacing and size, which do not exist in the
continuous and infinite space-time. In addition to the automatic error introduced by this, the
limited resources force the use of smaller physical lattices and larger lattice spacings than
one would want in order to minimize errors.
Another unphysical quantity is the quark masses. The quark masses used in simulations are
steadily going down, but to date (2010) they are typically still too high with respect to their
real values.
In order to compensate for these errors, one improves the lattice action in various ways,
mainly to minimize finite-spacing errors.
Lattice perturbation theory

The lattice was initially introduced by Wilson as a framework for studying strongly coupled
theories, such as QCD, non-perturbatively. It was found to be a regularization also suitable for
perturbative calculations. Perturbation theory involves an expansion in the coupling constant, and
is well-justified in high-energy QCD where the coupling constant is small, while it fails completely
when the coupling is large and higher order corrections are larger than lower orders in the
perturbative series. In this region non-perturbative methods, such as Monte-Carlo sampling of the
correlation function, are necessary.

Lattice perturbation theory can also provide results for condensed matter theory. One can use the
lattice to represent the real atomic crystal. In this case the lattice spacing is a real physical value,
and not an artifact of the calculation which has to be removed, and a quantum field theory can be
formulated and solved on the physical lattice.
See also
Lattice field theory
Lattice gauge theory
QCD matter
QCD sum rules

http://en.wikipedia.org/wiki/Hamiltonian_lattice_gauge_theory

In physics, Hamiltonian lattice gauge theory is a calculational approach to gauge theory and a
special case of lattice gauge theory in which the space is discretized but time is not. The
Hamiltonian is then re-expressed as a function of degrees of freedom defined on a d-dimensional
lattice.

Following Wilson, the spatial components of the vector potential are replaced with Wilson lines
over the edges, but the time component is associated with the vertices. However, the temporal
gauge is often employed, setting the electric potential to zero. The eigenvalues of the Wilson line
operators U(e) (where e is the (oriented) edge in question) take on values on the Lie group G. It is
assumed that G is compact, otherwise we run into many problems. The conjugate operator to
U(e) is the electric field E(e), whose eigenvalues take on values in the Lie algebra of G. The
Hamiltonian receives contributions coming from the plaquettes (the magnetic contribution) and
contributions coming from the edges (the electric contribution).
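Schematically, in the Kogut-Susskind formulation (overall coefficients and factors of the
lattice spacing depend on conventions), the Hamiltonian has the form

    H \;\sim\; \frac{g^2}{2}\sum_{\text{edges } e} E(e)^2 \;-\; \frac{1}{2g^2}\sum_{\text{plaquettes } p}\left(\mathrm{Tr}\,U_p + \mathrm{Tr}\,U_p^{\dagger}\right),

with the first sum giving the electric contribution and the second the magnetic one.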

Hamiltonian lattice gauge theory is exactly dual to a theory of spin networks. This involves using
the Peter-Weyl theorem. In the spin network basis, the spin network states are eigenstates of the
operator Tr[E(e)^2].

http://en.wikipedia.org/wiki/Asymptotic_freedom

In physics, asymptotic freedom is a property of some gauge theories that causes interactions
between particles to become arbitrarily weak at energy scales that become arbitrarily large, or,
equivalently, at length scales that become arbitrarily small (at the shortest distances).

Asymptotic freedom is a feature of quantum chromodynamics (QCD), the quantum field theory of
the nuclear interaction between quarks and gluons, the fundamental constituents of nuclear
matter. Quarks interact weakly at high energies, allowing perturbative calculations by DGLAP of
cross sections in deep inelastic processes of particle physics; and strongly at low energies,
preventing the unbinding of baryons (like protons or neutrons, with three quarks) or mesons
(like pions, with a quark and an antiquark), the composite particles of nuclear matter.

Asymptotic freedom was discovered by Frank Wilczek, David Gross, and David Politzer who in
2004 shared the Nobel Prize in physics.

Discovery

Asymptotic freedom was discovered in 1973 by David Gross and Frank Wilczek, and by David
Politzer. Although these authors were the first to understand the physical relevance to the strong
interactions, in 1969 Iosif Khriplovich discovered asymptotic freedom in the SU(2) gauge theory
as a mathematical curiosity, and Gerardus 't Hooft in 1972 also noted the effect but did not
publish. For their discovery, Gross, Wilczek and Politzer were awarded the Nobel Prize in Physics
in 2004.

The discovery was instrumental in rehabilitating quantum field theory. Prior to 1973, many
theorists suspected that field theory was fundamentally inconsistent because the interactions
become infinitely strong at short distances. This phenomenon is usually called a Landau pole,
and it defines the smallest length scale that a theory can describe. This problem was discovered
in field theories of interacting scalars and spinors, including quantum electrodynamics, and
Lehmann positivity led many to suspect that it is unavoidable. Asymptotically free theories become
weak at short distances, there is no Landau pole, and these quantum field theories are believed
to be completely consistent down to any length scale.

While the Standard Model is not entirely asymptotically free, in practice the Landau pole can only
be a problem when thinking about the strong interactions. The other interactions are so weak that
any inconsistency can only arise at distances shorter than the Planck length, where a field theory
description is inadequate anyway.
Screening and antiscreening

Charge screening in QED

The variation in a physical coupling constant under changes of scale can be understood
qualitatively as coming from the action of the field on virtual particles carrying the relevant charge.
The Landau pole behavior of quantum electrodynamics (QED, related to quantum triviality) is a
consequence of screening by virtual charged particle-antiparticle pairs, such as electron-positron
pairs, in the vacuum. In the vicinity of a charge, the vacuum becomes polarized: virtual particles
of opposing charge are attracted to the charge, and virtual particles of like charge are repelled.
The net effect is to partially cancel out the field at any finite distance. Getting closer and closer to
the central charge, one sees less and less of the effect of the vacuum, and the effective charge
increases.

In QCD the same thing happens with virtual quark-antiquark pairs; they tend to screen the color
charge. However, QCD has an additional wrinkle: its force-carrying particles, the gluons,
themselves carry color charge, and in a different manner. Each gluon carries both a color charge
and an anti-color magnetic moment. The net effect of polarization of virtual gluons in the vacuum
is not to screen the field, but to augment it and affect its color. This is sometimes called
antiscreening. Getting closer to a quark diminishes the antiscreening effect of the surrounding
virtual gluons, so the contribution of this effect would be to weaken the effective charge with
decreasing distance.

Since the virtual quarks and the virtual gluons contribute opposite effects, which effect wins out
depends on the number of different kinds, or flavors, of quark. For standard QCD with three
colors, as long as there are no more than 16 flavors of quark (not counting the antiquarks
separately), antiscreening prevails and the theory is asymptotically free. In fact, there are only 6
known quark flavors.
Calculating asymptotic freedom

Asymptotic freedom can be derived by calculating the beta-function describing the variation of the
theory's coupling constant under the renormalization group. For sufficiently short distances or
large exchanges of momentum (which probe short-distance behavior, roughly because of the
inverse relation between a quantum's momentum and De Broglie wavelength), an asymptotically
free theory is amenable to perturbation theory calculations using Feynman diagrams. Such
situations are therefore more theoretically tractable than the long-distance, strong-coupling
behavior also often present in such theories, which is thought to produce confinement.

Calculating the beta-function is a matter of evaluating Feynman diagrams contributing to the
interaction of a quark emitting or absorbing a gluon. In non-abelian gauge theories such as QCD,
the existence of asymptotic freedom depends on the gauge group and number of flavors of
interacting particles. To lowest nontrivial order, the beta-function in an SU(N) gauge theory
with nf kinds of quark-like particle is

    \beta_1(\alpha) = -\,\frac{11N - 2 n_f}{6\pi}\,\alpha^2,

where α is the theory's equivalent of the fine-structure constant, g^2/(4π) in the units
favored by particle physicists. If this function is negative, the theory is asymptotically
free. For SU(3), the color charge gauge group of QCD, the theory is therefore
asymptotically free if there are 16 or fewer flavors of quarks.

For SU(3), N = 3, and β_1 < 0 gives n_f < 33/2, i.e., at most 16 quark flavors.
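A small numerical sketch of this statement (one-loop running only, with no threshold matching;
the function names and sample values are ad hoc choices for illustration):

    import math

    def beta1(N, nf):
        """One-loop coefficient: d(alpha)/d(ln mu) = beta1 * alpha**2."""
        return -(11 * N - 2 * nf) / (6 * math.pi)

    def run_alpha(alpha0, mu0, mu, N=3, nf=6):
        """One-loop solution: 1/alpha(mu) = 1/alpha(mu0) - beta1 * ln(mu/mu0)."""
        return 1.0 / (1.0 / alpha0 - beta1(N, nf) * math.log(mu / mu0))

    print(beta1(3, 6) < 0)     # True: SU(3) with 6 flavors is asymptotically free
    print(beta1(3, 17) < 0)    # False: 17 flavors would spoil asymptotic freedom
    print(run_alpha(0.118, 91.2, 1000.0))   # an alpha_s-like coupling, weaker at 1 TeV than at the Z mass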

http://en.wikipedia.org/wiki/Anomalous_scaling_dimension

In theoretical physics, by anomaly one usually means that the symmetry remains
broken when the symmetry-breaking factor goes to zero. When the symmetry which
is broken is scale invariance, then true power laws usually cannot be found from
dimensional reasoning like in turbulence or quantum field theory. In the latter, the
anomalous scaling dimension of an operator is the contribution of quantum
mechanics to the classical scaling dimension of that operator.

The classical scaling dimension of an operator O is determined by dimensional
analysis from the Lagrangian (in 4 spacetime dimensions this means dimension 1 for
elementary bosonic fields including the vector potentials, 3/2 for elementary
fermionic fields etc.). However if one computes the correlator of two operators of this
type, one often finds logarithmic divergences arising from one-loop Feynman
diagrams. The expansion in the coupling constant has the schematic form
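(up to operator- and convention-dependent coefficients; A here is the constant referred to below)

    \langle O(x)\, O(0)\rangle \;\propto\; |x|^{-2\Delta_0}\bigl(1 - 2 A g^2 \ln(\Lambda |x|) + \cdots\bigr) \;\approx\; |x|^{-2\Delta}, \qquad \Delta = \Delta_0 + g^2 A,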

where g is a coupling constant, Δ0 is the classical dimension, and Λ is an ultraviolet
cutoff (the maximal allowed energy in the loop integrals). A is a constant that
appears in the loop diagrams. The expression above may be viewed as a Taylor
expansion of the full quantum dimension.

The term g^2 A is the anomalous scaling dimension, while Δ is the full dimension.
Conformal field theories are typically strongly coupled and the full dimension cannot
be easily calculated by Taylor expansions. The full dimensions in this case are often
called critical exponents. These operators describe conformal bound states with a
continuous mass spectrum.

In particular, 2Δ = d − 2 + η for the critical exponent η for a scalar operator. We
have an anomalous scaling dimension when η ≠ 0.

An anomalous scaling dimension indicates a scale-dependent wavefunction renormalization.

Anomalous scaling appears also in classical physics.

http://www.scientificamerican.com/article.cfm?id=the-amazing-disappearing-neutrino

The Amazing Disappearing Antineutrino

A revised calculation suggests that around 3% of particles have gone missing from nuclear
reactor experiments.

April 1, 2011

By Eugenie Samuel Reich of Nature magazine

Neutrinos have long perplexed physicists with their uncanny ability to evade detection, with as
many as two-thirds of the ghostly particles apparently going missing en route from the Sun to
Earth. Now a refined version of an old calculation is causing a stir by suggesting that researchers
have also systematically underestimated the number of the particles' antimatter partners--
antineutrinos--produced by nuclear reactor experiments.

The deficit could be caused by the antineutrinos turning into so-called 'sterile antineutrinos', which
can't be directly detected, and which would be clear evidence for effects beyond the standard
model of particle physics.

In the 1960s, physicist Ray Davis, working deep underground in the Homestake gold mine in
South Dakota, found that the flux of solar neutrinos hitting Earth was a third of that predicted by
calculations of the nuclear reactions in the Sun by theorist John Bahcall. Davis later received a
Nobel prize for his contributions to neutrino astrophysics. That puzzle was considered solved in
2001, when the Sudbury Neutrino Observatory (SNO) in Canada found the missing two-thirds
through an alternative means of detection. The SNO's results were taken as evidence that
neutrinos have a mass, which allows them to oscillate between three flavors: electron, muon and
tau. Davis had only detected the electron neutrinos.

Experiments that measure the rate of antineutrino production from the decay of uranium and
plutonium isotopes have so far produced results roughly consistent with this theory. But the
revised calculation accepted this week by Physical Review D suggests that it's not the whole
story. While waiting for the Double Chooz neutrino experiment in France to become fully
operational, Thierry Lasserre and his colleagues at the French atomic energy commission (CEA)
in Saclay set out to check predictions of the rate of antineutrino production by nuclear reactors.
They repeated a calculation first done in the 1980s by Klaus Schreckenbach at the Technical
University of Munich, using more modern techniques that allowed them to be much more precise.
Their new estimate of the rate of production is around 3% more than previously predicted. This
means that several generations of neutrino and antineutrino experiments have unknowingly
missed a small fraction of the particles. "It was completely a surprise for us," says Lasserre.

Double Chooz consists of two detectors measuring the flux of antineutrinos produced by the
Chooz nuclear power plant in the French Ardennes, one detector about 400 meters away from
the plant and the other 1 kilometer away. The far detector became operational this year.

Stefan Schönert, a neutrino physicist at the Technical University of Munich, says the calculation is
solid, and has been checked with Schreckenbach. "They can reproduce each other's results.
There's no way around this result. It's very solid."

Art McDonald of Queen's University in Kingston, Canada and the SNO says that people have to
look carefully at the calculation, which may itself have a systematic error. But, he adds, "there's
no doubt it would have significance as a physics result if it can be shown with more accuracy."

The result may be pointing to evidence of neutrinos and antineutrinos oscillating into a fourth kind
of neutrino or antineutrino, a so-called 'sterile' version that doesn't interact with ordinary matter,
says Carlo Giunti, a physicist at the University of Turin in Italy. Other experiments have previously
seen evidence for sterile particles, including the Liquid Scintillator Neutrino Detector at Los
Alamos National Laboratory in New Mexico and the Mini Booster Neutrino Experiment, or
MiniBooNE, at Fermilab in Batavia, Illinois, and the search to confirm their existence is a hot area
of physics.

Giunti says that the magnitude of the anomaly uncovered by Lasserre is not statistically
significant on its own, but that it points promisingly in the same direction as another anomaly
found by the SAGE collaboration, which studied neutrinos from a radioactive source at the
Baksan Neutrino Observatory in the Caucasus in 2005. "Before this, there used to be a
contradiction between [reactor and radioactive source] experiments but now they are in
agreement," says Giunti.

Schönert says that one key experiment everyone is waiting for is a measurement showing that
the rate of disappearance of antineutrinos from a source increases with the distance from it. "This
would be the smoking gun," he says.

This article is reproduced with permission from the magazine Nature. The article was first
published on April 1, 2011.

http://arxiv.org/abs/astro-ph/0310571

A Map of the Universe


J. Richard Gott III, Mario Jurić, David Schlegel, Fiona Hoyle, Michael Vogeley, Max
Tegmark, Neta Bahcall, Jon Brinkmann
(Submitted on 20 Oct 2003 (v1), last revised 17 Oct 2005 (this version, v2))
We have produced a new conformal map of the universe illustrating recent
discoveries, ranging from Kuiper belt objects in the Solar system, to the galaxies and
quasars from the Sloan Digital Sky Survey. This map projection, based on the
logarithm map of the complex plane, preserves shapes locally, and yet is able to
display the entire range of astronomical scales from the Earth's neighborhood to the
cosmic microwave background. The conformal nature of the projection, preserving
shapes locally, may be of particular use for analyzing large scale structure. Prominent
in the map is a Sloan Great Wall of galaxies 1.37 billion light years long, 80% longer
than the Great Wall discovered by Geller and Huchra and therefore the largest
observed structure in the universe. Comments: Figure 8, and additional material
accessible on the web at: this http URL
Subjects: Astrophysics (astro-ph)
Journal reference: Astrophys.J.624:463,2005
DOI: 10.1086/428890
Cite as: arXiv:astro-ph/0310571v2

http://www.astro.pri...n.edu/universe/

Logarithmic Maps of the Universe

This website contains figures from "Map of the Universe" e-print, by Gott, Juric et al.
The paper has been published in the Astrophysical Journal (Gott et al., 2005, ApJ,
624, 463), and you can also find the manuscript here (note: Figure 8. of the
manuscript has been published as an inset poster, and has to be downloaded
separately (see below)).

The Great Walls -- Largest Structures in the Universe

“Just as a fish may be barely aware of the medium in which it lives and swims, so the
microstructure of empty space could be far too complex for unaided human brains."

Sir Martin Rees, Astronomer Royal, physicist, Cambridge University


Our known Hubble length universe contains hundreds of millions of galaxies that
have clumped together, forming super clusters and a series of massive walls of
galaxies separated by vast voids of empty space.

Great Wall: The largest structure ever observed is a collection of superclusters a billion light
years away, extending for 5% of the length of the entire observable universe. It is
theorized that structures such as the Great Wall form along and follow web-like
strings of dark matter that dictate the structure of the Universe on the grandest of
scales. Dark matter gravitationally attracts baryonic matter, and it is this normal
matter that astronomers see forming long, thin walls of super-galactic clusters.

If it took God one week to make the Earth, going by mass it would take him two
quintillion years to build this thing -- far longer than science says the universe has
existed, and it's kind of fun to have those two the other way around for a change.
Though He could always omnipotently cheat and say "Let there be a Sloan Great
Wall."

The Great Wall is a massive array of astronomical objects named after the survey
which revealed it, the Sloan Digital Sky Survey. An eight-year project scanned over a
quarter of the sky to generate full 3-D maps of almost a million galaxies. Analysis of
these images revealed a huge panel of galaxies 1.37 billion light years long, and even
the pedantic-sounding .07 is six hundred and sixty million trillion kilometers. This is
science precisely measuring made-up sounding numbers.
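As a quick check of that conversion: 0.07 billion light years is 7 × 10^7 light years, and one
light year is about 9.46 × 10^12 km, so

    7 \times 10^{7} \times 9.46 \times 10^{12}\ \text{km} \;\approx\; 6.6 \times 10^{20}\ \text{km},

that is, roughly six hundred and sixty million trillion kilometers.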

This isn't the only wall out there -- others exist, all with far greater lengths
than width or depth, actual sheets of galaxies forming some of the most impressive
anythings there are. And these walls are only a special class of galactic filaments,
long strings of matter stretched between mind-breaking expanses of emptiness.

Some of these elongated super clusters have formed a series of walls, one after
another, spaced from 500 million to 800 million light years apart, such that in one
direction alone, 13 Great Walls have formed with the inner and outer walls separated
by less than seven billion light years.

Recently, cosmologists have estimated that some of these galactic walls may have
taken 80 billion, 100 billion, or even 150 billion years to form, a direct challenge
to current estimates of the age of the Universe following the Big Bang.

The huge Sloan Great Wall spans over one billion light years. The Coma cluster
is one of the largest observed structures in the Universe, containing
over 10,000 galaxies and extending more than 1.37 billion light years in length.

Current theories of "dark energy" and "great attractors" have been developed to
explain why a created universe did not spread out uniformly at the same speed and
in the same spoke-like directions as predicted by theory. But as Sean Carroll of the
Moore Center for Theoretical Cosmology and Physics at Cal Tech is fond of saying,
"We don't have a clue."

Britain’s Astronomer Royal, Lord Rees, says some of the cosmos’s biggest mysteries,
like the Big Bang and even the nature of our own self awareness, might never be
resolved. Rees, who is also President of the Royal Society, says that a correct basic
theory of the universe might already exist, but may be just too tough for human beings’
brains to comprehend.
