
The relationship between gravity and time is quite interesting.

Besides both looping around and maintaining the cyclical and balanced nature of the universe and quadverse (and omniverse, for that matter), gravity and time are linked in deeper ways. Not only is gravity/dark energy responsible for the creation of the third dimension over entropy/time; the rate of time also responds to the presence of gravitation. During the 1D and 2D phases of the universe, when gravity waves do not exist, time proceeds very slowly (or maybe remains static) until the universe cools sufficiently to phase-transition to 3D. Then inflation occurs with the polarity flip at the central black hole / white hole, the dark energy to dark matter ratio increases, parallel timelines are created, and gravity emerges. In 3D and 4D, time proceeds much more rapidly, since gravity is now present (though separated among the various parallel timelines), and the universe expands ever more quickly in the dark energy era, and even faster with explosive expansion (inflation phase 2) during the phase change to 4D space. At this point, to an outside observer, the universe would finally become visible as a one-dimensional line expanding from the center towards the event horizon of the parent black hole from both ends.
Before this happened, it was only a point particle (although, since all times exist together, perhaps it will always appear as a line frozen in the imaginary time dimension of the omniverse: all the parallel timelines will be superimposed upon each other, in the fashion of quantum superposition, as will the other members of the quadverse be visible as one-dimensional lines). Besides the black hole and superverse properties mentioned in Origin 13, which impact the physical properties of the baby universe/quadverse, the actual size of the parent black hole, and thus the width of the Cauchy horizon, will also impact the length of each oscillation.

Time proceeds even more quickly when the universe rebounds off the Cauchy horizon of the parent black hole (the macro version of the strong force), as the gravitational waves become much more concentrated as they bounce off of it. Time proceeds rapidly as the universe starts to contract: the polarity at the central black hole / white hole flips, and the dark matter to dark energy ratio increases, until finally the phase transition back to 3D occurs as the universe heats up and keeps contracting with more dark matter. The timelines merge and gravity increases even more with deflation, until we reach the 1D/2D wall, where time slows to a crawl (or stops), gravity disappears, the universe big-bounces at 10 Planck lengths, and the cycle starts all over again.

In the mirrorverse this is synchronized, while it occurs in reverse in the antiverse and the antimirrorverse, where time itself is reversed. Note also that gravity/time/dark matter/dark energy is conserved in the quadverse (and the omniverse in general): an increase in rate or quantity in one component results in a decrease in one of the other components of the quadverse (actually it's 2 vs. 2), since, after all, the universe and mirrorverse experience time in reverse from the antiverse and the antimirrorverse.

The structure of this quadverse in the omniverse isn't actually four lines; it's a double double helix (thus my cosmic DNA reference earlier-- yet another example of fractality!). Gravity and EM create the twists and turns that produce this structure. Consider the four dimensions to each be base pairs of the cosmic DNA (for a total of 8D, 6+2), connected to each other through the central black hole / white hole, which keeps reversing polarity at different phases of the cycle. These wormhole connections are a cosmic fractal representation of the chemical bonds between the base pairs of the DNA double helix. We actually have two double helixes, with the universe and mirrorverse in sync, as are the antiverse and the antimirrorverse, all existing within the cosmically fractal 4+1 omniverse (this is exactly why our universe reaches a limit of 4+1 in its own dimensions before starting to contract).

Hey, that new science discovery might be technicolor!

I love technicolor, so needless to say I'm enthusiastic about this if it proves to be correct. I remember mentioning it way back in Origin 1 last year, as I hoped it would replace the Higgs.... hopefully, this is the first step towards supplanting the Higgs boson.

Well, chapter one was written about a year ago, and in it I mention a theory called technicolor, from which I theorized our dimensions and mass emerged... instead of the Higgs boson, which is what conventional physics has assumed. I just think technicolor makes much more sense and is a much more elegant theory, and it is a structure of reality based on color theory.

Well, basically I analogized the three primary colors to the three spatial dimensions, with time as the background.... and then you can also construct three negative spatial dimensions, which are represented by the complementary primary colors, plus a similar complementary time dimension.

Red, green and blue (RGB) are the additive primaries; the complementary subtractive primaries are cyan, magenta and yellow. Black and white represent time and complementary time. The complementary dimensions make up the antiverse and the antimirrorverse.
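
Just to make the color pairing concrete, here's a tiny Python sketch (my own toy illustration, not any physics model): in 8-bit RGB, a color's complement is simply white minus the color, which maps each additive primary to its subtractive partner, and black to white.

# Complementary colors in 8-bit RGB: complement = white - color.
# Toy illustration of the additive/subtractive pairing described above.

def complement(rgb):
    """Return the complementary color of an 8-bit RGB triple."""
    return tuple(255 - c for c in rgb)

primaries = {"red": (255, 0, 0), "green": (0, 255, 0), "blue": (0, 0, 255)}

for name, rgb in primaries.items():
    print(name, rgb, "->", complement(rgb))
# red (255, 0, 0) -> (0, 255, 255) i.e. cyan
# green (0, 255, 0) -> (255, 0, 255) i.e. magenta
# blue (0, 0, 255) -> (255, 255, 0) i.e. yellow
# and black (0, 0, 0) <-> white (255, 255, 255), the "time" pair above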

It just occurred to me how the three dimensions of space are so similar to the three primary colors, and how time could be similar to the background upon which they were built. Note that in QCD, the color-charge effect becomes nil outside of the particle..... thus, anyone in the superverse would not see our dimensions (they have their own dimensions that arise from their own cosmic color charge), but on the inside, we are subject to them and perceive them as dimensions.

Well, there are two possibilities.

One is just two dimensions, which would be a particle and its complementary; if you've seen a color wheel, you know it's the color opposite to it on the wheel. The other possibility is 3 particles, in which case you have the primary colors: each color represents one third of the charge of the particle, and the colors correspond to color charge. You have to imagine it as a rubber band.... within the rubber band they can move freely, but once they reach the edge and start trying to get out, the rubber band becomes tight and pushes them back in.
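
That rubber-band picture matches the standard cartoon of quark confinement: an attractive potential that keeps growing with separation, so the inward pull never dies off. Here's a quick Python sketch under that assumption (a Cornell-type potential V(r) = -a/r + k*r, with made-up illustrative constants, not fitted QCD values):

# Toy "rubber band" confinement: Cornell-type potential V(r) = -a/r + k*r.
# At large r the linear term dominates, so the inward force approaches the
# constant k instead of vanishing: the band "becomes tight and pushes back".

a = 0.3   # Coulomb-like strength (illustrative value only)
k = 1.0   # "string tension" (illustrative value only)

def potential(r):
    return -a / r + k * r

def inward_pull(r):
    # magnitude of the attractive force, -dV/dr = -(a/r**2 + k)
    return a / r**2 + k

for r in (0.1, 0.5, 1.0, 2.0, 5.0):
    print(f"r = {r:4.1f}   V = {potential(r):7.2f}   pull = {inward_pull(r):6.2f}")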

It is how I also picture the universe, with gravity taking the place of this force on a universal scale and the dimensions taking the place of color charge. Once the universe expands to the Cauchy horizon, it "bounces back." Notice the fractal representation of quantum lattice and spin networks-- this shows that cosmic DNA replicates itself in the baby universe, and thus they are made in the image of the omniverse itself.

This also works with Calabi-Yau manifolds, which are six-dimensional, as each manifold would be
constructed of the three additive primary spatial dimensions plus the three subtractive primary
spatial dimensions.

This is the first step towards a gravity-strong force unification, to match electroweak unification....
so instead of 4 forces, we'd have 2 x 2 (just like the quadverse arrangement..... more fractality!)

Universes with additional dimensions can be created in this framework by adding in resonances: higher- or lower-energy versions that exist on different energy levels, like the electron, muon and tauon, for example. Universes with the same dimensions, like parallel timeverses and mirrorverses, can also exist on different energy levels.

The renowned Kip Thorne has created a solution to the geometry of two colliding black holes which looks suspiciously like the Hopf fibration and E8, as well as the toroidal model of the universe. It is therefore theorized that baby universes can be created when two black holes collide and merge, resulting in a warping of space and time that creates the shape of the baby universe that lies within. The physical properties and laws of the universe are determined by the spin, charge, cosmic color charge (aka dimensions, which may also arise from infalling quark-gluon plasma and cosmic strings) and other properties of the parent black holes, as well as the results of the collision. Both parent black holes provide cosmic DNA, which goes into producing the baby universe. The fractal representation of this also resembles the structure of large galaxies; therefore it is also theorized that these baby universes are produced at the cores of these galaxies, where the supermassive black hole exists, during the quasar-active stage of its life cycle. As mentioned above, this also shows how quantum lattices and spin networks are replicated throughout the omniverse, as cosmic DNA and gravity mold not only the structures within universes, but the omniverse itself. As a matter of fact, the recently discovered magnetic monopoles represent these fractional cosmic color charges in the early universe.

The interesting thing about the quantum mirror analogy is that it reminds me of the "Funhouse Mirrors" analogy I used in describing parallel time universes-- basically, they are multiple images
of the same thing, distorted by various gravitational effects. But they are really reflections of the
same universe in superpositional states with itself.

BTW, if you'll take a look at this diagram again:

http://cdn.physorg.com/newman/gfx/news/hires/2011/physicistsdi.jpg

You'll notice a few things--

First of all, there are criss-crossing lines.... I believe these to be parallel timelines, and the reason they criss-cross is that it's the universe and mirrorverse entangled with each other, with each universe's timelines 90 degrees to the other's. They don't actually intersect, since the universe and the mirrorverse (and all other universes) exist on different energy levels, although these may be points where wormholes exist and facilitate matter and energy transfer across the quadverse, both in space and in time.

Secondly, you'll notice two large gaps: one between the Möbius BB in the middle and the first set of criss-crossing lines, and a second later on, after that first set. It's my belief that this diagram represents not just one moment of time, but ALL TIME(S).... that is, a hypothetical observer from outside our universe would "see" all time simultaneously (because all times exist together). If this is the case, the center Möbius represents the BB, and the first "gap" represents inflation, or 1D to 2D (which is what caused time to split into separate timelines). The second gap represents another phase transition, perhaps when the universe went from 2D to 3D; I would call it dark energy assisted inflation, or inflation phase 2. Forces selectively deunify at each phase transition, beginning with unification at each BB. Gravity does not exist in 2 dimensions, so you'll notice the timelines are symmetrical. In three dimensions, when gravity and dark energy do appear, you'll note that the timelines are emergent. The timelines are 1D strings separated by imaginary time, and since there was only one timeline per universe at the BB, there are four Möbius strips at the center. The diagram takes us through both expansion/deunification and contraction/reunification of each member of the quadverse (and inflation and deflation as timelines emerge and converge). The fifth force represents gravity in the baby universe, just like our gravity is the fifth force of the superverse. Also note that the Möbius structure in the middle actually consists of 4 strings: the universe, antiverse, mirrorverse and antimirrorverse. The universe and mirrorverse are entangled and BB together, as do the antiverse and antimirrorverse. These represent the 2x2 entangled strands of cosmic DNA-- the double double helix born from the two colliding black holes (which likely resulted in a huge ripple effect that not only generated the quadverse, but also a large GRB that fractally creates galaxies and superclusters in the superverse, which create other baby universes and more GRBs, and so on and so forth, fractally). The baby universe is itself a fractal representation of the two parent black holes colliding (note the inner Möbius structure).

Sort of like infinity, kind of like two Möbius strips interlocked

Yes, in the quadverse model and Brian Greene's dual CYM model, this would work as two universes entangled with each other (a universe and its mirror). It also looks suspiciously like a reshaped DNA double helix.

David, do you see how those two "knots" in the middle are entangled.... that could be a central supermassive black hole / white hole from which the universe(s) expanded. Perhaps they switch "polarity," and that determines whether expansion or contraction is taking place.

Maybe the multiverse seesaws back and forth, with all the universes expanding and contracting out of and into each other.

Yes, the seesawing effect keeps the net energy balance (the total energy of the whole system) the same. That knot in the center corresponds to the Möbius "twist."

Physicists discover new way to visualize warped space and time


April 11th, 2011 in Physics / General Physics


Two doughnut-shaped vortexes ejected by a pulsating black hole. Also shown at the center are
two red and two blue vortex lines attached to the hole, which will be ejected as a third doughnut-
shaped vortex in the next pulsation. Credit: The Caltech/Cornell SXS Collaboration

(PhysOrg.com) -- When black holes slam into each other, the surrounding space and time surge
and undulate like a heaving sea during a storm. This warping of space and time is so complicated
that physicists haven't been able to understand the details of what goes on -- until now.

"We've found ways to visualize warped space-time like never before," says Kip Thorne, Feynman
Professor of Theoretical Physics, Emeritus, at the California Institute of Technology (Caltech).

By combining theory with computer simulations, Thorne and his colleagues at Caltech, Cornell
University, and the National Institute for Theoretical Physics in South Africa have developed
conceptual tools they've dubbed tendex lines and vortex lines.

Using these tools, they have discovered that black-hole collisions can produce vortex lines that
form a doughnut-shaped pattern, flying away from the merged black hole like smoke rings. The
researchers also found that these bundles of vortex lines—called vortexes—can spiral out of the
black hole like water from a rotating sprinkler.

The researchers explain tendex and vortex lines—and their implications for black holes—in a
paper that's published online on April 11 in the journal Physical Review Letters.

These are two spiral-shaped vortexes (yellow) of whirling space sticking out of a black hole, and
the vortex lines (red curves) that form the vortexes. Credit: The Caltech/Cornell SXS
Collaboration

Tendex and vortex lines describe the gravitational forces caused by warped space-time. They are
analogous to the electric and magnetic field lines that describe electric and magnetic forces.

Tendex lines describe the stretching force that warped space-time exerts on everything it
encounters. "Tendex lines sticking out of the moon raise the tides on the earth's oceans," says
David Nichols, the Caltech graduate student who coined the term "tendex." The stretching force
of these lines would rip apart an astronaut who falls into a black hole.

Vortex lines, on the other hand, describe the twisting of space. If an astronaut's body is aligned
with a vortex line, she gets wrung like a wet towel.

When many tendex lines are bunched together, they create a region of strong stretching called a
tendex. Similarly, a bundle of vortex lines creates a whirling region of space called a vortex.
"Anything that falls into a vortex gets spun around and around," says Dr. Robert Owen of Cornell
University, the lead author of the paper.
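
Since tendex lines are essentially field lines of the tidal (stretching) part of gravity, a Newtonian analogue is easy to compute: the tidal tensor of a point mass, whose eigenvectors give the local stretch and squeeze directions. Here is a short Python sketch of that analogue (a Newtonian stand-in for the relativistic tendex construction, not the team's actual code; Earth values are just an example):

import numpy as np

# Newtonian tidal tensor of a point mass M at the origin, evaluated at x:
#   E_ij = (G*M / r^3) * (delta_ij - 3 * x_i * x_j / r^2)
# Its eigenvectors point along the local stretch/squeeze axes, the
# Newtonian counterpart of the paper's tendex lines.

G, M = 6.674e-11, 5.972e24            # SI units; Earth's mass as an example

def tidal_tensor(x):
    x = np.asarray(x, dtype=float)
    r = np.linalg.norm(x)
    return G * M / r**3 * (np.eye(3) - 3 * np.outer(x, x) / r**2)

E = tidal_tensor([7.0e6, 0.0, 0.0])   # a point about 7000 km from center
eigenvalues, eigenvectors = np.linalg.eigh(E)
print(eigenvalues)
# one negative eigenvalue (radial stretching) and two positive ones
# (transverse squeezing): the familiar pattern of the tides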

Tendex and vortex lines provide a powerful new way to understand black holes, gravity, and the
nature of the universe. "Using these tools, we can now make much better sense of the
tremendous amount of data that's produced in our computer simulations," says Dr. Mark Scheel,
a senior researcher at Caltech and leader of the team's simulation work.

Using computer simulations, the researchers have discovered that two spinning black holes
crashing into each other produce several vortexes and several tendexes. If the collision is head-
on, the merged hole ejects vortexes as doughnut-shaped regions of whirling space, and it ejects
tendexes as doughnut-shaped regions of stretching. But if the black holes spiral in toward each
other before merging, their vortexes and tendexes spiral out of the merged hole. In either case—
doughnut or spiral—the outward-moving vortexes and tendexes become gravitational waves—the
kinds of waves that the Caltech-led Laser Interferometer Gravitational-Wave Observatory (LIGO)
seeks to detect.

"With these tendexes and vortexes, we may be able to much more easily predict the waveforms
of the gravitational waves that LIGO is searching for," says Yanbei Chen, associate professor of
physics at Caltech and the leader of the team's theoretical efforts.

Additionally, tendexes and vortexes have allowed the researchers to solve the mystery behind the
gravitational kick of a merged black hole at the center of a galaxy. In 2007, a team at the
University of Texas in Brownsville, led by Professor Manuela Campanelli, used computer
simulations to discover that colliding black holes can produce a directed burst of gravitational
waves that causes the merged black hole to recoil—like a rifle firing a bullet. The recoil is so
strong that it can throw the merged hole out of its galaxy. But nobody understood how this
directed burst of gravitational waves is produced.

Now, equipped with their new tools, Thorne's team has found the answer. On one side of the
black hole, the gravitational waves from the spiraling vortexes add together with the waves from
the spiraling tendexes. On the other side, the vortex and tendex waves cancel each other out.
The result is a burst of waves in one direction, causing the merged hole to recoil.

"Though we've developed these tools for black-hole collisions, they can be applied wherever
space-time is warped," says Dr. Geoffrey Lovelace, a member of the team from Cornell. "For
instance, I expect that people will apply vortex and tendex lines to cosmology, to black holes
ripping stars apart, and to the singularities that live inside black holes. They'll become standard
tools throughout general relativity."

The team is already preparing multiple follow-up papers with new results. "I've never before
coauthored a paper where essentially everything is new," says Thorne, who has authored
hundreds of articles. "But that's the case here."

More information: Physical Review Letters paper: "Frame-dragging vortexes and tidal tendexes
attached to colliding black holes: Visualizing the curvature of spacetime"

Provided by California Institute of Technology

"Physicists discover new way to visualize warped space and time." April 11th, 2011.
http://www.physorg.com/news/2011-04-physicists-visualize-warped-space.html

Atom and its quantum mirror image


April 5, 2011 By Florian Aigner


Towards the mirror or away from the mirror? Physicists create atoms in quantum superposition
states.

A team of physicists experimentally produces quantum-superpositions, simply using a mirror.


Standing in front of a mirror, we can easily tell apart ourselves from our mirror image. The mirror
does not affect our motion in any way. For quantum particles, this is much more complicated. In a
spectacular experiment in the labs of Heidelberg University, a group of physicists from
Heidelberg, together with colleagues at TU Munich and TU Vienna, extended a
gedanken experiment by Einstein and managed to blur the distinction between a particle and its
mirror image. The results of this experiment have now been published in the journal Nature
Physics.

Emitted Light, Recoiling Atom

When an atom emits light (i.e. a photon) into a particular direction, it recoils in the opposite direction. If the photon is measured, the motion of the atom is known too. The scientists placed atoms very close to a mirror. In this case, there are two possible paths for any photon travelling to the observer: it could have been emitted directly into the direction of the observer, or it could have travelled into the opposite direction and then been reflected in the mirror. If there is no way of distinguishing between these two scenarios, the motion of the atom is not determined; the atom moves in a superposition of both paths.
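
To get a feel for the size of this recoil, here is a back-of-envelope Python calculation (my own illustration; the article does not give the atom species or wavelength, so rubidium-87 emitting at 780 nm is an assumption):

# Photon recoil: an atom emitting a photon of wavelength lam picks up
# momentum p = h/lam in the opposite direction, hence velocity p/m.
# Species and wavelength below are assumed for illustration only.

h = 6.626e-34        # Planck constant, J*s
lam = 780e-9         # photon wavelength, m (assumed: Rb D2 line)
m = 87 * 1.66e-27    # atomic mass, kg (assumed: rubidium-87)

p = h / lam          # photon momentum = recoil momentum of the atom
v = p / m            # recoil velocity of the atom

print(f"recoil momentum: {p:.2e} kg m/s")     # ~8.5e-28 kg m/s
print(f"recoil velocity: {v * 1e3:.1f} mm/s") # ~5.9 mm/s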

“If the distance between the atom and the mirror is very small, it is physically impossible to
distinguish between these two paths,” Jiri Tomkovic, PhD student at Heidelberg explains. The
particle and its mirror image cannot be clearly separated any more. The atom moves towards the
mirror and away from the mirror at the same time. This may sound paradoxical, and it is certainly impossible in classical physics for macroscopic objects, but in quantum physics such superpositions are a well-known phenomenon. “This uncertainty about the state of the atom does not mean that the measurement lacks precision”, Jörg Schmiedmayer (TU Vienna) emphasizes. “It is a fundamental property of quantum physics: the particle is in both of the two possible states simultaneously, it is in a superposition.” In the experiment, the two motional states of the atom – one moving towards the mirror and the other moving away from the mirror – are then combined using Bragg diffraction from a grating made of laser light. By observing interference, it can be directly shown that the atom has indeed been traveling both paths at once.

On Different Paths at the Same Time

This is reminiscent of the famous double-slit experiment, in which a particle hits a plate with two
slits and passes through both slits simultaneously, due to its wave-like quantum mechanical
properties. Einstein already discussed that this can only be possible if there is no way to determine which path the particle actually chose, not even by precise measurements of any tiny recoil of the double-slit plate itself. As soon as there is even a theoretically possible way of determining the path of the particle, the quantum superposition breaks down. “In our case, the
photons play a role similar to the double slit”, Markus Oberthaler (Heidelberg University) explains.
“If the light can, in principle, tell us about the motion of the atom, then the motion is
unambiguously determined. Only when it is fundamentally undecidable, the atom can be in a
superposition state, combining both possibilities.” And this fundamental undecidability is
guaranteed by the mirror which takes up the photon momentum.

Quantum Effect – Using Only a Mirror

Probing under which conditions such quantum-superpositions can be created has become very
important in quantum physics. Jörg Schmiedmayer and Markus Oberthaler came up with the idea
for this experiment a few years ago. “The fascinating thing about this experiment”, the
scientists say, “is the possibility of creating a quantum superposition state, using only a mirror,
without any external fields.” In a very simple and natural way the distinction between the particle
and its mirror image becomes blurred, without complicated operations carried out by the
experimenter.

Provided by Vienna University of Technology

http://www.physorg.com/news/2011-04-atom-quantum-mirror-image.html

http://www.newscient...true&print=true



Mystery signal at Fermilab hints at 'technicolour' force

19:46 07 April 2011 by Amanda Gefter



Hints of new physics at the Tevatron (Image: Fermilab)

The physics world is buzzing with news of an unexpected sighting at Fermilab's Tevatron collider
in Illinois – a glimpse of an unidentified particle that, should it prove to be real, will radically alter
physicists' prevailing ideas about how nature works and how particles get their mass.

The candidate particle may not belong to the standard model of particle physics, physicists' best
theory for how particles and forces interact. Instead, some say it might be the first hint of a new
force of nature, called technicolour, which would resolve some problems with the standard model
but would leave others unanswered.

The observation was made by Fermilab's CDF experiment, which smashes together protons and
antiprotons 2 million times every second. The data, collected over a span of eight years, looks at
collisions that produce a W boson, the carrier of the weak nuclear force, and a pair of jets of
subatomic particles called quarks.

Physicists predicted that the number of these events – producing a W boson and a pair of jets –
would fall off as the mass of the jet pair increased. But the CDF data showed something strange
(see graph): a bump in the number of events when the mass of the jet pair was about 145 GeV.

Just a fluke?

That suggests that the additional jet pairs were produced by a new particle weighing about 145
GeV. "We expected to see a smooth shape that decreases for increasing values of the mass,"
says CDF team member Pierluigi Catastini of Harvard University in Cambridge, Massachusetts.
"Instead we observe an excess of events concentrated in one region, and it seems to be a bump
– the typical signature of a particle."
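
For context, the "mass of the jet pair" is the invariant mass built from the two jets' energy-momentum four-vectors. A minimal Python sketch of that computation (with made-up four-vectors, not CDF data):

import math

# Dijet invariant mass from two four-vectors (E, px, py, pz) in GeV
# (natural units, c = 1): m^2 = (E1 + E2)^2 - |p1 + p2|^2.

def invariant_mass(j1, j2):
    E = j1[0] + j2[0]
    px, py, pz = (j1[i] + j2[i] for i in (1, 2, 3))
    return math.sqrt(E**2 - (px**2 + py**2 + pz**2))

jet1 = (80.0, 60.0, 30.0, 40.0)    # hypothetical jet four-vector, GeV
jet2 = (90.0, -50.0, -20.0, 55.0)  # hypothetical jet four-vector, GeV

print(f"dijet invariant mass: {invariant_mass(jet1, jet2):.1f} GeV")
# ~140 GeV for these made-up values, i.e. in the region of the bump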

Intriguing as it sounds, there is a 1 in 1000 chance that the bump is simply a statistical fluke.
Those odds make it a so-called three-sigma result, falling short of the gold standard for a
discovery – five sigma, or a 1 in a million chance of error. "I've seen three-sigma effects come
and go," says Kenneth Lane of Boston University in Massachusetts. Still, physicists are 99.9 per
cent sure it is not a fluke, so they are understandably anxious to pin down the particle's identity.
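
The sigma-to-odds conversions quoted here are just one-sided Gaussian tail probabilities, which are easy to check (a sketch using scipy):

from scipy.stats import norm

# One-sided Gaussian tail probability for a given significance in sigma:
# 3 sigma is roughly the 1-in-1000 chance quoted above, while 5 sigma,
# the discovery threshold, is below 1 in a million.

for sigma in (3, 5):
    p = norm.sf(sigma)   # survival function = 1 - CDF
    print(f"{sigma} sigma: p = {p:.2e} (about 1 in {1 / p:,.0f})")
# 3 sigma: p ~ 1.3e-03 (about 1 in 740)
# 5 sigma: p ~ 2.9e-07 (about 1 in 3.5 million)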

Most agree that the mysterious particle is not the long-sought Higgs boson, believed by many to
endow particles with mass. "It's definitely not a Higgs-like object," says Rob Roser, a CDF
spokesperson at Fermilab. If it were, the bump in the data would be 300 times smaller. What's
more, a Higgs particle should most often decay into bottom quarks, which do not seem to make
an appearance in the Fermilab data.

Fifth force

"There's no version of a Higgs in any model that I know of where the production rate would be
this large," says Lane. "It has to be something else." And Lane is confident that he knows exactly
what it is.

Just over 20 years ago, Lane, along with Fermilab physicist Estia Eichten, predicted that
experiments would see just such a signal. Lane and Eichten were working on a theory known as
technicolour, which proposes the existence of a fifth fundamental force in addition to the four
already known: gravity, electromagnetism, and the strong and weak nuclear forces. Technicolour
is very similar to the strong force, which binds quarks together in the nuclei of atoms, only it
operates at much higher energies. It is also able to give particles their mass – rendering the
Higgs boson unnecessary.

The new force comes with a zoo of new particles. Lane and Eichten's model predicted that a
technicolour particle called a technirho would often decay into a W boson and another particle
called a technipion.

In a new paper, Lane, Eichten and Fermilab physicist Adam Martin suggest that a technipion with
a mass of about 160 GeV could be the mysterious particle producing the two jets. "If this is real, I
think people will give up on the idea of looking for the Higgs and begin exploring this rich world of
new particles," Lane says.
Future tests

But if technicolour is correct, it would not be able to resolve all the questions left unanswered by
the standard model. For example, physicists believe that at the high energies found in the early
universe, the fundamental forces of nature were unified into a single superforce. Supersymmetry,
physicists' leading contender for a theory beyond the standard model, paves a way for the forces
to unite at high energies, but technicolour does not.

Figuring out which theory – if either – is right means combing through more heaps of data to
determine if the new signal is real. Budget constraints mean the Tevatron will shut down this year,
but fortunately the CDF team, which made the find, is already "sitting on almost twice the data
that went into this analysis", says Roser. "Over the coming months we will redo the analysis with
double the data."

Meanwhile, DZero, Fermilab's other detector, will analyse its own data to provide independent
corroboration or refutation of the bump. And at CERN's Large Hadron Collider near Geneva,
Switzerland, physicists will soon collect enough data to perform their own search. In their paper,
Lane and his colleagues suggest ways to look for other techniparticles.

"I haven't been sleeping very well for the past six months," says Lane, who found out about the
bump long before the team went public with the result. "If this is what we think it is, it's a whole
new world beyond quarks and leptons. It'll be great! And if it's not, it's not."

Journal reference: arxiv.org/abs/1104.0699

Invariant Mass Distribution of Jet Pairs Produced in Association with a W boson in ppbar
Collisions at sqrt(s) = 1.96 TeV
CDF Collaboration, T. Aaltonen, et al
(Submitted on 4 Apr 2011)
We report a study of the invariant mass distribution of jet pairs produced in association with a W
boson using data collected with the CDF detector which correspond to an integrated luminosity of
4.3 fb^-1. The observed distribution has an excess in the 120-160 GeV/c^2 mass range which is
not described by current theoretical predictions within the statistical and systematic uncertainties.
In this letter we report studies of the properties of this excess.

Comments: 8 pages, 2 figures
Subjects: High Energy Physics - Experiment (hep-ex)
Report number: FERMILAB-PUB-11-164-E
Cite as: arXiv:1104.0699v1 [hep-ex]

Submission history
From: Alberto Annovi [view email]
[v1] Mon, 4 Apr 2011 22:08:31 GMT (119kb,D)

http://en.wikipedia....icolor_(physics)

Technicolor theories are models of physics beyond the standard model that address electroweak
symmetry breaking, the mechanism through which elementary particles acquire masses. Early
technicolor theories were modelled on quantum chromodynamics (QCD), the "color" theory of the
strong nuclear force, which inspired their name.

Instead of introducing elementary Higgs bosons, technicolor models hide electroweak symmetry
and generate masses for the W and Z bosons through the dynamics of new gauge interactions.
Although asymptotically free at very high energies, these interactions must become strong and
confining (and hence unobservable) at lower energies that have been experimentally probed.
This dynamical approach is natural and avoids the hierarchy problem of the Standard Model.[1]

In order to produce quark and lepton masses, technicolor has to be "extended" by additional
gauge interactions. Particularly when modelled on QCD, extended technicolor is challenged by
experimental constraints on flavor-changing neutral currents and precision electroweak
measurements. The dynamics of extended technicolor is not yet known.

Much technicolor research focuses on exploring strongly-interacting gauge theories other than
QCD, in order to evade some of these challenges. A particularly active framework is "walking"
technicolor, which exhibits nearly-conformal behavior caused by an infrared fixed point with
strength just above that necessary for spontaneous chiral symmetry breaking. Whether walking
can occur and lead to agreement with precision electroweak measurements is being studied
through non-perturbative lattice simulations.[2]

Experiments at the Large Hadron Collider are expected to discover the mechanism responsible
for electroweak symmetry breaking, and will be critical for determining whether the technicolor
framework provides the correct description of nature.


Introduction

The mechanism for the breaking of electroweak gauge symmetry in the Standard Model of
elementary particle interactions remains unknown. The breaking must be spontaneous, meaning
that the underlying theory manifests the symmetry exactly (the gauge-boson fields are massless
in the equations of motion), but the solutions (the ground state and the excited states) do not. In
particular, the physical W and Z gauge bosons become massive. This phenomenon, in which the
W and Z bosons also acquire an extra polarization state, is called the "Higgs mechanism".
Despite the precise agreement of the electroweak theory with experiment at energies accessible
so far, the necessary ingredients for the symmetry breaking remain hidden, yet to be revealed at
higher energies.

The simplest mechanism of electroweak symmetry breaking introduces a single complex field and
predicts the existence of the Higgs boson. Typically, the Higgs boson is "unnatural" in the sense
that quantum mechanical fluctuations produce corrections to its mass that lift it to such high
values that it cannot play the role for which it was introduced. Unless the Standard Model breaks
down at energies less than a few TeV, the Higgs mass can be kept small only by a delicate fine-
tuning of parameters.

Technicolor avoids this problem by hypothesizing a new gauge interaction coupled to new
massless fermions. This interaction is asymptotically free at very high energies and becomes
strong and confining as the energy decreases to the electroweak scale of roughly 250 GeV.
These strong forces spontaneously break the massless fermions' chiral symmetries, some of
which are weakly gauged as part of the Standard Model. This is the dynamical version of the
Higgs mechanism. The electroweak gauge symmetry is thus broken, producing masses for the W
and Z bosons.

The new strong interaction leads to a host of new composite, short-lived particles at energies
accessible at the Large Hadron Collider (LHC). This framework is natural because there are no
elementary Higgs bosons and, hence, no fine-tuning of parameters. Quark and lepton masses
also break the electroweak gauge symmetries, so they, too, must arise spontaneously. A
mechanism for incorporating this feature is known as extended technicolor. Technicolor and
extended technicolor face a number of phenomenological challenges. Some of them can be
addressed within a class of theories known as walking technicolor.
Early technicolor

Technicolor is the name given to the theory of electroweak symmetry breaking by new strong gauge interactions whose characteristic energy scale ΛTC is the weak scale itself, ΛTC ≅ FEW ≡ 246 GeV. The guiding principle of technicolor is "naturalness": basic physical phenomena should not require fine-tuning of the parameters in the Lagrangian that describes them. What constitutes fine-tuning is to some extent a subjective matter, but a theory with elementary scalar particles typically is very finely tuned (unless it is supersymmetric). The quadratic divergence in the scalar's mass requires adjustments of a part in (Mbare/Mphysical)², where Mbare is the cutoff of the theory, the energy scale at which the theory changes in some essential way. In the standard electroweak model with Mbare ∼ 10¹⁵ GeV (the grand-unification mass scale) and the Higgs boson mass Mphysical = 100–500 GeV, the mass is tuned to at least a part in 10²⁵.
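
The size of that tuning follows directly from the ratio of scales; as a quick numerical check (an illustration added here, not part of the referenced literature):

# Fine-tuning estimate quoted above: quadratic corrections to the scalar
# mass are of order Mbare^2, so keeping Mphysical light requires a
# cancellation of one part in (Mbare / Mphysical)^2.

M_bare = 1e15        # GeV, grand-unification cutoff
M_physical = 300.0   # GeV, a Higgs mass in the 100-500 GeV range

tuning = (M_bare / M_physical) ** 2
print(f"required tuning: one part in {tuning:.0e}")   # ~1e25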

By contrast, a natural theory of electroweak symmetry breaking is an asymptotically free gauge theory with fermions as the only matter fields. The technicolor gauge group GTC is often assumed to be SU(NTC). Based on analogy with quantum chromodynamics (QCD), it is assumed that there are one or more doublets of massless Dirac "technifermions" transforming vectorially under the same complex representation of GTC, TiL,R = (Ui, Di)L,R, i = 1, 2, …, Nf/2. Thus, there is a chiral symmetry of these fermions, e.g., SU(Nf)L ⊗ SU(Nf)R, if they all transform according to the same complex representation of GTC. Continuing the analogy with QCD, the running gauge coupling αTC(μ) triggers spontaneous chiral symmetry breaking, the technifermions acquire a dynamical mass, and a number of massless Goldstone bosons result. If the technifermions transform under [SU(2) ⊗ U(1)]EW as left-handed doublets and right-handed singlets, three linear combinations of these Goldstone bosons couple to three of the electroweak gauge currents.

In 1973 Jackiw and Johnson[3] and Cornwall and Norton[4] studied the possibility that a (non-vectorial) gauge interaction of fermions can break itself; i.e., is strong enough to form a Goldstone boson coupled to the gauge current. Using Abelian gauge models, they showed that, if such a Goldstone boson is formed, it is "eaten" by the Higgs mechanism, becoming the longitudinal component of the now massive gauge boson. Technically, the polarization function Π(p²) appearing in the gauge boson propagator, Δμν = (pμpν/p² − gμν)/[p²(1 − g²Π(p²))], develops a pole at p² = 0 with residue F², the square of the Goldstone boson's decay constant, and the gauge boson acquires mass M ≅ gF. In 1973, Weinstein[5] showed that composite Goldstone bosons whose constituent fermions transform in the “standard” way under SU(2) ⊗ U(1) generate the weak boson masses

MW = g FEW/2, MZ = √(g² + g′²) FEW/2 = MW/cosθW.   (1)

This standard-model relation is achieved with elementary Higgs bosons in electroweak doublets; it is verified experimentally to better than 1%. Here, g and g′ are the SU(2) and U(1) gauge couplings and tanθW = g′/g defines the weak mixing angle.
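
The better-than-1% agreement is easy to verify with rounded measured values (a sketch; the small residual reflects radiative corrections and the scheme chosen for θW):

import math

# Check of Eq. (1): M_W = M_Z * cos(theta_W), with rounded inputs
# M_Z ~ 91.19 GeV and sin^2(theta_W) ~ 0.231 (MS-bar value at M_Z).

M_Z = 91.19
sin2_thetaW = 0.231

M_W_predicted = M_Z * math.sqrt(1 - sin2_thetaW)
print(f"predicted M_W = {M_W_predicted:.1f} GeV (measured: ~80.4 GeV)")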

The important idea of a new strong gauge interaction of massless fermions at the
electroweak scale FEW driving the spontaneous breakdown of its global chiral
symmetry, of which an SU(2) ⊗ U(1) subgroup is weakly gauged, was first proposed
in 1979 by S. Weinberg[6] and L. Susskind.[7] This "technicolor" mechanism is
natural in that no fine-tuning of parameters is necessary.
Extended technicolor

Elementary Higgs bosons perform another important task. In the Standard Model,
quarks and leptons are necessarily massless because they transform under SU(2) ⊗
U(1) as left-handed doublets and right-handed singlets. The Higgs doublet couples to
these fermions. When it develops its vacuum expectation value, it transmits this
electroweak breaking to the quarks and leptons, giving them their observed masses.
(In general, electroweak-eigenstate fermions are not mass eigenstates, so this
process also induces the mixing matrices observed in charged-current weak
interactions.)

In technicolor, something else must generate the quark and lepton masses. The only natural possibility, one avoiding the introduction of elementary scalars, is to enlarge GTC to allow technifermions to couple to quarks and leptons. This coupling is induced by gauge bosons of the enlarged group. The picture, then, is that there is a large "extended technicolor" (ETC) gauge group GETC ⊃ GTC in which technifermions, quarks, and leptons live in the same representations. At one or more high scales ΛETC, GETC is broken down to GTC, and quarks and leptons emerge as the TC-singlet fermions. When αTC(μ) becomes strong at scale ΛTC ≅ FEW, the fermionic condensate ⟨T̄T⟩ ≅ 4πFEW³ forms. (The condensate is the vacuum expectation value of the technifermion bilinear T̄T; the estimate here is based on naive dimensional analysis of the quark condensate in QCD, expected to be correct as an order of magnitude.) Then, the transitions can proceed through the technifermion's dynamical mass by the emission and reabsorption of ETC bosons whose masses METC ≅ gETC ΛETC are much greater than ΛTC. The quarks and leptons develop masses given approximately by

mq ≅ mℓ ≅ (gETC²/METC²) ⟨T̄T⟩ETC ≅ 4πFEW³/ΛETC².   (2)

Here, ⟨T̄T⟩ETC is the technifermion condensate renormalized at the ETC boson mass scale,

⟨T̄T⟩ETC = exp(∫ΛTC→METC (dμ/μ) γm(μ)) ⟨T̄T⟩TC,   (3)

where γm(μ) is the anomalous dimension of the technifermion bilinear at the scale μ. The second estimate in Eq. (2) depends on the assumption that, as happens in QCD, αTC(μ) becomes weak not far above ΛTC, so that the anomalous dimension γm of T̄T is small there. Extended technicolor was introduced in 1979 by Dimopoulos and Susskind,[8] and by Eichten and Lane.[9] For a quark of mass mq ≅ 1 GeV, and with ΛTC ≅ 250 GeV, one estimates ΛETC ≅ 15 TeV. Therefore, assuming gETC is of order one, METC will be at least this large.
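
The 15 TeV figure can be reproduced by solving Eq. (2) for ΛETC under the stated assumptions (a numerical illustration added here, not part of the referenced literature):

import math

# Order-of-magnitude check of Eq. (2): m_q ~ g_ETC^2 <TT>_ETC / M_ETC^2,
# with M_ETC ~ g_ETC * Lambda_ETC and <TT> ~ 4*pi*F_EW^3 (naive
# dimensional analysis), so Lambda_ETC ~ sqrt(4*pi*F_EW^3 / m_q).

F_EW = 250.0   # GeV (Lambda_TC ~ F_EW)
m_q = 1.0      # GeV, a ~1 GeV quark

Lambda_ETC = math.sqrt(4 * math.pi * F_EW**3 / m_q)
print(f"Lambda_ETC ~ {Lambda_ETC / 1e3:.0f} TeV")   # ~14 TeV, i.e. ~15 TeV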

In addition to the ETC proposal for quark and lepton masses, Eichten and Lane observed that the size of the ETC representations required to generate all quark and lepton masses suggests that there will be more than one electroweak doublet of technifermions.[9] If so, there will be more (spontaneously broken) chiral symmetries and therefore more Goldstone bosons than are eaten by the Higgs mechanism. These must acquire mass by virtue of the fact that the extra chiral symmetries are also explicitly broken, by the standard-model interactions and the ETC interactions. These "pseudo-Goldstone bosons" are called technipions, πT. An application of Dashen's theorem[10] gives for the ETC contribution to their mass

MπT² ≅ (gETC²/METC²) ⟨T̄T T̄T⟩ETC/FEW² ≅ (4πFEW²/ΛETC)².   (4)

The second approximation in Eq. (4) assumes that ⟨T̄T T̄T⟩ETC ≅ ⟨T̄T⟩²ETC. For FEW ≅ ΛTC ≅ 250 GeV and ΛETC ≅ 15 TeV, this contribution to MπT is about 50 GeV. Since ETC interactions generate the quark and lepton masses as well as the coupling of technipions to quark and lepton pairs, one expects the couplings to be Higgs-like; i.e., roughly proportional to the masses of the quarks and leptons. This means that technipions are expected to decay to the heaviest quark and lepton pairs allowed.
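
The 50 GeV estimate follows from the second form of Eq. (4) (again, a numerical illustration added here):

import math

# ETC contribution to the technipion mass from Eq. (4):
# M_piT ~ 4*pi*F_EW^2 / Lambda_ETC, with the same inputs as the text.

F_EW = 250.0        # GeV
Lambda_ETC = 15e3   # GeV

M_piT = 4 * math.pi * F_EW**2 / Lambda_ETC
print(f"M_piT ~ {M_piT:.0f} GeV")   # ~52 GeV, i.e. "about 50 GeV"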

Perhaps the most important restriction on the ETC framework for quark mass generation is that ETC interactions are likely to induce flavor-changing neutral current processes such as μ → eγ, KL → μe, and |ΔS| = 2 and |ΔB| = 2 interactions that induce K⁰–K̄⁰ and B⁰–B̄⁰ mixing.[9] The reason is that the algebra of the ETC currents involved in mass generation implies ETC currents which, when written in terms of fermion mass eigenstates, have no reason to conserve flavor. The strongest constraint comes from requiring that ETC interactions mediating K⁰–K̄⁰ mixing contribute less than the Standard Model. This implies an effective ΛETC greater than 1000 TeV. The actual ΛETC may be reduced somewhat if CKM-like mixing angle factors are present. If these interactions are CP-violating, as they well may be, the constraint from the ε-parameter is that the effective ΛETC > 10⁴ TeV. Such huge ETC mass scales imply tiny quark and lepton masses and ETC contributions to MπT of at most a few GeV, in conflict with LEP searches for πT at the Z⁰.

Extended technicolor is a very ambitious proposal, requiring that quark and lepton masses and mixing angles arise from experimentally accessible interactions. If there exists a successful model, it would not only predict the masses and mixings of quarks and leptons (and technipions), it would explain why there are three families of each: they are the ones that fit into the ETC representations of q, ℓ, and T. It should not be surprising that the construction of a successful model has proven to be very difficult.
Walking technicolor

Since quark and lepton masses are proportional to the bilinear technifermion condensate divided by the ETC mass scale squared, their tiny values can be avoided if the condensate is enhanced above the weak-αTC estimate in Eq. (2).

During the 1980s, several dynamical mechanisms were advanced to do this. In 1981 Holdom suggested that, if αTC(μ) evolves to a nontrivial fixed point in the ultraviolet, with a large positive anomalous dimension γm for T̄T, realistic quark and lepton masses could arise with ΛETC large enough to suppress ETC-induced mixing.[11] However, no example of a nontrivial ultraviolet fixed point in a four-dimensional gauge theory has been constructed. In 1985 Holdom analyzed a technicolor theory in which a "slowly varying" αTC(μ) was envisioned.[12] His focus was to separate the chiral breaking and confinement scales, but he also noted that such a theory could enhance the condensate and thus allow the ETC scale to be raised. In 1986 Akiba and Yanagida also considered enhancing quark and lepton masses, by simply assuming that αTC is constant and strong all the way up to the ETC scale.[13] In the same year Yamawaki, Bando and Matumoto again imagined an ultraviolet fixed point in a non-asymptotically free theory to enhance the technifermion condensate.[14]

In 1986 Appelquist, Karabali and Wijewardhana discussed the enhancement of fermion masses in an asymptotically free technicolor theory with a slowly running, or "walking", gauge coupling.[15] The slowness arose from the screening effect of a large number of technifermions, with the analysis carried out through two-loop perturbation theory. In 1987 Appelquist and Wijewardhana explored this walking scenario further.[16] They took the analysis to three loops, noted that the walking can lead to a power-law enhancement of the technifermion condensate, and estimated the resultant quark, lepton, and technipion masses. The condensate enhancement arises because the associated technifermion mass decreases slowly, roughly linearly, as a function of its renormalization scale. This corresponds to the condensate anomalous dimension γm in Eq. (3) approaching unity (see below).[17]

In the 1990s, the idea emerged more clearly that walking is naturally described by asymptotically free gauge theories dominated in the infrared by an approximate fixed point. Unlike the speculative proposal of ultraviolet fixed points, fixed points in the infrared are known to exist in asymptotically free theories, arising at two loops in the beta function provided that the fermion count Nf is large enough. This has been known since the first two-loop computation in 1974 by Caswell.[18] If Nf is close to the value at which asymptotic freedom is lost, the resultant infrared fixed point is parametrically weak and reliably accessible in perturbation theory. This weak-coupling limit was explored by Banks and Zaks in 1982.[19]
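
The two-loop fixed point is concrete enough to compute; here is a short Python sketch for SU(3) with Nf fundamental flavors (standard two-loop beta-function coefficients; a pedagogical illustration, not taken from the studies cited here):

import math

# Two-loop (Banks-Zaks) infrared fixed point for SU(3) with Nf fundamental
# flavors: beta(a) = -(b0 / 2pi) a^2 - (b1 / 8pi^2) a^3 vanishes at
# a* = -4*pi*b0 / b1, which exists when b0 > 0 and b1 < 0.

def fixed_point(Nf):
    b0 = 11 - 2 * Nf / 3
    b1 = 102 - 38 * Nf / 3
    if b0 <= 0 or b1 >= 0:
        return None               # no perturbative IR fixed point
    return -4 * math.pi * b0 / b1

for Nf in (8, 10, 12, 14, 16):
    a = fixed_point(Nf)
    print(f"Nf = {Nf:2d}: alpha* = {a:.3f}" if a else f"Nf = {Nf:2d}: none")
# alpha* weakens as Nf approaches 16.5, where asymptotic freedom is lost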

The fixed-point coupling αIR becomes stronger as Nf is reduced from the value at which asymptotic freedom is lost. Below some critical value Nfc the coupling becomes strong enough (> αχSB) to break spontaneously the massless technifermions' chiral symmetry. Since the analysis must typically go beyond two-loop perturbation theory, the definition of the running coupling αTC(μ), its fixed-point value αIR, and the strength αχSB necessary for chiral symmetry breaking depend on the particular renormalization scheme adopted. For Nf just below Nfc, the evolution of αTC(μ) is governed by the infrared fixed point, and it will evolve slowly (walk) for a range of momenta above the breaking scale ΛTC. To overcome the M²ETC suppression of the masses of first and second generation quarks involved in mixing, this range must extend almost to their ETC scale, of order 1000 TeV. Cohen and Georgi argued that γm = 1 is the signal of spontaneous chiral symmetry breaking, i.e., that γm(αχSB) = 1.[17] Therefore, in the walking-αTC region, γm ≅ 1 and, from Eqs. (2) and (3), the light quark masses are enhanced approximately by METC/ΛTC.

The idea that αTC(μ) walks for a large range of momenta when αIR lies just above αχSB was suggested by Lane and Ramana.[20] They made an explicit model, discussed the walking that ensued, and used it in their discussion of walking technicolor phenomenology at hadron colliders. This idea was developed in some detail by Appelquist, Terning and Wijewardhana.[21] Combining a perturbative computation of the infrared fixed point with an approximation of αχSB based on the Schwinger-Dyson equation, they estimated the critical value Nfc and explored the resultant electroweak physics. Since the 1990s, most discussions of walking technicolor have been in the framework of theories assumed to be dominated in the infrared by an approximate fixed point. Various models have been explored, some with the technifermions in the fundamental representation of the gauge group and some employing higher representations.[22][23][24]

The possibility that the technicolor condensate can be enhanced beyond the level discussed in the walking literature has also been considered recently by Luty and Okui under the name "conformal technicolor".[25] They envision an infrared-stable fixed point, but with a very large anomalous dimension for the operator T̄T. It remains to be seen whether this can be realized, for example, in the class of theories currently being examined using lattice techniques.
Top quark mass

The walking enhancement described above may be insufficient to generate the measured top quark mass, even for an ETC scale as low as a few TeV. However, this problem could be addressed if the effective four-technifermion coupling resulting from ETC gauge boson exchange is strong and tuned just above a critical value.[26] The analysis of this strong-ETC possibility is that of a Nambu–Jona–Lasinio model with an additional (technicolor) gauge interaction. The technifermion masses are small compared to the ETC scale (the cutoff on the effective theory), but nearly constant out to this scale, leading to a large top quark mass. No fully realistic ETC theory for all quark masses has yet been developed incorporating these ideas. A related study was carried out by Miransky and Yamawaki.[27] A problem with this approach is that it involves some degree of parameter fine-tuning, in conflict with technicolor's guiding principle of naturalness.

Finally, it should be noted that there is a large body of closely related work in which
ETC does not generate mt. These are the top quark condensate,[28] topcolor and
top-color-assisted technicolor models,[29] in which new strong interactions are
ascribed to the top quark and other third-generation fermions. As with the strong-ETC
scenario described above, all these proposals involve a considerable degree of fine-
tuning of gauge couplings.
Minimal Walking Models

In 2004 Francesco Sannino and Kimmo Tuominen proposed technicolor models with
technifermions in higher-dimensional representations of the technicolor gauge group.
[23] They argued that these more "minimal" models required fewer flavors of
technifermions in order to exhibit walking behavior, making it easier to pass precision
electroweak tests.

For example, SU(2) and SU(3) gauge theories may exhibit walking with as few as two
Dirac flavors of fermions in the adjoint or two-index symmetric representation. In
contrast, at least eight flavors of fermions in the fundamental representation of SU(3)
(and possibly SU(2) as well) are required to reach the near-conformal regime.[24]

These results continue to be investigated by various methods, including lattice simulations discussed below, which have confirmed the near-conformal dynamics of these minimal walking models. The first comprehensive effective Lagrangian for minimal walking models, featuring a light composite Higgs, spin-one states, tree-level unitarity, and consistency with phenomenological constraints, was constructed in 2007 by Foadi, Frandsen, Ryttov and Sannino.[30]
Technicolor on the lattice

Lattice gauge theory is a non-perturbative method applicable to strongly interacting technicolor theories, allowing first-principles exploration of walking and conformal dynamics. In 2007, Catterall and Sannino used lattice gauge theory to study SU(2) gauge theories with two flavors of Dirac fermions in the symmetric representation,[31] finding evidence of conformality that has been confirmed by subsequent studies.[32]

As of 2010, the situation for SU(3) gauge theory with fermions in the fundamental
representation is not as clear-cut. In 2007, Appelquist, Fleming and Neil reported
evidence that a non-trivial infrared fixed point develops in such theories when there
are twelve flavors, but not when there are eight.[33] While some subsequent studies
confirmed these results, others reported different conclusions, depending on the
lattice methods used, and there is not yet consensus.[34]

Further lattice studies exploring these issues, as well as considering the consequences of these theories for precision electroweak measurements, are underway by several research groups.[35]
Technicolor phenomenology

Any framework for physics beyond the Standard Model must conform with precision measurements of the electroweak parameters. Its consequences for physics at existing and future high-energy hadron colliders, and for the dark matter of the universe, must also be explored.
Precision electroweak tests

In 1990, the phenomenological parameters S, T, and U were introduced by Peskin and Takeuchi to quantify contributions to electroweak radiative corrections from physics beyond the Standard Model.[36] They have a simple relation to the parameters of the electroweak chiral Lagrangian.[37][38] The Peskin-Takeuchi analysis was based on the general formalism for weak radiative corrections developed by Kennedy, Lynn, Peskin and Stuart,[39] and alternate formulations also exist.[40]

The S, T, and U parameters describe corrections to the electroweak gauge boson propagators from physics beyond the Standard Model. They can be written in terms of polarization functions of electroweak currents and their spectral representations, with only new, beyond-standard-model physics included. The quantities are calculated relative to a minimal Standard Model with some chosen reference mass of the Higgs boson, taken to range from the experimental lower bound of 117 GeV to 1000 GeV, where its width becomes very large.[41] For these parameters to describe
the dominant corrections to the Standard Model, the mass scale of the new physics
must be much greater than MW and MZ, and the coupling of quarks and leptons to
the new particles must be suppressed relative to their coupling to the gauge bosons.
This is the case with technicolor, so long as the lightest technivector mesons, ρT and
aT, are heavier than 200–300 GeV. The S-parameter is sensitive to all new physics at
the TeV scale, while T is a measure of weak-isospin breaking effects. The U-
parameter is generally not useful; most new-physics theories, including technicolor
theories, give negligible contributions to it.
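
The polarization-function definitions elided above are the standard Peskin-Takeuchi forms; for reference, in LaTeX (standard conventions, supplied here from the general literature rather than from this article):

% Standard Peskin-Takeuchi definitions (new-physics contributions only),
% written in terms of vacuum-polarization functions \Pi_{XY}(q^2);
% primes denote d/dq^2, and the subscripts label the weak-isospin and
% electromagnetic currents.
\begin{align}
  S &= 16\pi\left[\Pi'_{33}(0) - \Pi'_{3Q}(0)\right],\\
  T &= \frac{4\pi}{\sin^2\theta_W\cos^2\theta_W\,M_Z^2}
       \left[\Pi_{11}(0) - \Pi_{33}(0)\right],\\
  U &= 16\pi\left[\Pi'_{11}(0) - \Pi'_{33}(0)\right].
\end{align}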

The S and T parameters are determined by a global fit to experimental data, including Z-pole data from LEP at CERN, top quark and W-mass measurements at Fermilab, and measured levels of atomic parity violation. The resultant bounds on these parameters are given in the Review of Particle Properties.[41] Assuming U = 0, the S and T parameters are small and, in fact, consistent with zero (the quoted central values correspond to a Higgs mass of 117 GeV, with the correction to the central values when the Higgs mass is increased to 300 GeV given in parentheses). These values place tight restrictions on beyond-standard-model theories, when the relevant corrections can be reliably computed.

The S parameter estimated in QCD-like technicolor theories is significantly greater than the experimentally allowed value.[36][40] The computation was done assuming that the spectral integral for S is dominated by the lightest ρT and aT resonances, or by scaling effective-Lagrangian parameters from QCD. In walking technicolor, however, the physics at the TeV scale and beyond must be quite different from that of QCD-like theories. In particular, the vector and axial-vector spectral functions cannot be dominated by just the lowest-lying resonances.[42] It is unknown whether higher-energy contributions to these spectral integrals are a tower of identifiable ρT and aT states or a smooth continuum. It has been conjectured that ρT and aT partners could be more nearly degenerate in walking theories (approximate parity doubling), reducing their contribution to S.[43] Lattice calculations are underway or planned to test these ideas and obtain reliable estimates of S in walking theories.[2][44]

The restriction on the T parameter poses a problem for the generation of the top-quark mass in the ETC framework. The enhancement from walking can allow the associated ETC scale to be as large as a few TeV,[21] but, since the ETC interactions must be strongly weak-isospin breaking to allow for the large top-bottom mass splitting, the contribution to the T parameter,[45] as well as the rate for the decay Z → b̄b,[46] could be too large.
Hadron collider phenomenology

Early studies generally assumed the existence of just one electroweak doublet of
technifermions, or one techni-family including one doublet each of color-triplet
techniquarks and color-singlet technileptons.[47] In the minimal, one-doublet model,
three Goldstone bosons (technipions, πT) have decay constant F = FEW = 246 GeV
and are eaten by the electroweak gauge bosons. The most accessible collider signal is the production, through quark-antiquark annihilation in a hadron collider, of spin-one technivectors ρT, and their subsequent decay into a pair of longitudinally polarized weak bosons, W_L^± Z_L^0 and W_L^+ W_L^−. At an
expected mass of 1.5–2.0 TeV and width of 300–400 GeV, such ρT's would be difficult
to discover at the LHC. A one-family model has a large number of physical
technipions, with F = FEW/√4 = 123 GeV.[48] There is a collection of correspondingly
lower-mass color-singlet and octet technivectors decaying into technipion pairs. The
πT's are expected to decay to the heaviest possible quark and lepton pairs. Despite
their lower masses, the ρT's are wider than in the minimal model and the
backgrounds to the πT decays are likely to be insurmountable at a hadron collider.

This picture changed with the advent of walking technicolor. A walking gauge coupling occurs if α_χSB lies just below the IR fixed-point value α_IR, which requires either a large number of electroweak doublets in the fundamental representation of the gauge group or a few doublets in higher-dimensional TC representations.[22][49] In the latter case, the constraints on ETC representations generally imply other technifermions in the fundamental representation as well.[9][20] In either case, there are technipions πT with decay constant F well below F_EW = 246 GeV, so that the lightest technivectors accessible at the LHC (ρT, ωT, and aT, with I^G J^PC = 1+ 1−−, 0− 1−−, and 1− 1++) have masses well below a TeV. The class of theories with many technifermions, and thus a small technipion decay constant F, is called low-scale technicolor.[50]
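A minimal sketch of the decay-constant scaling just described, assuming F_EW is shared equally among N technifermion doublets so that F = F_EW/√N (N = 4 reproduces the one-family 123 GeV figure quoted above):

import math

F_EW = 246.0  # GeV

def technipion_decay_constant(n_doublets):
    """F = F_EW / sqrt(N) when N doublets share the electroweak condensate."""
    return F_EW / math.sqrt(n_doublets)

for n in (1, 4, 10):
    print(f"N = {n:2d} doublets -> F = {technipion_decay_constant(n):6.1f} GeV")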

A second consequence of walking technicolor concerns the decays of the spin-one technihadrons. Since technipion masses are driven by the enhanced technifermion condensate (see Eq. (4)), walking raises them much more than it does other technihadron masses. Thus, it is very likely that the lightest technivector satisfies MρT < 2MπT and that the two- and three-πT decay channels of the light technivectors are closed.[22] This further implies that these technivectors are very narrow. Their most probable two-body channels are W_L πT, W_L W_L, γ πT and γ W_L. The coupling of the lightest technivectors to W_L is proportional to F/F_EW.[51] Thus, all their decay rates are suppressed by powers of F/F_EW or the fine-structure constant, giving total widths of a few GeV (for ρT) to a few tenths of a GeV (for ωT and aT).

A more speculative consequence of walking technicolor is motivated by consideration of its contribution to the S-parameter. As noted above, the usual assumptions made to estimate S_TC are invalid in a walking theory. In particular, the spectral integrals used to evaluate S_TC cannot be dominated by just the lowest-lying ρT and aT and, if S_TC is to be small, the masses and weak-current couplings of the ρT and aT could be more nearly equal than they are in QCD.

Low-scale technicolor phenomenology, including the possibility of a more parity-doubled spectrum, has been developed into a set of rules and decay amplitudes.[51]
An April 2011 announcement of an excess in jet pairs produced in association with a
W boson measured at the Tevatron[52] has been interpreted by Eichten, Lane and
Martin as a possible signal of the technipion of low-scale technicolor.[53]

The general scheme of low-scale technicolor makes little sense if the limit on MρT is pushed past about 700 GeV. The LHC should be able to discover it or rule it out. Searches there involving decays to technipions and thence to heavy quark jets are hampered by backgrounds from tt̄ production, whose rate is 100 times larger than that at the Tevatron. Consequently, the discovery of low-scale technicolor at the LHC relies on all-leptonic final-state channels with favorable signal-to-background ratios.[54]
Dark matter

Technicolor theories naturally contain dark matter candidates. Almost certainly, models can be built in which the lowest-lying technibaryon, a technicolor-singlet bound state of technifermions, is stable enough to survive the evolution of the universe.[41][55] If the technicolor theory is low-scale (F ≪ F_EW), the baryon's mass should be no more than 1–2 TeV. If not, it could be much heavier. The technibaryon must be electrically neutral and satisfy constraints on its abundance. Given the limits on spin-independent dark-matter-nucleon cross sections from dark-matter search experiments for the masses of interest,[56] it may have to be electroweak neutral (weak isospin I = 0) as well. These considerations suggest that the "old" technicolor dark matter candidates may be difficult to produce at the LHC.

A different class of technicolor dark matter candidates, light enough to be accessible at the LHC, was introduced by Francesco Sannino and his collaborators.[57] These
states are pseudo Goldstone bosons possessing a global charge that makes them
stable against decay.

Topcolor
From Wikipedia, the free encyclopedia

In theoretical physics, Topcolor is a model of dynamical electroweak symmetry breaking in which the top quark and anti-top quark form a top quark condensate and act effectively like the Higgs boson. This is analogous to the phenomenon of superconductivity.

Topcolor naturally involves an extension of the standard model color gauge group to
a product group SU(3)xSU(3)xSU(3)x... One of the gauge groups contains the top and
bottom quarks, and has a sufficiently large coupling constant to cause the
condensate to form. The topcolor model thus anticipates the idea of dimensional
deconstruction and extra space dimensions, as well as the large mass of the top
quark. Topcolor, and its prediction of "topgluons," will be tested in coming
experiments at the Large Hadron Collider at CERN.

Topcolor rescues the Technicolor model from some of its difficulties in a scheme
dubbed "Topcolor-assisted Technicolor."
In particle physics, the top quark condensate theory is an alternative to the Standard
Model in which a fundamental scalar Higgs field is replaced by a composite field
composed of the top quark and its antiquark. These are bound by a four-fermion
interaction, analogous to Cooper pairs in a BCS superconductor and nucleons in the
Nambu-Jona-Lasinio model. The top quark condenses because its measured mass is
approximately 173 GeV (comparable to the electroweak scale), and so its Yukawa
coupling is of order unity, yielding the possibility of strong coupling dynamics.
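A quick numeric check of the order-unity claim, using the Standard Model relation y_t = √2 m_t/v with v = 246 GeV:

import math

m_top = 173.0   # GeV, measured top quark mass (as quoted above)
v_ew  = 246.0   # GeV, electroweak vacuum expectation value

y_top = math.sqrt(2) * m_top / v_ew
print(f"y_t = {y_top:.3f}")   # ~0.99, i.e. of order unity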

http://en.wikipedia.org/wiki/Color_confinement

Color confinement
From Wikipedia, the free encyclopedia

The color force favors confinement because at a certain range it is more energetically favorable to create a quark-antiquark pair than to continue to elongate the color flux tube. This is analogous to the behavior of an elongated rubber band.

Color confinement, often simply called confinement, is the physics phenomenon that
color charged particles (such as quarks) cannot be isolated singularly, and therefore
cannot be directly observed.[1] Quarks, by default, clump together to form groups, or
hadrons. The two types of hadrons are the mesons (one quark, one antiquark) and
the baryons (three quarks). The constituent quarks in a group cannot be separated
from their parent hadron, and this is why quarks can never be studied or observed in
any more direct way than at a hadron level.[2]

Origin

The reasons for quark confinement are somewhat complicated; no analytic proof
exists that quantum chromodynamics should be confining, but intuitively,
confinement is due to the force-carrying gluons having color charge. As any two
electrically-charged particles separate, the electric fields between them diminish
quickly, allowing (for example) electrons to become unbound from atomic nuclei.
However, as two quarks separate, the gluon fields form narrow tubes (or strings) of
color charge, which tend to bring the quarks together as though they were some kind
of rubber band. This is quite different in behavior from electrical charge. Because of this behavior, the color force experienced by the quarks, in the direction that holds them together, remains constant regardless of their distance from each other.[3][4]

The color force between quarks is large, even on a macroscopic scale, being on the
order of 100,000 newtons.[citation needed] As discussed above, it is constant, and
does not decrease with increasing distance after a certain point has been passed.
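A back-of-the-envelope check of that figure, assuming a QCD string tension of roughly 1 GeV per femtometer (a commonly quoted value):

GEV_IN_JOULES = 1.602e-10   # 1 GeV expressed in joules
FM_IN_METERS  = 1.0e-15     # 1 fm expressed in meters

sigma_gev_per_fm = 0.9      # assumed string tension, ~1 GeV/fm
force_newtons = sigma_gev_per_fm * GEV_IN_JOULES / FM_IN_METERS
print(f"{force_newtons:.2e} N")   # ~1.4e5 N, i.e. order 100,000 newtons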

When two quarks become separated, as happens in particle accelerator collisions, at some point it is more energetically favorable for a new quark–antiquark pair to spontaneously appear than to allow the tube to extend further. As a result of this,
when quarks are produced in particle accelerators, instead of seeing the individual
quarks in detectors, scientists see "jets" of many color-neutral particles (mesons and
baryons), clustered together. This process is called hadronization, fragmentation, or
string breaking, and is one of the least understood processes in particle physics.

The confining phase is usually defined by the behavior of the action of the Wilson
loop, which is simply the path in spacetime traced out by a quark–antiquark pair
created at one point and annihilated at another point. In a non-confining theory, the
action of such a loop is proportional to its perimeter. However, in a confining theory,
the action of the loop is instead proportional to its area. Since the area will be
proportional to the separation of the quark–antiquark pair, free quarks are
suppressed. Mesons are allowed in such a picture, since a loop containing another
loop in the opposite direction will have only a small area between the two loops.
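A toy numeric illustration of the two behaviors, using the Creutz ratio χ(R,T) = −ln[W(R,T)W(R−1,T−1)/(W(R−1,T)W(R,T−1))], which extracts the string tension from an area-law loop and vanishes for a pure perimeter law (the coefficients are arbitrary placeholders):

import math

sigma, mu = 0.2, 0.1   # illustrative string tension / perimeter coefficient

def w_confining(r, t):
    """Area law: <W> ~ exp(-sigma * R * T)."""
    return math.exp(-sigma * r * t)

def w_perimeter(r, t):
    """Perimeter law: <W> ~ exp(-mu * perimeter)."""
    return math.exp(-mu * 2 * (r + t))

def creutz_ratio(w, r, t):
    return -math.log(w(r, t) * w(r - 1, t - 1) / (w(r - 1, t) * w(r, t - 1)))

print(creutz_ratio(w_confining, 3, 3))  # -> 0.2 = sigma (area law)
print(creutz_ratio(w_perimeter, 3, 3))  # -> 0.0 (perimeter law)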
Models exhibiting confinement

Besides QCD in 4D, another model which exhibits confinement is the Schwinger
model.[citation needed] Compact Abelian gauge theories also exhibit confinement in
2 and 3 spacetime dimensions.[citation needed] Confinement has recently been
found in elementary excitations of magnetic systems called spinons.[5]
See also
Quantum chromodynamics
Asymptotic freedom
Deconfining phase
Quantum mechanics
Particle physics
Fundamental force
Dual superconducting model

http://en.wikipedia.org/wiki/Dual_superconducting_model

In the theory of quantum chromodynamics, dual superconductor models attempt to explain confinement of quarks in terms of an electromagnetic dual theory of superconductivity.

In an electromagnetic dual theory the roles of electric and magnetic fields are interchanged. The BCS theory of superconductivity explains superconductivity as the result of the condensation of electric charges into Cooper pairs. In a dual superconductor, an analogous effect occurs through the condensation of magnetic charges (also called magnetic monopoles). In ordinary electromagnetic theory no monopoles have been shown to exist. However, in quantum chromodynamics (the theory of colour charge which explains the strong interaction between quarks) the colour charges can be viewed as (non-abelian) analogues of electric charges, and corresponding magnetic monopoles are known to exist. Dual superconductor models posit that condensation of these magnetic monopoles in a superconductive state explains colour confinement, the phenomenon that only colour-neutral bound states are observed at low energies.

Qualitatively, confinement in dual superconductor models can be understood as a result of the dual to the Meissner effect. The Meissner effect says that a
superconducting metal will try to expel magnetic field lines from its interior. If a
magnetic field is forced to run through the superconductor, the field lines are
compressed in magnetic flux tubes. In a dual superconductor the roles of magnetic
and electric fields are exchanged and the Meissner effect tries to expel electric field
lines. Quarks and antiquarks carry opposite colour charges, and for a quark–antiquark
pair 'electric' field lines run from the quark to the antiquark. If the quark–antiquark
pair are immersed in a dual superconductor, then the electric field lines get
compressed to a flux tube. The energy associated with the tube is proportional to its length, and the potential energy of the quark–antiquark pair is proportional to their separation. A quark–antiquark pair will therefore always remain bound regardless of separation, which explains why no unbound quarks are ever found.[note 1]

Dual superconductors are described by (a dual to) the Landau–Ginzburg model, which
is equivalent to the Abelian Higgs model.

The dual superconductor model is motivated by several observations in calculations using lattice gauge theory. The model, however, also has some shortcomings. In particular, although it confines coloured quarks, it fails to confine the colour of some gluons, allowing coloured bound states at energies observable in particle colliders.

http://en.wikipedia.org/wiki/Lattice_gauge_theory

In physics, lattice gauge theory is the study of gauge theories on a spacetime that
has been discretized into a lattice. Gauge theories are important in particle physics,
and include the prevailing theories of elementary particles: quantum
electrodynamics, quantum chromodynamics (QCD) and the Standard Model. Non-
perturbative gauge theory calculations in continuous spacetime formally involve
evaluating an infinite-dimensional path integral, which is computationally intractable.
By working on a discrete spacetime, the path integral becomes finite-dimensional,
and can be evaluated by stochastic simulation techniques such as the Monte Carlo
method. When the size of the lattice is taken infinitely large and its sites infinitesimally close to each other, the continuum gauge theory is intuitively recovered; a rigorous mathematical proof of this fact is lacking.

Basics

In lattice gauge theory, the spacetime is Wick rotated into Euclidean space and
discretized into a lattice with sites separated by distance a and connected by links. In
the most commonly-considered cases, such as lattice QCD, fermion fields are defined
at lattice sites (which leads to fermion doubling), while the gauge fields are defined
on the links. That is, an element U of the compact Lie group G is assigned to each
link. Hence to simulate QCD, with Lie group SU(3), there is a 3×3 special unitary matrix
defined on each link. The link is assigned an orientation, with the inverse element corresponding
to the same link with the opposite orientation.
Yang–Mills action

The Yang–Mills action is written on the lattice using Wilson loops (named after Kenneth G. Wilson), so that the limit a → 0 formally reproduces the original continuum action.[1] Given a faithful irreducible representation ρ of G, the lattice Yang–Mills action is the sum over all lattice sites of the (real component of the) trace over the n links e1, ..., en in the Wilson loop,

S = K \sum_{\text{Wilson loops}} \mathrm{Re}\,\chi\big(\rho(U(e_1))\cdots\rho(U(e_n))\big).

Here, χ is the character (the trace in the representation ρ), and K plays the role of an inverse coupling. If ρ is a real (or pseudoreal) representation, taking the real component is redundant, because even if the orientation of a Wilson loop is flipped, its contribution to the action remains unchanged.

There are many possible lattice Yang-Mills actions, depending on which Wilson loops
are used in the action. The simplest "Wilson action" uses only the 1×1 Wilson loop, and
differs from the continuum action by "lattice artifacts" proportional to the small lattice spacing a.
By using more complicated Wilson loops to construct "improved actions", lattice artifacts can be
reduced to be proportional to a2, making computations more accurate.
Measurements

Quantities such as particle masses are stochastically calculated using techniques such as the Monte Carlo method. Gauge field configurations are generated with probabilities proportional to e^{−βS}, where S is the lattice action and β is related to the lattice spacing a. The quantity of interest is calculated for each configuration and averaged. Calculations are often repeated at different lattice spacings a so that the result can be extrapolated to the continuum limit, a → 0.
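A minimal Metropolis sketch of this procedure for 2D compact U(1) gauge theory with the Wilson plaquette action S = β Σ_P (1 − cos θ_P), a toy stand-in for QCD; the measured average plaquette can be checked against the exact 2D answer I₁(β)/I₀(β) ≈ 0.446 at β = 1:

import math, random

L, beta = 8, 1.0
theta = [[[0.0, 0.0] for _ in range(L)] for _ in range(L)]  # link angles theta[x][y][mu]

def plaq(x, y):
    # Oriented plaquette angle whose lower-left corner is (x, y).
    return (theta[x][y][0] + theta[(x + 1) % L][y][1]
            - theta[x][(y + 1) % L][0] - theta[x][y][1])

def plaquettes_of_link(x, y, mu):
    # The two plaquettes (in 2D) containing the link (x, y, mu).
    if mu == 0:
        return [(x, y), (x, (y - 1) % L)]
    return [(x, y), ((x - 1) % L, y)]

def local_action(cells):
    return sum(beta * (1.0 - math.cos(plaq(px, py))) for px, py in cells)

def sweep():
    for x in range(L):
        for y in range(L):
            for mu in (0, 1):
                cells = plaquettes_of_link(x, y, mu)
                old, s_old = theta[x][y][mu], local_action(cells)
                theta[x][y][mu] += random.uniform(-1.0, 1.0)
                # Metropolis accept/reject with probability min(1, exp(-dS))
                if random.random() >= math.exp(min(0.0, -(local_action(cells) - s_old))):
                    theta[x][y][mu] = old

for _ in range(500):        # thermalization
    sweep()
avg, n_meas = 0.0, 2000
for _ in range(n_meas):     # measurement sweeps
    sweep()
    avg += sum(math.cos(plaq(x, y)) for x in range(L) for y in range(L)) / L**2
print(avg / n_meas)         # compare with I1(beta)/I0(beta) ~ 0.446 at beta = 1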

Such calculations are often extremely computationally intensive, and can require the
use of the largest available supercomputers. To reduce the computational burden,
the so-called quenched approximation can be used, in which the fermionic fields are
treated as non-dynamic "frozen" variables. While this was common in early lattice
QCD calculations, "dynamical" fermions are now standard.[2] These simulations
typically utilize algorithms based upon molecular dynamics or microcanonical
ensemble algorithms.[3][4]
Other applications

Originally, solvable two-dimensional lattice gauge theories had already been introduced in 1971 as models with interesting statistical properties by the theorist Franz Wegner, who worked in the field of phase transitions.[5]

Lattice gauge theory has been shown to be exactly dual to spin foam models
provided that only 1×1 Wilson loops appear in the action.
See also
Hamiltonian lattice gauge theory
Lattice field theory
Lattice QCD
Quantum triviality

http://en.wikipedia.org/wiki/Lattice_QCD

Lattice QCD is a well-established non-perturbative approach to solving the quantum chromodynamics (QCD) theory of quarks and gluons. It is a lattice gauge theory formulated on a grid or lattice of points in space and time.

Analytic or perturbative solutions in low-energy QCD are hard or impossible due to the highly nonlinear nature of the strong force. This formulation of QCD in discrete rather than continuous spacetime naturally introduces a momentum cutoff of order 1/a, where a is the lattice spacing, which regularizes the theory. As a result, lattice QCD is mathematically well-defined.
Most importantly, lattice QCD provides a framework for investigation of non-perturbative
phenomena such as confinement and quark-gluon plasma formation, which are intractable by
means of analytic field theories.

In lattice QCD, fields representing quarks are defined at lattice sites (which leads to fermion
doubling), while the gluon fields are defined on the links connecting neighboring sites. This
approximation approaches continuum QCD as the spacing between lattice sites is reduced to
zero. Because the computational cost of numerical simulations can increase dramatically as the
lattice spacing decreases, results are often extrapolated to a = 0 by repeated calculations at
different lattice spacings a that are large enough to be tractable.
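A sketch of that extrapolation, fitting synthetic placeholder measurements to the leading-artifact form O(a) = c₀ + c₂a² and reading off the a → 0 value:

import numpy as np

a   = np.array([0.12, 0.09, 0.06])       # lattice spacings in fm (assumed)
obs = np.array([0.492, 0.481, 0.472])    # synthetic measurements of an observable

c2, c0 = np.polyfit(a**2, obs, 1)        # linear fit in a^2: obs = c0 + c2*a^2
print(f"continuum limit (a -> 0): {c0:.4f}")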

Numerical lattice QCD calculations using Monte Carlo methods can be extremely computationally
intensive, requiring the use of the largest available supercomputers. To reduce the computational
burden, the so-called quenched approximation can be used, in which the quark fields are treated
as non-dynamic "frozen" variables. While this was common in early lattice QCD calculations,
"dynamical" fermions are now standard.[1] These simulations typically utilize algorithms based
upon molecular dynamics or microcanonical ensemble algorithms.[2][3]

At present, lattice QCD is primarily applicable at low densities, where the numerical sign problem does not interfere with calculations. Lattice QCD predicts that confined quarks will be liberated into a quark-gluon plasma at a temperature of around 170 MeV. Monte Carlo methods are free from the sign problem when applied to the case of QCD with gauge group SU(2) (QC2D).

Lattice QCD has already made successful contact with many experiments. For example the mass
of the proton has been determined theoretically with an error of less than 2 percent.[4]

Lattice QCD has also been used as a benchmark for high-performance computing, an approach
originally developed in the context of the IBM Blue Gene supercomputer.

Techniques
Monte-Carlo simulations

A frame from a Monte-Carlo simulation illustrating the typical four-dimensional structure of gluon-
field configurations used in describing the vacuum properties of QCD.

Monte-Carlo is a method to pseudo-randomly sample a large space of variables. The importance sampling technique used to select the gauge configurations in the Monte-Carlo simulation imposes the use of Euclidean time, by a Wick rotation of space-time.

In lattice Monte-Carlo simulations the aim is to calculate correlation functions. This is done by
explicitly calculating the action, using field configurations which are chosen according to the
distribution function, which depends on the action and the fields. Usually one starts with the
gauge bosons part and gauge-fermion interaction part of the action to calculate the gauge
configurations, and then uses the simulated gauge configurations to calculate hadronic
propagators and correlation functions.
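A sketch of how a mass is then read off a correlator: at large times C(t) ~ A e^{−mt}, so the effective mass ln(C(t)/C(t+1)) plateaus at the ground-state mass (the correlator below is synthetic, with m = 0.5 in lattice units plus an added excited state):

import math

A, m = 3.0, 0.5                      # ground-state amplitude and mass (lattice units)
corr = [A * math.exp(-m * t) + 0.5 * math.exp(-1.5 * t)   # excited-state contamination
        for t in range(12)]

for t in range(len(corr) - 1):
    m_eff = math.log(corr[t] / corr[t + 1])
    print(t, round(m_eff, 4))        # drifts down to the plateau at m = 0.5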
Fermions on the lattice
Lattice QCD is a way to solve the theory exactly from first principles, without any assumptions, to
the desired precision. However, in practice the calculation power is limited, which requires a
smart use of the available resources. One needs to choose an action which gives the best
physical description of the system, with minimum errors, using the available computational power.
The limited computer resources force one to use physical constants which are different from their
true physical values:
The lattice discretization means a finite lattice spacing and size, which do not exist in the
continuous and infinite space-time. In addition to the automatic error introduced by this, the
limited resources force the use of smaller physical lattices and larger lattice spacing than wanted
in order to minimize errors.
Another unphysical quantity is the quark masses. Simulated quark masses are steadily going down, but to date (2010) they are typically still too high with respect to their real values.

In order to compensate for these errors, the lattice action is improved in various ways, mainly to minimize finite-spacing errors.
Lattice perturbation theory

The lattice was initially introduced by Wilson as a framework for studying strongly coupled theories, such as QCD, non-perturbatively. It was later found to be a regularization also suitable for
perturbative calculations. Perturbation theory involves an expansion in the coupling constant, and
is well-justified in high-energy QCD where the coupling constant is small, while it fails completely
when the coupling is large and higher order corrections are larger than lower orders in the
perturbative series. In this region non-perturbative methods, such as Monte-Carlo sampling of the
correlation function, are necessary.

Lattice perturbation theory can also provide results for condensed matter theory. One can use the
lattice to represent the real atomic crystal. In this case the lattice spacing is a real physical value,
and not an artifact of the calculation which has to be removed, and a quantum field theory can be
formulated and solved on the physical lattice.
See also
Lattice field theory
Lattice gauge theory
QCD matter
QCD sum rules

http://en.wikipedia.org/wiki/Hamiltonian_lattice_gauge_theory

In physics, Hamiltonian lattice gauge theory is a calculational approach to gauge theory and a
special case of lattice gauge theory in which the space is discretized but time is not. The
Hamiltonian is then re-expressed as a function of degrees of freedom defined on a d-dimensional
lattice.

Following Wilson, the spatial components of the vector potential are replaced with Wilson lines
over the edges, but the time component is associated with the vertices. However, the temporal
gauge is often employed, setting the electric potential to zero. The eigenvalues of the Wilson line
operators U(e) (where e is the (oriented) edge in question) take on values on the Lie group G. It is assumed that G is compact, otherwise we run into many problems. The conjugate operator to U(e) is the electric field E(e), whose eigenvalues take on values in the Lie algebra 𝔤 of G. The
Hamiltonian receives contributions coming from the plaquettes (the magnetic contribution) and
contributions coming from the edges (the electric contribution).

Hamiltonian lattice gauge theory is exactly dual to a theory of spin networks. This involves using
the Peter-Weyl theorem. In the spin network basis, the spin network states are eigenstates of the
operator Tr[E(e)²].
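For the SU(2) case specifically (an illustrative assumption; the text leaves G general), those eigenvalues are proportional to the quadratic Casimir j(j+1) of the spin j labeling an edge:

# Eigenvalues of Tr[E(e)^2] on SU(2) spin-network edges, up to normalization:
# proportional to the quadratic Casimir j(j+1) for edge spin j.
for j2 in range(0, 7):            # twice the spin: j = 0, 1/2, 1, ..., 3
    j = j2 / 2
    print(f"j = {j:3.1f} -> j(j+1) = {j * (j + 1):5.2f}")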

http://en.wikipedia.org/wiki/Asymptotic_freedom

In physics, asymptotic freedom is a property of some gauge theories that causes interactions
between particles to become arbitrarily weak at energy scales that become arbitrarily large, or,
equivalently, at length scales that become arbitrarily small (at the shortest distances).

Asymptotic freedom is a feature of quantum chromodynamics (QCD), the quantum field theory of
the nuclear interaction between quarks and gluons, the fundamental constituents of nuclear
matter. Quarks interact weakly at high energies, allowing perturbative calculations by DGLAP of
cross sections in deep inelastic processes of particle physics; and strongly at low energies,
preventing the unbinding of baryons (like protons or neutrons with three quarks) or mesons (like
pions with two quarks), the composite particles of nuclear matter.

Asymptotic freedom was discovered by Frank Wilczek, David Gross, and David Politzer, who shared the 2004 Nobel Prize in Physics.

Discovery

Asymptotic freedom was discovered in 1973 by David Gross and Frank Wilczek, and by David
Politzer. Although these authors were the first to understand the physical relevance to the strong
interactions, in 1969 Iosif Khriplovich discovered asymptotic freedom in the SU(2) gauge theory
as a mathematical curiosity, and Gerardus 't Hooft in 1972 also noted the effect but did not
publish. For their discovery, Gross, Wilczek and Politzer were awarded the Nobel Prize in Physics
in 2004.

The discovery was instrumental in rehabilitating quantum field theory. Prior to 1973, many
theorists suspected that field theory was fundamentally inconsistent because the interactions
become infinitely strong at short-distances. This phenomenon is usually called a Landau pole,
and it defines the smallest length scale that a theory can describe. This problem was discovered
in field theories of interacting scalars and spinors, including quantum electrodynamics, and
Lehmann positivity led many to suspect that it is unavoidable. Asymptotically free theories become
weak at short distances, there is no Landau pole, and these quantum field theories are believed
to be completely consistent down to any length scale.

While the Standard Model is not entirely asymptotically free, in practice the Landau pole can only
be a problem when thinking about the strong interactions. The other interactions are so weak that
any inconsistency can only arise at distances shorter than the Planck length, where a field theory
description is inadequate anyway.
Screening and antiscreening

Charge screening in QED

The variation in a physical coupling constant under changes of scale can be understood
qualitatively as coming from the action of the field on virtual particles carrying the relevant charge.
The Landau pole behavior of quantum electrodynamics (QED, related to quantum triviality) is a
consequence of screening by virtual charged particle-antiparticle pairs, such as electron-positron
pairs, in the vacuum. In the vicinity of a charge, the vacuum becomes polarized: virtual particles
of opposing charge are attracted to the charge, and virtual particles of like charge are repelled.
The net effect is to partially cancel out the field at any finite distance. Getting closer and closer to
the central charge, one sees less and less of the effect of the vacuum, and the effective charge
increases.

In QCD the same thing happens with virtual quark-antiquark pairs; they tend to screen the color
charge. However, QCD has an additional wrinkle: its force-carrying particles, the gluons,
themselves carry color charge, and in a different manner. Each gluon carries both a color charge
and an anti-color magnetic moment. The net effect of polarization of virtual gluons in the vacuum
is not to screen the field, but to augment it and affect its color. This is sometimes called
antiscreening. Getting closer to a quark diminishes the antiscreening effect of the surrounding
virtual gluons, so the contribution of this effect would be to weaken the effective charge with
decreasing distance.

Since the virtual quarks and the virtual gluons contribute opposite effects, which effect wins out
depends on the number of different kinds, or flavors, of quark. For standard QCD with three
colors, as long as there are no more than 16 flavors of quark (not counting the antiquarks
separately), antiscreening prevails and the theory is asymptotically free. In fact, there are only 6
known quark flavors.
Calculating asymptotic freedom

Asymptotic freedom can be derived by calculating the beta-function describing the variation of the
theory's coupling constant under the renormalization group. For sufficiently short distances or
large exchanges of momentum (which probe short-distance behavior, roughly because of the
inverse relation between a quantum's momentum and De Broglie wavelength), an asymptotically
free theory is amenable to perturbation theory calculations using Feynman diagrams. Such
situations are therefore more theoretically tractable than the long-distance, strong-coupling
behavior also often present in such theories, which is thought to produce confinement.

Calculating the beta-function is a matter of evaluating Feynman diagrams contributing to the interaction of a quark emitting or absorbing a gluon. In non-abelian gauge theories such as QCD, the existence of asymptotic freedom depends on the gauge group and number of flavors of interacting particles. To lowest nontrivial order, the beta-function in an SU(N) gauge theory with n_f kinds of quark-like particle is

\beta_1(\alpha) = \frac{\alpha^2}{\pi}\left(-\frac{11N}{6} + \frac{n_f}{3}\right),

where α is the theory's equivalent of the fine-structure constant, g²/(4π) in the units favored by particle physicists. If this function is negative, the theory is asymptotically free. For SU(3), the color charge gauge group of QCD, the theory is therefore asymptotically free if there are 16 or fewer flavors of quarks.

For SU(3), N = 3, and β₁ < 0 gives n_f < 33/2, i.e., at most 16 quark flavors.
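A one-line numeric check of that counting:

def beta1_coefficient(N, n_f):
    """Sign-carrying coefficient of alpha^2/pi in the one-loop beta function."""
    return -11.0 * N / 6.0 + n_f / 3.0

for n_f in (6, 16, 17):
    c = beta1_coefficient(3, n_f)
    print(f"SU(3), n_f = {n_f:2d}: beta_1 ~ {c:+.2f} -> "
          f"{'asymptotically free' if c < 0 else 'not asymptotically free'}")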

http://en.wikipedia.org/wiki/Anomalous_scaling_dimension

In theoretical physics, by anomaly one usually means that the symmetry remains
broken when the symmetry-breaking factor goes to zero. When the symmetry which
is broken is scale invariance, then true power laws usually cannot be found from
dimensional reasoning like in turbulence or quantum field theory. In the latter, the
anomalous scaling dimension of an operator is the contribution of quantum
mechanics to the classical scaling dimension of that operator.

The classical scaling dimension of an operator O is determined by dimensional analysis from the Lagrangian (in 4 spacetime dimensions this means dimension 1 for elementary bosonic fields including the vector potentials, 3/2 for elementary fermionic fields, etc.). However, if one computes the correlator of two operators of this type, one often finds logarithmic divergences arising from one-loop Feynman diagrams. The expansion in the coupling constant has the schematic form

\langle O(x)\,O(0)\rangle \sim \frac{1}{|x|^{2\Delta_0}}\left(1 - 2 g^2 A \ln(\Lambda|x|) + \cdots\right) \approx \frac{1}{|x|^{2\Delta}}, \qquad \Delta = \Delta_0 + g^2 A,

where g is a coupling constant, Δ0 is the classical dimension, and Λ is an ultraviolet cutoff (the maximal allowed energy in the loop integrals). A is a constant that appears in the loop diagrams. The expression above may be viewed as a Taylor expansion of the full quantum dimension.

The term g²A is the anomalous scaling dimension while Δ is the full dimension.
Conformal field theories are typically strongly coupled and the full dimension cannot
be easily calculated by Taylor expansions. The full dimensions in this case are often
called critical exponents. These operators describe conformal bound states with a
continuous mass spectrum.

In particular, 2Δ = d − 2 + η for the critical exponent η of a scalar operator. We have an anomalous scaling dimension when η ≠ 0.
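A small numeric illustration of these definitions, with placeholder values of g and A:

d       = 4            # spacetime dimension
delta_0 = 1.0          # classical dimension of an elementary boson in 4D
g, A    = 0.3, 0.25    # placeholder coupling and one-loop constant

delta = delta_0 + g**2 * A      # full (quantum) scaling dimension
eta   = 2 * delta - (d - 2)     # anomalous exponent; 0 iff no anomaly
print(delta, eta)               # 1.0225, 0.045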

An anomalous scaling dimension indicates a scale-dependent wavefunction renormalization.

Anomalous scaling appears also in classical physics.

http://www.scientificamerican.com/article.cfm?id=the-amazing-disappearing-neutrino

The Amazing Disappearing Antineutrino

A revised calculation suggests that around 3% of particles have gone missing from nuclear
reactor experiments.

April 1, 2011

By Eugenie Samuel Reich of Nature magazine

Neutrinos have long perplexed physicists with their uncanny ability to evade detection, with as
many as two-thirds of the ghostly particles apparently going missing en route from the Sun to
Earth. Now a refined version of an old calculation is causing a stir by suggesting that researchers
have also systematically underestimated the number of the particles' antimatter partners--
antineutrinos--produced by nuclear reactor experiments.

The deficit could be caused by the antineutrinos turning into so-called 'sterile antineutrinos', which
can't be directly detected, and which would be clear evidence for effects beyond the standard
model of particle physics.

In the 1960s, physicist Ray Davis, working deep underground in the Homestake gold mine in
South Dakota, found that the flux of solar neutrinos hitting Earth was a third of that predicted by
calculations of the nuclear reactions in the Sun by theorist John Bahcall. Davis later received a
Nobel prize for his contributions to neutrino astrophysics. That puzzle was considered solved in
2001, when the Sudbury Neutrino Observatory (SNO) in Canada found the missing two-thirds
through an alternative means of detection. The SNO's results were taken as evidence that
neutrinos have a mass, which allows them to oscillate between three flavors: electron, muon and
tau. Davis had only detected the electron neutrinos.

Experiments that measure the rate of antineutrino production from the decay of uranium and
plutonium isotopes have so far produced results roughly consistent with this theory. But the
revised calculation accepted this week by Physical Review D suggests that it's not the whole
story. While waiting for the Double Chooz neutrino experiment in France to become fully
operational, Thierry Lasserre and his colleagues at the French atomic energy commission (CEA)
in Saclay set out to check predictions of the rate of antineutrino production by nuclear reactors.
They repeated a calculation first done in the 1980s by Klaus Schreckenbach at the Technical
University of Munich, using more modern techniques that allowed them to be much more precise.

Their new estimate of the rate of production is around 3% more than previously predicted. This
means that several generations of neutrino and antineutrino experiments have unknowingly
missed a small fraction of the particles. "It was completely a surprise for us," says Lasserre.

Double Chooz consists of two detectors measuring the flux of antineutrinos produced by the
Chooz nuclear power plant in the French Ardennes, one detector about 400 meters away from
the plant and the other 1 kilometer away. The far detector became operational this year.

Stefan Schönert, a neutrino physicist at the Technical University of Munich, says the calculation is
solid, and has been checked with Schreckenbach. "They can reproduce each other's results.
There's no way around this result. It's very solid."

Art McDonald of Queen's University in Kingston, Canada and the SNO says that people have to
look carefully at the calculation, which may itself have a systematic error. But, he adds, "there's
no doubt it would have significance as a physics result if it can be shown with more accuracy."

The result may be pointing to evidence of neutrinos and antineutrinos oscillating into a fourth kind
of neutrino or antineutrino, a so-called 'sterile' version that doesn't interact with ordinary matter,
says Carlo Giunti, a physicist at the University of Turin in Italy. Other experiments have previously
seen evidence for sterile particles, including the Liquid Scintillator Neutrino Detector at Los
Alamos National Laboratory in New Mexico and the Mini Booster Neutrino Experiment, or
MiniBooNE, at Fermilab in Batavia, Illinois, and the search to confirm their existence is a hot area
of physics.

Giunti says that the magnitude of the anomaly uncovered by Lasserre is not statistically
significant on its own, but that it points promisingly in the same direction as another anomaly
found by the SAGE collaboration, which studied neutrinos from a radioactive source at the
Baksan Neutrino Observatory in the Caucasus in 2005. "Before this, there used to be a
contradiction between [reactor and radioactive source] experiments but now they are in
agreement," says Giunti.

Schönert says that one key experiment everyone is waiting for is a measurement showing that
the rate of disappearance of antineutrinos from a source increases with the distance from it. "This
would be the smoking gun," he says.
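A sketch of what that measurement would look like, using the standard two-flavor survival probability P(L) = 1 − sin²(2θ) sin²(1.27 Δm² L/E), with Δm² in eV², L in meters, and E in MeV; the sterile-sector parameters here are illustrative placeholders, not measured values:

import math

sin2_2theta = 0.1     # assumed sterile mixing
dm2         = 1.0     # assumed mass-squared splitting, eV^2
energy_mev  = 4.0     # typical reactor antineutrino energy

def survival(L_m):
    """Probability that an antineutrino is still detectable at distance L."""
    phase = 1.27 * dm2 * L_m / energy_mev
    return 1.0 - sin2_2theta * math.sin(phase)**2

for L in (5, 10, 20, 50):
    print(f"L = {L:3d} m -> P(survive) = {survival(L):.3f}")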

This article is reproduced with permission from the magazine Nature. The article was first
published on April 1, 2011.

http://arxiv.org/abs/astro-ph/0310571

A Map of the Universe


J. Richard Gott III, Mario Jurić, David Schlegel, Fiona Hoyle, Michael Vogeley, Max
Tegmark, Neta Bahcall, Jon Brinkmann
(Submitted on 20 Oct 2003 (v1), last revised 17 Oct 2005 (this version, v2))
We have produced a new conformal map of the universe illustrating recent
discoveries, ranging from Kuiper belt objects in the Solar system, to the galaxies and
quasars from the Sloan Digital Sky Survey. This map projection, based on the
logarithm map of the complex plane, preserves shapes locally, and yet is able to
display the entire range of astronomical scales from the Earth's neighborhood to the
cosmic microwave background. The conformal nature of the projection, preserving
shapes locally, may be of particular use for analyzing large scale structure. Prominent
in the map is a Sloan Great Wall of galaxies 1.37 billion light years long, 80% longer
than the Great Wall discovered by Geller and Huchra and therefore the largest
observed structure in the universe. Comments: Figure 8, and additional material
accessible on the web at: this http URL
Subjects: Astrophysics (astro-ph)
Journal reference: Astrophys.J.624:463,2005
DOI: 10.1086/428890
Cite as: arXiv:astro-ph/0310571v2
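A minimal sketch of the projection idea, treating positions as complex numbers z = r e^{iθ} and applying w = ln z; because ln is analytic the map is conformal (locally shape-preserving), yet some twenty orders of magnitude in r compress into a modest range of Re(w). The radii below, in meters, are illustrative (roughly Earth scale out to the cosmic microwave background):

import cmath, math

for r_m, theta_deg in [(6.4e6, 10.0), (1.5e11, 45.0), (4.3e26, 90.0)]:
    z = cmath.rect(r_m, math.radians(theta_deg))   # point in the complex plane
    w = cmath.log(z)                               # conformal logarithm map
    print(f"r = {r_m:8.1e} m, theta = {theta_deg:5.1f} deg -> "
          f"Re(w) = {w.real:6.2f}, Im(w) = {w.imag:4.2f}")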

http://www.astro.pri...n.edu/universe/

Logarithmic Maps of the Universe

This website contains figures from "Map of the Universe" e-print, by Gott, Juric et al.
The paper has been published in the Astrophysical Journal (Gott et al., 2005, ApJ,
624, 463), and you can also find the manuscript here (note: Figure 8. of the
manuscript has been published as an inset poster, and has to be downloaded
separately (see below)).

The Great Walls -- Largest Structures in the Universe

“Just as a fish may be barely aware of the medium in which it lives and swims, so the microstructure of empty space could be far too complex for unaided human brains.”

Sir Martin Rees, Astronomer Royal, physicist, Cambridge University


Our known Hubble length universe contains hundreds of millions of galaxies that
have clumped together, forming super clusters and a series of massive walls of
galaxies separated by vast voids of empty space.

Great Wall: The most vast structure ever is a collection of superclusters a billion light
years away extending for 5% the length of the entire observable universe. It is
theorized that such structures as the Great Wall form along and follow web-like
strings of dark matter that dictates the structure of the Universe on the grandest of
scales. Dark matter gravitationally attracts baryonic matter, and it is this normal
matter that astronomers see forming long, thin walls of super-galactic clusters.

If it took God one week to make the Earth, going by mass it would take him two
quintillion years to build this thing -- far longer than science says the universe has
existed, and it's kind of fun to have those two the other way around for a change.
Though He could always omnipotently cheat and say "Let there be a Sloan Great
Wall."

The Great Wall is a massive array of astronomical objects named after the
observations which revealed them, the Sloan Digital Sky Survey. An eight year
project scanned over a quarter of the sky to generate full 3-D maps of almost a
million galaxies. Analysis of these images revealed a huge panel of galaxies 1.37
billion light years long, and even the pedantic-sounding .07 is six hundred and sixty
billion trillion kilometers. This is science precisely measuring made-up sounding
numbers.

This isn't the only wall out there -- others exist, all with far greater lengths
than width or depth, actual sheets of galaxies forming some of the most impressive
anythings there are. And these walls are only a special class of galactic filaments,
long strings of matter stretched between mind-breaking expanses of emptiness.

Some of these elongated super clusters have formed a series of walls, one after
another, spaced from 500 million to 800 million light years apart, such that in one
direction alone, 13 Great Walls have formed with the inner and outer walls separated
by less than seven billion light years.

Recently, cosmologists have estimated that some of these galactic walls may have taken from 80 billion to 150 billion years to form, a direct challenge to current estimates of the age of the Universe following the Big Bang.

The huge Sloan Great Wall spans over one billion light years. The Coma cluster is one of the largest observed structures in the Universe, containing over 10,000 galaxies and extending more than 1.37 billion light years in length.

Current theories of "dark energy" and "great attractors" have been developed to
explain why a created universe did not spread out uniformly at the same speed and
in the same spoke-like directions as predicted by theory. But as Sean Carroll of the
Moore Center for Theoretical Cosmology and Physics at Cal Tech is fond of saying,
"We don't have a clue."

Britain’s Astronomer Royal, Lord Rees, says some of the cosmos’s biggest mysteries,
like the Big Bang and even the nature of our own self awareness, might never be
resolved. Rees, who is also President of the Royal Society, says that a correct basic
theory of the universe might be present, but may be just too tough for human beings’
brains to comprehend.

http://www.space.com...black-hole.html

Images from NASA's Swift satellite were combined in this UV/optical/X-ray view of the
explosion, which is known as GRB 110328A. The blast was detected in X-rays, which
were collected on March 28.
CREDIT: NASA/Swift/Stefan Immler

A huge, powerful star explosion detonated in deep space last week — an ultra-bright conflagration that has astronomers scratching their heads over exactly how it happened.
The explosion may be the death cry of a star as it was ripped apart by a black hole,
scientists said. High-energy radiation continues to brighten and fade from the March
28 blast's location, about 3.8 billion light-years from Earth in the constellation Draco.

Astronomers say they've never witnessed an explosion so bright, long-lasting and variable before, according to NASA officials.

The explosion looks like a gamma-ray burst — the most powerful type of explosion in the universe, which usually marks the destruction of a massive star — but the flaring
emissions from these dramatic events never last more than a few hours, researchers
said.

"We know of objects in our own galaxy that can produce repeated bursts, but they
are thousands to millions of times less powerful than the bursts we are seeing now,"
said Andrew Fruchter, of the Space Telescope Science Institute in Baltimore, in a
statement today (April 7). "This is truly extraordinary."

This is a visible-light image of GRB 110328A's host galaxy (arrow) taken on April 4 by
the Hubble Space Telescope's Wide Field Camera 3. The galaxy is 3.8 billion light-
years away.
CREDIT: NASA/ESA/A. Fruchter (STScI)

Massive explosion in space

Scientists are using several NASA space observatories, working in concert, to study
the massive blast.

The space explosion was detected on March 28 when an instrument on NASA's Swift
satellite detected an X-ray eruption, the first in a series of powerful blasts. The Swift
observatory determined a rough position for the explosion, which scientists are now calling gamma-ray burst (GRB) 110328A.

After Swift's discovery, an image taken by the Hubble Space Telescope on Monday
(April 4) pinpointed the exact source of the blast — the center of a small galaxy in
the Draco constellation. That same day, astronomers used the Chandra X-ray
Observatory to make a four-hour exposure of the puzzling source.

Although research is ongoing, astronomers say that the unusual explosion likely
arose when a star wandered too close to its galaxy's central black hole. Intense tidal
forces probably tore the star apart, and the infalling gas continues to stream toward
the black hole.

According to this model, the spinning black hole formed an outflowing jet, which is
blasting powerful X-rays and gamma rays in our direction, researchers said.

"The fact that the explosion occurred in the center of a galaxy tells us it is most likely
associated with a massive black hole," said Neil Gehrels, the lead scientist for Swift at
NASA's Goddard Space Flight Center, in a statement. "This solves a key question
about the mysterious event."
Looking down the barrel of the jet

Most galaxies, including our own, contain central black holes with millions of times
the mass of our sun. The disrupted star probably succumbed to a black hole less
massive than the one at the heart of our Milky Way galaxy. The Milky Way's central
black hole has a mass that is about 4 million times that of the sun, researchers said.

GRB 110328A has repeatedly flared in the days following its discovery by Swift. This
plot shows the brightness changes recorded by Swift's X-ray Telescope.
CREDIT: NASA/Swift/Penn State/J. Kennea

Astronomers have detected stars disrupted by supermassive black holes before, but
none have shown the X-ray brightness and variability seen in GRB 110328A, which
has flared repeatedly. Since April 3, for example, it has brightened by more than five
times.

Scientists think that the X-rays may be coming from matter moving near the speed of
light in a particle jet that forms as the star's gas falls toward the black hole.

"The best explanation at the moment is that we happen to be looking down the barrel
of this jet," said Andrew Levan at the University of Warwick in the United Kingdom,
who led the Chandra observations. "When we look straight down these jets, a
brightness boost lets us view details we might otherwise miss."

Astronomers plan additional Hubble observations to see if the galaxy's core changes
brightness.

http://www.nasa.gov/...-106_SWIFT.html

NASA Telescopes Join Forces To Observe Unprecedented Explosion

WASHINGTON -- NASA's Swift satellite, Hubble Space Telescope and Chandra X-ray
Observatory have teamed up to study one of the most puzzling cosmic blasts ever
observed. More than a week later, high-energy radiation continues to brighten and
fade from its location.
Astronomers say they have never seen such a bright, variable, high-energy, long-
lasting burst before. Usually, gamma-ray bursts mark the destruction of a massive
star, and flaring emission from these events never lasts more than a few hours.

Although research is ongoing, astronomers feel the unusual blast likely arose when a
star wandered too close to its galaxy's central black hole. Intense tidal forces
probably tore the star apart, and the infalling gas continues to stream toward the
hole. According to this model, the spinning black hole formed an outflowing jet along
its rotational axis. A powerful blast of X- and gamma rays is seen when the jet is
pointed in our direction.

On March 28, Swift's Burst Alert Telescope discovered the source in the constellation
Draco when it erupted with the first in a series of powerful blasts.

"We know of objects in our own galaxy that can produce repeated bursts, but they
are thousands to millions of times less powerful than the bursts we are seeing. This is
truly extraordinary," said Andrew Fruchter at the Space Telescope Science Institute in
Baltimore.
Swift determined a position for the explosion, which now is cataloged as gamma-ray
burst (GRB) 110328A, and informed astronomers worldwide.

As dozens of telescopes turned to study the spot, astronomers quickly noticed a small, distant galaxy very near the Swift position. A deep image taken by Hubble on
Monday, April 4, pinpointed the source of the explosion at the center of this galaxy,
which lies 3.8 billion light-years away from Earth. That same day, astronomers used
NASA's Chandra X-ray Observatory to make a four-hour-long exposure of the puzzling
source. The image, which locates the X-ray object 10 times more precisely than Swift,
shows it lies at the center of the galaxy Hubble imaged.

"We have been eagerly awaiting the Hubble observation," said Neil Gehrels, the lead
scientist for Swift at NASA's Goddard Space Flight Center in Greenbelt, Md. "The fact
that the explosion occurred in the center of a galaxy tells us it is most likely
associated with a massive black hole. This solves a key question about the
mysterious event."

Most galaxies, including our own, contain central black holes with millions of times
the sun's mass; those in the largest galaxies can be a thousand times larger. The
disrupted star probably succumbed to a black hole less massive than the Milky
Way's, which has a mass four million times that of our sun.

Astronomers previously have detected stars disrupted by supermassive black holes, but none have shown the X-ray brightness and variability seen in GRB 110328A. The
source has undergone numerous flares. Since Sunday, April 3, for example, it has
brightened by more than five times.

Scientists think the X-rays may be coming from matter moving near the speed of
light in a particle jet that forms along the rotation axis of the spinning black hole as
the star's gas falls into a disk around the black hole.

"The best explanation at the moment is we happen to be looking down the barrel of
this jet," said Andrew Levan at the University of Warwick in the United Kingdom, who
led the Chandra observations. "When we look straight down these jets, a brightness
boost lets us view details we might otherwise miss."

This brightness increase, which is called relativistic beaming, occurs when matter
moving close to the speed of light is viewed nearly head on. Astronomers plan
additional Hubble observations to see if the galaxy's core changes brightness.
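A small sketch of that boost, using the standard Doppler factor δ = 1/[γ(1 − β cos θ)]; the observed flux of a jet scales as a steep power of δ (δ³ or δ⁴ depending on geometry), and the jet speed assumed below is a placeholder:

import math

def doppler_factor(beta, theta_deg):
    """Relativistic Doppler factor for speed beta*c at angle theta to the line of sight."""
    gamma = 1.0 / math.sqrt(1.0 - beta**2)
    return 1.0 / (gamma * (1.0 - beta * math.cos(math.radians(theta_deg))))

beta = 0.99  # assumed jet speed, near c
for theta in (0.0, 5.0, 30.0):
    d = doppler_factor(beta, theta)
    print(f"theta = {theta:4.1f} deg: delta = {d:6.2f}, delta^4 = {d**4:10.1f}")

Looking straight down the jet (θ = 0) gives a boost of tens of thousands in flux, which is why an on-axis orientation makes such events so strikingly bright.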

Goddard manages Swift and Hubble. NASA's Marshall Space Flight Center in
Huntsville, Ala., manages Chandra. Hubble was built and is operated in partnership
with the European Space Agency. Science operations for all three missions include
contributions from many national and international partners.

For more information and images associated with these observations, visit:

http://www.nasa.gov/...ntegration.html

I'm particularly fascinated by spinning black holes and how they cause extreme frame dragging of space-time in their environment. Even the Earth has a bit of a frame-dragging effect, but nothing like what spinning black holes can do.

http://www.nasa.gov/...Gamma_Rays.html

Breakthrough Study Confirms Cause Of Short Gamma-Ray Bursts

WASHINGTON -- A new supercomputer simulation shows the collision of two neutron stars can naturally produce the magnetic structures thought to power the high-speed particle jets associated with short gamma-ray bursts (GRBs). The study provides the most detailed glimpse of the forces driving some of the universe's most energetic explosions.

The state-of-the-art simulation ran for nearly seven weeks on the Damiana computer
cluster at the Albert Einstein Institute (AEI) in Potsdam, Germany. It traces events
that unfold over 35 milliseconds -- about three times faster than the blink of an eye.

GRBs are among the brightest events known, emitting as much energy in a few
seconds as our entire galaxy does in a year. Most of this emission comes in the form
of gamma rays, the highest-energy form of light.

"For the first time, we've managed to run the simulation well past the merger and the
formation of the black hole," said Chryssa Kouveliotou, a co-author of the study at
NASA's Marshall Space Flight Center in Huntsville, Ala. "This is by far the longest
simulation of this process, and only on sufficiently long timescales does the magnetic
field grow and reorganize itself from a chaotic structure into something resembling a
jet."

GRBs longer than two seconds are the most common type and are widely thought to
be triggered by the collapse of a massive star into a black hole. As matter falls
toward the black hole, some of it forms jets in the opposite direction that move near
the speed of light. These jets bore through the collapsing star along its rotational axis
and produce a blast of gamma rays after they emerge. Understanding short GRBs,
which fade quickly, proved more elusive. Astronomers had difficulty obtaining precise
positions for follow-up studies.

That began to change in 2004, when NASA's Swift satellite began rapidly locating
bursts and alerting astronomers where to look.

"For more than two decades, the leading model of short GRBs was the merger of two
neutron stars," said co-author Bruno Giacomazzo at the University of Maryland and
NASA's Goddard Space Flight Center in Greenbelt, Md. "Only now can we show that
the merger of neutron stars actually produces an ultrastrong magnetic field
structured like the jets needed for a GRB."

A neutron star is the compressed core left behind when a star weighing less than
about 30 times the sun's mass explodes as a supernova. Its matter reaches densities
that cannot be reproduced on Earth -- a single spoonful outweighs the Himalayan
Mountains.

The simulation began with a pair of magnetized neutron stars orbiting just 11 miles
apart. Each star packed 1.5 times the mass of the sun into a sphere just 17 miles
across and generated a magnetic field about a trillion times stronger than the sun's.
In 15 milliseconds, the two neutron stars crashed, merged and transformed into a
rapidly spinning black hole weighing 2.9 suns. The edge of the black hole, known as
its event horizon, spanned less than six miles. A swirling chaos of superdense matter
with temperatures exceeding 18 billion degrees Fahrenheit surrounded the newborn
black hole. The merger amplified the strength of the combined magnetic field, but it
also scrambled it into disarray.
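A rough consistency check of that horizon figure: a non-spinning hole of this mass would have Schwarzschild radius 2GM/c², while a maximally spinning (Kerr) hole has horizon radius GM/c², half of that, so the rapidly spinning remnant can indeed span less than six miles:

G, c, M_sun = 6.674e-11, 2.998e8, 1.989e30   # SI units

M   = 2.9 * M_sun                 # remnant mass from the simulation
r_s = 2 * G * M / c**2            # Schwarzschild radius
print(f"Schwarzschild diameter: {2 * r_s / 1609.34:.1f} miles")  # ~10.6
print(f"Extremal-Kerr diameter: {r_s / 1609.34:.1f} miles")      # ~5.3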

Over the next 11 milliseconds, gas swirling close to the speed of light continued to
amplify the magnetic field, which ultimately became a thousand times stronger than
the neutron stars' original fields. At the same time, the field became more organized
and gradually formed a pair of outwardly directed funnels along the black hole's
rotational axis.

This is exactly the configuration needed to power the jets of ultrafast particles that
produce a short gamma-ray burst. Neither of the magnetic funnels was filled with
high-speed matter when the simulation ended, but earlier studies have shown that
jet formation can occur under these conditions.

"By solving Einstein's relativity equations as never before and letting nature take its
course, we've lifted the veil on short GRBs and revealed what could be their central
engine," said Luciano Rezzolla, the study's lead author at AEI. "This is a long-awaited
result. Now it appears that neutron star mergers inevitably produce aligned jet-like
structures in an ultrastrong magnetic field."

The study is available online and will appear in the May 1 edition of The Astrophysical
Journal Letters.

The authors note the ultimate proof of the merger model will have to await the
detection of gravitational waves -- ripples in the fabric of space-time predicted by
relativity. Merging neutron stars are expected to be prominent sources, so the
researchers also computed what the model's gravitational-wave signal would look
like. Observatories around the world are searching for gravitational waves, so far
without success because the signals are so faint.

http://www.space.com...menal-rate.html

This illustration shows a gas disk orbiting a black hole, with X-rays pouring out of the
inner, white-shaded region. For the non-spinning black hole (left), this inner radius is
large. For the fast-spinning black hole (right), the gas can orbit very near the event
horizon, so the radius is much smaller.
CREDIT: NASA/CXC/M.Weiss

X-ray vision has brought astronomers closer than ever to completely characterizing a
black hole, a place where strange things happen.

Astronomers measured the spinning speed of three black holes, finding that one
rotates at a breakneck 950 times per second, nearing its theoretical rotation limit of
1,150 spins a second. The black hole lies within the constellation Aquila (the Eagle)
about 35,000 light-years from Earth.

The finding represents an important step toward understanding these invisible objects.

Mass and spin

When any mass, such as a star, becomes more compact than a certain limit, its own
gravity becomes so strong that the object collapses to a singular point: a black hole.
The spin of a star is thought to carry over into the black hole that forms from the
star's collapse. With its mass far more compact, the spin rate ought to be
phenomenal, much as a skater spins faster by pulling in the arms during a pirouette.
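A minimal sketch of that spin-up from angular-momentum conservation, using illustrative numbers (the Sun-like progenitor and the 10-km remnant are assumptions, not figures from the article):

    # Angular momentum of a rigid sphere: L = I*omega with I ~ M*R^2,
    # so at fixed L the spin period scales as (R_new / R_old)^2.
    P_STAR = 25 * 86400      # Sun-like rotation period, s (about 25 days)
    R_STAR = 6.96e8          # Sun-like radius, m
    R_REMNANT = 1.0e4        # 10 km compact remnant, m (illustrative)

    p_new = P_STAR * (R_REMNANT / R_STAR) ** 2
    print(f"collapsed spin period ~ {p_new * 1e3:.2f} ms "
          f"(~{1 / p_new:.0f} rotations per second)")

This idealized estimate, which ignores the mass shed in the supernova, gives a period of about half a millisecond, showing why even a leisurely rotating star can leave behind a remnant spinning thousands of times per second.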

While astronomers have calculated the masses of more than a dozen black holes,
spin-speed measurements have remained elusive. Until now, the spin rate of only
one other black hole has been accurately measured, according to the researchers.

"Ever since the community figured out many years ago how to measure black hole
mass, measuring spin has been the holy grail in this field," said Jeffrey McClintock of
the Harvard-Smithsonian Center for Astrophysics (CfA) in Cambridge, Mass.

Powerful tug

A black hole's gravity, at a distance, behaves like that of a star of the same mass. If
the Sun were to suddenly become a black hole, for example, its gravitational effect
on Earth would not change.

"When you take a black hole and you try putting an object into orbit around it, you
have no trouble if you're doing it at a large distance," said CfA's Ramesh Narayan.

But as swirling matter gets closer to a black hole, it starts orbiting faster and faster
until it reaches the jaws of the dark behemoth. Just before the gas and dust get
devoured, the matter heats up to millions of degrees, unleashing jets of X-rays.

The scientists, led by McClintock and Narayan, used NASA's Rossi X-ray Timing
Explorer satellite data to measure this radiation and calculate the area of this disk of
radiation (seen in images as a bright-white ring around the black center).

"Before, [the matter] was swirling around, happily, very slowly just spiraling in, and
then it reaches this radius and then bang, it just freefalls into the black hole,"
Narayan told SPACE.com.

Inside this radius, the gas is falling in so quickly it doesn't send out much radiation.

Spin speed

The faster a black hole spins, the smaller its critical radius. That's because when a
black hole is spinning, it drags space-time around with it. So if surrounding matter is
spinning in the same direction as the black hole, it gets tugged along due to this so-
called frame-dragging effect. "The space is being pulled, so it's helping the particle
go around, so it's able to hang on much closer to the black hole," Narayan explained.

"If a particle is going around a black hole in the same direction as the spin of the
black hole, then it turns out that it can be comfortable. It's able to find a circular orbit
even at much smaller radii," Narayan explained.
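This shrinking innermost stable circular orbit (ISCO) is described by the standard Bardeen-Press-Teukolsky formula; the sketch below (an illustration, not the researchers' code) evaluates it for a few spin values, with radii in units of GM/c^2:

    def isco_radius(a, prograde=True):
        """Innermost stable circular orbit, in units of GM/c^2, around a
        black hole of dimensionless spin a (Bardeen, Press & Teukolsky 1972)."""
        z1 = 1 + (1 - a * a) ** (1 / 3) * ((1 + a) ** (1 / 3) + (1 - a) ** (1 / 3))
        z2 = (3 * a * a + z1 * z1) ** 0.5
        root = ((3 - z1) * (3 + z1 + 2 * z2)) ** 0.5
        return 3 + z2 - root if prograde else 3 + z2 + root

    for a in (0.0, 0.5, 0.998):
        print(f"spin a = {a}: prograde ISCO = {isco_radius(a):.2f} GM/c^2")

A non-spinning hole keeps stable orbits outside 6 GM/c^2, while near-maximal spin lets the disk reach in to about 1.24 GM/c^2, which is why the measured inner edge of the X-ray-emitting disk encodes the spin.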

They found two of the black holes spin at less than 50 percent of their maximum
rates, while the black hole called GRS1915+105, which has 14 times the mass of the
Sun, rotates between 82 and 100 percent of its maximum spin speed.
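The quoted ceiling of roughly 1,150 rotations per second follows from the horizon angular velocity of a maximally spinning (extremal Kerr) hole, f_max = c^3 / (4*pi*G*M). A quick check with the 14-solar-mass figure (a sketch, not the team's calculation):

    import math

    G, C, M_SUN = 6.674e-11, 2.998e8, 1.989e30

    def max_spin_hz(mass_solar):
        """Rotation frequency of the horizon of a maximally spinning
        (extremal Kerr) black hole: f = c^3 / (4*pi*G*M)."""
        return C ** 3 / (4 * math.pi * G * mass_solar * M_SUN)

    f_max = max_spin_hz(14)
    print(f"limit ~ {f_max:.0f} rotations/s; 950/s is {950 / f_max:.0%} of maximum")

This comes out near 1,150 rotations per second, and 950 of those is about 82 percent of the maximum, matching the range quoted above.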

Each black hole is part of what's called an X-ray binary system, in which two objects
orbit each other, with gas from one (a normal star like the Sun) getting pulled toward
the black hole.

The results are published in today's issue of the Astrophysical Journal.

http://www.space.com...e-galaxies.html

The backward spin of a number of black holes could create mysterious jets of plasma
that control the fate of galaxies, scientists now suggest.

At the heart of galaxies, astronomers have routinely detected what seem to be
supermassive black holes millions to billions of times the mass of our sun. Roughly a
hundredth of these giants spew out jets of plasma that extend out in opposite
directions.

These jets control how stars and other bodies form by injecting huge amounts of
energy into the universe, playing a crucial role in the evolution of clusters of galaxies,
the largest structures in the universe. However, it remains a mystery as to how these
jets form.

To investigate the origin of these powerful jets, scientists compared several dozen
galaxies whose supermassive black holes spit out jets to other galaxies whose black
holes don't. All these black holes featured accretion disks, clumps of gas and dust
whirling into the maws of these dark objects. Scientists have long known that black
holes spin.

Relying on data collected by a Japanese space telescope dubbed Suzaku, researchers
found that jets might form right outside black holes that spin in the opposite direction
from their accretion disks. Such retrograde spin could warp space-time in a way that
forces the innermost portions of accretion disks outward, leading to "a piling of
magnetic fields that provides the force to fuel a jet," said researcher Dan Evans at
MIT's Kavli Institute for Astrophysics and Space Research.

The scientists looked at light from the super-hot coronas of accretion disks, made of
plasma, heated by magnetic fields, that lies above and below the disks, sandwiching
them. These coronas generate copious amounts of X-rays that Suzaku can detect.

A fraction of light from the coronas reflects off the accretion disks, resulting in a
distinct pattern called the Compton reflection hump. The majority of a corona's X-ray
emissions should come from near the black hole, where matter from the accretion
disk falls in fastest and is hottest. As such, the Compton reflection hump should also
be most prominent there.

However, jet-emitting black holes didn't have the Compton reflection hump. This
suggests their accretion disks had no inner regions near the black holes to reflect
light from the corona.

This gap in a jet-emitting black hole's accretion disk could result from a backward whirl.

Supercomputer models suggest that when galaxies collide, the merging of
supermassive black holes can give the resulting giants a decent amount of spin, and
depending on the dynamics of that merger (for instance, if galaxies of different sizes
collide) a retrograde black hole could result.

Spinning black holes drag space-time around them, and a retrograde spin would push
out the orbit of the innermost portion of a black hole's accretion disk.
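The same Bardeen-Press-Teukolsky orbit formula sketched earlier quantifies that push. For one and the same spin, a counter-rotating disk's innermost stable orbit sits several times farther out than a co-rotating one (again an illustration under assumed numbers, not the team's analysis):

    def isco_radius(a, prograde=True):
        # Innermost stable circular orbit, in units of GM/c^2
        z1 = 1 + (1 - a * a) ** (1 / 3) * ((1 + a) ** (1 / 3) + (1 - a) ** (1 / 3))
        z2 = (3 * a * a + z1 * z1) ** 0.5
        root = ((3 - z1) * (3 + z1 + 2 * z2)) ** 0.5
        return 3 + z2 - root if prograde else 3 + z2 + root

    a = 0.9   # illustrative spin value
    print(f"prograde: {isco_radius(a):.2f} GM/c^2, "
          f"retrograde: {isco_radius(a, prograde=False):.2f} GM/c^2")
    # prograde ~2.32 vs retrograde ~8.72: counter-rotation pushes the
    # disk's inner edge out nearly fourfold, opening the observed gap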

"David Garofalo, a general relativity specialist in our collaboration, has a way to


describe this," Evans said. "Picture trying to get as close to the edge of a ceiling fan
with a pencil in your hand without hitting the fan. It's much easier to get close if
you're co-rotating with the fan, moving the same direction as it, as the fan creates a
sucking effect. If you're moving in the opposite direction, counter-rotating with the
spin of that fan, the air is effectively pushed out at you, generating an opposing
force, and you get much further from that fan. The same thing happens with spinning
black holes, where the force you feel is roughly analogous to the wind."

In the future, Evans said, NASA's Nuclear Spectroscopic Telescope Array (NuSTAR), a
satellite planned for launch in 2011, may help astronomers solve this black hole
mystery, being 10 to 50 times more sensitive than current technology.

The scientists detailed their findings in the Feb. 10 issue of the Astrophysical Journal.

http://www.nasa.gov/...grb110328A.html

The center of this image contains an extraordinary gamma-ray burst (GRB) called
GRB 110328A, observed with NASA's Chandra X-ray Observatory. This Chandra
observation confirms the association of GRB 110328A with the core of a distant
galaxy and shows that it was an exceptionally long lived and luminous event
compared to other GRBs.

The red cross shows the position of a faint
galaxy -- located about 3.8 billion light years from Earth -- observed with NASA's
Hubble Space Telescope and the Gemini-North telescope on the ground. Allowing for
experimental errors, the position of the galaxy is indistinguishable from that of the X-
ray source, showing that the source is located close to the middle of the galaxy. This
is consistent with the idea, suggested by some astronomers, that a star was torn
apart by a supermassive black hole at the center of the galaxy. This idea differs from
the usual interpretation for a GRB, involving the production of a jet when a black hole
or neutron star forms after the collapse of a massive star or a merger between two
neutron stars.

Remarkably, this "tidal disruption" event may have been caught in real time, rather
than detected later from analyzing archival observations. However, this X-ray source
is about a hundred times brighter than previously observed tidal disruptions. One
possible explanation for this very bright radiation is that debris from the disrupted
star fell towards the black hole in a disk and the swirling, magnetized matter
generated intense electromagnetic fields that created a powerful jet of particles. If
this jet is pointed toward Earth it would boost the observed brightness of the source.
This scenario has already been suggested by observers to explain the bright and
variable X-ray emission observed by NASA's Swift telescope.
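A minimal sketch of why pointing matters (the Lorentz factor below is an illustrative assumption, not a value from the observations): the flux from a compact blob of jet material scales roughly as the fourth power of its relativistic Doppler factor, so an on-axis jet looks enormously brighter than the same jet seen from the side.

    import math

    def doppler_factor(gamma, theta_deg):
        """Doppler factor of material moving with Lorentz factor gamma
        at angle theta (degrees) to the line of sight."""
        beta = math.sqrt(1 - 1 / gamma**2)
        theta = math.radians(theta_deg)
        return 1 / (gamma * (1 - beta * math.cos(theta)))

    gamma = 10   # illustrative jet Lorentz factor
    boost = (doppler_factor(gamma, 0) / doppler_factor(gamma, 30)) ** 4
    print(f"on-axis vs 30-degrees-off flux ratio ~ {boost:.1e}")   # ~6e5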

This observation was part of a so-called target of opportunity, or TOO, led by Andrew
Levan from the University of Warwick in the UK. A TOO allows the telescope to react
quickly to unpredictable cosmic events, within 24 hours in some situations. Chandra
scientists and engineers can decide to alter the scheduled observations and instead
point the telescope to another target if the circumstances warrant it. This process
was put into place once the discovery of GRB 110328A with Swift was announced on
March 28th, 2011. The Chandra team was able to reset the telescope's schedule to
observe GRB 110328A early in the morning of Monday, April 4th for a period of just
over four hours.

Credits: NASA/CXC/Warwick/A.Levan et al.

Cosmic burst in distant galaxy puzzles NASA


By Agence France-Presse
Thursday, April 7th, 2011 -- 6:50 pm

WASHINGTON – NASA is studying a surprising cosmic burst at the center of a distant
galaxy that has burned for more than a week, longer than astronomers have ever
seen before, the US space agency said Thursday.

Calling it "one of the most puzzling cosmic blasts ever observed," NASA said it has
mobilized the Hubble Space Telescope along with its Swift satellite and Chandra X-
ray Observatory to study the phenomenon.

"More than a week later, high-energy radiation continues to brighten and fade from
its location," NASA said in a statement.

"Astronomers say they have never seen such a bright, variable, high-energy, long-
lasting burst before. Usually, gamma-ray bursts mark the destruction of a massive
star, and flaring emission from these events never lasts more than a few hours."

The first in a series of explosions was detected by a NASA telescope on March 28 in
the constellation Draco.

Astronomers think the blast occurred "when a star wandered too close to its
galaxy's central black hole," NASA said.

"Intense tidal forces probably tore the star apart, and the infalling gas continues to
stream toward the hole. According to this model, the spinning black hole formed an
outflowing jet along its rotational axis. A powerful blast of X- and gamma rays is seen
when the jet is pointed in our direction."

On April 4, the Hubble telescope spotted the source of the explosion at the center of
a galaxy 3.8 billion light-years away from Earth.

"We have been eagerly awaiting the Hubble observation," said Neil Gehrels, the lead
scientist for Swift at NASA's Goddard Space Flight Center in Greenbelt, Maryland.

"The fact that the explosion occurred in the center of a galaxy tells us it is most likely
associated with a massive black hole. This solves a key question about the
mysterious event."

The Swift telescope has catalogued the event as gamma-ray burst (GRB) 110328A,
alerting astronomers worldwide to its existence for further study.

http://www.rawstory....y-puzzles-nasa/

Baffling blowup in distant galaxy


High-energy blast has gone on for 11 days
By Ron Cowen
Web edition: Thursday, April 7th, 2011

COSMIC FIREWORKS: This view of a puzzling cosmic explosion combines images from
the Swift satellite's ultraviolet/optical telescope (white and purple) and its X-ray
telescope (yellow and red), recorded over a 3.4-hour period on March 28, 2011.
Stefan Immler/NASA GSFC, NASA, Swift

Astronomers have witnessed a cosmic explosion so strange they don’t even know
what to call it. Although the blowup, discovered with NASA’s Swift satellite on March
28, emits high-energy radiation like a gamma-ray burst would, the event has now
lasted for 11 days. Gamma-ray bursts last for an average of about 30 seconds.

Also unlike a gamma-ray burst, the explosion has faded and brightened, emitting
staccato pulses of energetic radiation lasting for hundreds of seconds.

DISTANT EXPLOSION: The Hubble Space Telescope was able to pinpoint the location of
a recently observed cosmic explosion, showing that it took place in the center of a
galaxy that lies 3.8 billion light-years from Earth. This image may support the idea
that the fireworks come from a star that fell into a supermassive black hole at the
core of the galaxy. A. Fruchter/STScI, NASA, ESA

“It’s either a phenomenon we’ve never seen before or a familiar event that we’ve
never viewed in this way before,” says Andrew Fruchter of the Space Telescope
Science Institute in Baltimore. The outburst might have been generated by a star torn
to shreds when it ventured too close to a black hole in its host galaxy, he suggests.
Gas from the star falling into the black hole could have triggered the gravitational
monster to emit a jet of X-rays and gamma rays that by chance happens to point
directly at Earth.

A radio-wavelength image taken March 29 along with a Hubble Space Telescope image
taken in visible light on April 4 supports that model. The images show that the
explosion took place 3.8 billion light-years from Earth, at the center of a galaxy
where a supermassive black hole would lie. It’s also possible that the star might have
been ripped apart by a smaller black hole, Fruchter notes.

“Tidal disruption of a star by a black hole seems very plausible,” says Andrew
MacFadyen of New York University. The blast’s duration “is much longer than
anything we'd naturally expect from [explosive] collapse of a single star,” which is
the traditional model for producing a gamma-ray burst, he says.

But Stan Woosley of the University of California, Santa Cruz, says the event might be
explained by the gravitational collapse of a giant star into a black hole, a scaled-up
version of the process that usually produces a gamma-ray burst. In Woosley’s
scenario, the core of the giant star collapses to form a black hole but it takes days for
the outer layers to fall in and emit radiation, accounting for the unusually long
duration of the observed explosion.

http://www.sciencene..._distant_galaxy

http://www.physorg.c...-gamma-ray.html

The engine that powers short gamma-ray bursts

April 8th, 2011 in Space & Earth / Astronomy


Two neutron stars merge within milliseconds to form a black hole. A strong magnetic
field forms along the rotational axis, creating a jet that shoots ultra-hot matter out
into space. Gamma-ray bursts can occur in the jet. © L. Rezzolla (AEI) & M. Koppitz
(AEI & Zuse Institute Berlin)

(PhysOrg.com) -- These explosions have been puzzling scientists for years: brief
flashes of gamma rays that can release more energy in a fraction of a second than
our entire galaxy, with its 200 billion stars, releases in a year. What causes those
explosions?

Scientists working with Luciano Rezzolla at the Max Planck Institute for Gravitational
Physics are now one step closer to solving the riddle. In six-week-long computations
carried out on the Institute's supercomputer, the researchers simulated the merger of
two neutron stars that have a small magnetic field and that, when they merge, form a
black hole surrounded by a hot torus. In this process, an ultra-strong magnetic field
with a jet-like structure forms along the rotational axis, and it is this magnetic field
that could lie behind the generation of short gamma-ray bursts: out of the chaos that
resulted from the collision, an ordered structure emerged – a jet in which short
gamma-ray bursts can occur.

The first astrophysical gamma-ray explosion was observed by pure coincidence: in
the late 1960s, an American spy satellite looking for evidence of above-ground
atomic bomb tests detected the first gamma-ray burst (GRB). It came not from Earth,
but from outer space. Between 1991 and its deorbit in June 2000, America's Compton
satellite registered about one GRB per day – yet the cause of these massive
explosions in the universe remained a mystery.

[Video on web page http://www.physorg.c...gamma-ray.html]


State-of-the-art supercomputer models show that merging neutron stars can power a
short gamma-ray burst.

Coalescing neutron stars were believed to be the most likely culprits. However, the
scientists did not understand how the chaos resulting from the merger of these
20-kilometer-wide, extremely dense spheres could produce a stream of gas – a jet –
oriented along the rotational axis. Yet the jet is an essential ingredient in the
occurrence of gamma-ray bursts. So how can the driving force behind the process
create this order and release such enormous amounts of energy?

Luciano Rezzolla, leader of the Numerical Relativity Group at the Max Planck Institute
for Gravitational Physics (Albert Einstein Institute/AEI), has been working with fellow
scientists in an international collaboration, and they have now found an explanation
for the short gamma-ray bursts that can last up to three seconds. The team solved
the Einstein equations and the magnetohydrodynamic equations for two neutron
stars coalescing into a black hole and let the simulation run on much longer
timescales after they merged.

What they found is that the resulting rapidly rotating black hole is initially surrounded
by a ring of hot matter with a relatively weak and chaotic magnetic field. The rotating
movement of this unstable system generates an extremely strong, vertically oriented
magnetic field of 10^15 gauss along the rotational axis. For comparison, this
magnetic field is about 10^16 (10,000,000,000,000,000) times stronger than the
Earth's magnetic field. This highlights the importance of the new result: for the first
time it has been shown that a magnetic jet-like structure can form in which the ultra-
hot matter shoots out into space in two collimated outflows, which can then produce
the brief flashes at gamma-ray wavelengths.
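A quick order-of-magnitude check on that comparison (the Earth-field value below is an assumption; the surface field actually varies between roughly 0.25 and 0.65 gauss):

    B_JET = 1e15       # simulated jet field, gauss (figure from the article)
    B_EARTH = 0.5      # typical Earth surface field, gauss (assumed)

    print(f"field ratio ~ {B_JET / B_EARTH:.0e}")   # ~2e15

Taking 0.5 gauss gives a ratio near 2 x 10^15, so the article's 10^16 holds to within an order of magnitude; either way the simulated field is some fifteen to sixteen orders of magnitude stronger than Earth's.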

“This is the first time we have studied the entire process from the merger of the
neutron stars to the formation of the jets,” says Luciano Rezzolla. “This marks a
breakthrough, because we previously did not know how the order that was needed
for the jets to form and the gamma-ray bursts to occur was created out of the
chaos.” Through a considerable computational effort, the scientists run the
simulation for twice as long as normal. The supercomputer Damiana performed its
calculations for a whole six weeks. The complete simulation shows what happens in
just 35 milliseconds.

“We have now lifted an important veil, which was hiding the central engine of short
GRBs and provided a link between the theoretical modelling and the observations, by
showing that a jet-like structure is indeed produced through the self-organization of
the magnetic field in a neutron star merger,” adds Chryssa Kouveliotou from the
American space agency, NASA.

In addition to huge amounts of gamma radiation, a process of this type also produces
gravitational waves in space whose waveform the scientists have simulated. These
tiny ripples in spacetime were predicted by Albert Einstein in his General Theory of
Relativity, but they have never been measured directly. It is hoped that the simulated
gravitational-wave signals will help the scientists discover real gravitational waves in
the data jungle from the detectors. That’s why having a very precise picture of what
they look like increases the likelihood of the researchers actually identifying the
fingerprints of gravitational waves in the detector data.

There are currently five interferometric gravitational wave detectors throughout the
world: the German/British GEO600 project near Hanover, Germany, the three LIGO
detectors in the US states of Louisiana and Washington, and the Franco/Italian Virgo
project in Pisa, Italy. A new space-based detector by the name of LISA (Laser
Interferometer Space Antenna) is also planned by the European Space Agency (ESA)
and NASA, with the launch scheduled to take place in 2020. Scientists from the Max
Planck Institute for Gravitational Physics are playing a leading role in the GEO600 and
LISA projects and are working closely with fellow scientists on the other projects
within the framework of the LIGO-Virgo Collaboration.

More information: Luciano Rezzolla, Bruno Giacomazzo, Luca Baiotti, Jonathan
Granot, Chryssa Kouveliotou, Miguel A. Aloy, "The Missing Link: Merging Neutron
Stars Naturally Produce Jet-like Structures and Can Power Short Gamma-Ray Bursts,"
Astrophysical Journal Letters, 732:L6, 2011. http://iopscience. … 205/732/1/L6

Provided by Max-Planck-Gesellschaft

"The engine that powers short gamma-ray bursts." April 8th, 2011.
http://www.physorg.c...-gamma-ray.html

You might also like