
Machine Translated by Google

COLOMBIAN OLYMPIAD OF

ASTRONOMY, ASTRONAUTICS AND ASTROPHYSICS

INTRODUCTION TO BASIC CONCEPTS TO KEEP IN MIND

FOR OLYMPIAD STUDENTS

NOT FOR COMMERCIAL USE

Property of HyperPhysics
© CR Nave, 2010

Main Sequence in the Hertzsprung-Russell Diagram

Around 90% of known stars are found on the Main Sequence and have luminosities that closely follow the mass-luminosity relationship. The Hertzsprung-Russell diagram is a plot of luminosity against temperature, except that the temperature decreases to the right along the horizontal axis. The luminosity is expressed both in multiples of the Sun's luminosity and in absolute magnitude.

Mass-Luminosity Relationship

For stars on the main sequence, the luminosity increases with the mass according to the approximate power law:

L/L_sun = (M/M_sun)^3.5

A more conservative approach, used in a number of astronomy texts, is to use a power n taken within the range of about 3 to 4:

L/L_sun = (M/M_sun)^n

To determine the value of the power, a log-log plot of luminosities and masses can be made. The slope of such a plot gives the power. Below is a plot of the data included in Harwit and attributed to the Royal Astronomical Society.

The power of 3.95 obtained from a fit to the upper part of these data is suggestive, but the range of masses involved is quite small. The value 3.5 is the most commonly used, and it is the one used in estimating stellar lifetimes.
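As a small numerical sketch of the power law quoted above (the function name and the default exponent of 3.5 are illustrative choices, following the text):

```python
# Sketch: estimate main-sequence luminosity from mass using the
# approximate power law L/Lsun = (M/Msun)**p with p = 3.5.

def luminosity_ratio(mass_ratio, power=3.5):
    """Return L/Lsun for a main-sequence star of mass M/Msun."""
    return mass_ratio ** power

# A 2-solar-mass star is roughly 11 times as luminous as the Sun.
print(round(luminosity_ratio(2.0), 1))
```

Changing `power` between 3 and 4 shows how sensitive the estimate is to the fitted exponent.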

The Sun in the Main Sequence

Evolution of the Sun to the Main Sequence



Stellar Lifetimes

The brightness of a star is a measure of its energy production, and therefore a measure of the rate at which its fuel supply is being used up. If the brightness were constant, the lifetime of a star would simply be proportional to the available fuel mass divided by the brightness. Beyond such statements, one must rely on the collected empirical data, and on models of those data, to estimate the lifetime of a given star.

A useful step in the modeling of stellar lifetimes is the empirical mass-luminosity relationship.

Since the mass of the star is the fuel for the processes of nuclear fusion, one can then presume that the lifetime on the main sequence is proportional to the star's mass divided by its luminosity. The result depends on the fraction of the mass that is actually available as nuclear fuel. Considerable effort has gone into modeling this fraction for the Sun, giving a projected solar lifetime of 10 × 10⁹ years. Using this predicted lifetime, the stellar lifetime can then be expressed as:

T = (M/M_sun)/(L/L_sun) × 10¹⁰ years

It is understood that this is only an approximate model and that the number of digits in the model calculation is not really significant. Keeping in mind the range of powers commonly associated with the mass-luminosity power law, one can appreciate the uncertainty in the model's lifetime.
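As a numerical sketch of this scaling (the function name is illustrative, and the 3.5 exponent is the common value quoted earlier; combining T ∝ M/L with L ∝ M^3.5 gives T ∝ M^(1−3.5)):

```python
# Sketch: main-sequence lifetime scaled from the Sun's modeled 1e10-year
# lifetime, assuming lifetime ∝ (fuel mass)/(luminosity) and L ∝ M**3.5.

def lifetime_years(mass_ratio, power=3.5):
    """Main-sequence lifetime for a star of mass M/Msun, in years."""
    return 1.0e10 * mass_ratio ** (1.0 - power)

# A 10-solar-mass star burns out in only ~3e7 years.
print(f"{lifetime_years(10.0):.1e}")
```

The steep exponent is why massive stars, despite having more fuel, live far shorter lives.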

The Chandrasekhar Limit for White Dwarfs

The calculation of the maximum mass of 1.44 solar masses for a white dwarf was done by Subrahmanyan Chandrasekhar aboard a ship, during the voyage from India to England to begin graduate studies in physics at the University of Cambridge! This initial calculation was performed when he was only 20 years old, and carefully refined when he was 22. Naming the limit after its discoverer seems particularly appropriate in light of the intense personal history that surrounds it. Chandrasekhar was interested in the final states of collapsed stars as determined by electron degeneracy, and had used the work of Arthur S. Eddington and Ralph H. Fowler to begin his calculations. It is reported that relativity had not been included in his first calculations. When he revised his equations to include relativity, he found that above a certain limit there was no solution. This implied that for masses above 1.44 solar masses there could be no equilibrium between electron degeneracy and the crushing force of gravity, and the star would continue to collapse.

The poignant part of the situation of this essentially self-taught young man was that the physicist Eddington strongly resisted his ideas for years! Eddington's public and vocal opposition made Chandrasekhar's life so difficult that at age 29, having written a definitive book on the subject of stellar structure, he decided to bring closure to the issue and turn to other interests. In the process, he produced a work that defined the subject for years to come and is considered a classic.

To Eddington's credit, he later recognized the value and accuracy of Chandrasekhar's work, and wrote about the remarkable white dwarf Sirius B: "The message of the companion of Sirius, when it was decoded, ran: 'I am composed of material 3,000 times denser than anything you have ever come across; a ton of my material would be a little nugget that you could put in a matchbox.' What reply can one make to such a message? The reply was: 'Shut up. Don't talk nonsense.'"

Chandrasekhar himself had no idea what would happen when the limit of 1.44 solar masses was exceeded, other than that the star would continue to collapse. Our current understanding is that the collapse continues until it is stopped by neutron degeneracy, with the formation of a neutron star. But even this is not the final limit, since neutron degeneracy can in turn be overcome for masses greater than about 3 solar masses, and the final collapse would then be to a black hole.

Mathematical Expression

The explicit calculation of the Chandrasekhar limit depends on certain details related to the composition of the atomic nuclei that form the star. Chandrasekhar gives the following expression, based on the equation of state of an ideal Fermi gas:

M_limit = (ω₃⁰ √(3π)/2) (ℏc/G)^(3/2) · 1/(μ_e m_H)²

where:

ℏ is the rationalized Planck constant (ℏ = h/2π = 1.054589 × 10⁻³⁴ joule seconds),

c is the speed of light (299,792,458 m/s),

G is the gravitational constant,

μ_e is the molecular weight per electron, which depends on the chemical composition of the star,

m_H is the mass of the hydrogen atom, and

ω₃⁰ ≈ 2.018 is a constant related to the solution of the Lane-Emden equation.

Since √(ℏc/G) is the Planck mass, the limit can be written as:

M_limit ≈ (ω₃⁰ √(3π)/2) · m_Planck³/(μ_e m_H)²
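As a numerical check of this expression (a sketch with SI constants; the composition value μ_e = 2 is the standard choice for a helium/carbon/oxygen white dwarf):

```python
import math

# Numerical evaluation of the Chandrasekhar mass formula (SI units).
hbar  = 1.054571e-34   # J s, rationalized Planck constant
c     = 2.99792458e8   # m/s, speed of light
G     = 6.674e-11      # m^3 kg^-1 s^-2, gravitational constant
m_H   = 1.6735e-27     # kg, mass of the hydrogen atom
mu_e  = 2.0            # molecular weight per electron (He/C/O composition)
w3    = 2.018236       # Lane-Emden constant for the n = 3 polytrope
M_sun = 1.989e30       # kg

M_ch = (w3 * math.sqrt(3 * math.pi) / 2) * (hbar * c / G) ** 1.5 / (mu_e * m_H) ** 2
print(round(M_ch / M_sun, 2))   # ≈ 1.4 solar masses
```

The result lands close to the 1.44 solar masses quoted in the text.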

In Summary: the Chandrasekhar Limit vs. the Tolman-Oppenheimer-Volkoff Limit

The Chandrasekhar limit is the mass beyond which electron degeneracy is unable to counteract the force of gravity in a stellar remnant, producing a collapse that gives rise to a neutron star, a black hole, or a quark star. The value of the Chandrasekhar limit is proportional to the square of the fraction of electrons per nucleon. (The Chandrasekhar limit is about 1.44 solar masses.)

The Tolman-Oppenheimer-Volkoff limit was calculated by Julius Robert Oppenheimer and George Michael Volkoff in 1939, using earlier work by Richard Chace Tolman.

This limit applies to neutron stars, and its value has been calculated to lie between 1.5 and 3 solar masses. If a neutron star exceeds the TOV limit, it will collapse into a black hole, or its composition will change and it will be supported by some other mechanism (for example, by becoming a quark star).

Black holes formed by the collapse of individual stars have masses ranging from the TOV limit (1.5-3.0 solar masses) up to about 10 solar masses.

Cosmic Rays

Cosmic rays is the term given to the high-energy radiation that reaches the Earth from space. Some cosmic rays have ultra-high energies in the range of 100-1000 TeV. Such extreme energies appear to come from strong point sources, such as Cygnus X-3. The peak of the energy distribution is at approximately 0.3 GeV.

The intensity of cosmic radiation increases with altitude, indicating that it comes from outer space. It changes with latitude, indicating that it consists at least partly of charged particles that are affected by the Earth's magnetic field. The illustration at right shows that the flux of detected cosmic rays peaks at about 15 km of altitude and then drops sharply (note the logarithmic scale of the altitude). This type of variation was discovered by Pfotzer in 1936. It suggests that the detection method used mainly detected secondary particles, rather than the primary particles that reach the Earth from space.

Analysis of the particle populations of cosmic rays provides clues about their origins.

The Particles of Cosmic Rays

Almost 90% of the cosmic rays that strike the Earth's atmosphere are protons (hydrogen nuclei), and 9% are alpha particles. According to Chaisson and McMillan, electrons amount to approximately 1%. There is a small fraction of heavier particles that yields interesting information. Only around 0.25% are light elements (lithium, beryllium and boron); however, this constitutes an enormous enrichment compared with the abundance of these elements in the universe at large, which is only about one part in a thousand million! From this evidence it is inferred that these light elements were produced as fragments in high-speed collisions, when primary cosmic-ray particles, such as protons, strike heavier elements, such as carbon and oxygen, in the very tenuous material of interstellar space. There have been a number of attempts to model the amount of ordinary matter a cosmic ray would have to pass through along its path to produce the observed population of these light elements. One study suggested that it amounts to passing through the mass equivalent of about 4 cm of water.

Of the medium elements (carbon, nitrogen, oxygen and fluorine) there are around 10 times more than would be expected from their abundance in normal matter, and the heaviest elements are likewise enriched by a factor of around 10 compared with normal matter. This suggests that cosmic rays originate in regions of space greatly enriched in heavy elements. The density of cosmic rays in interstellar space is estimated at about 10⁻³ per m³.

An interesting aspect of cosmic rays is that they are almost entirely matter rather than antimatter. According to Carroll & Ostlie, only approximately 0.01% of cosmic rays are antimatter, so this sample of particles from our galaxy provides evidence of the matter-antimatter asymmetry in our galaxy and presumably in the universe as a whole. The small amounts of antiparticles observed can be explained as the result of high-energy particle collisions that produce particle-antiparticle pairs.

High-energy collisions in the upper atmosphere produce cascades of lighter particles. Pions and kaons are produced, which decay to produce muons. Muons make up more than half of the cosmic radiation at sea level, the remainder being mainly electrons, positrons and photons from cascade events (Richtmyer).

The Solar Wind

The Sun gradually loses mass in the form of high-speed protons and electrons escaping from its outer layers. This flow of particles is called the solar wind. It can be thought of as a kind of "evaporation" of particles from the corona. The corona reaches a temperature of around one million kelvins at a distance of 10,000 km above the photosphere. Such a hot gas has a thermal energy of about 130 eV, and the average speed of the hydrogen nuclei in such a gas, assuming a Maxwell speed distribution, is approximately 145 km/s. The escape velocity from the surface of the Sun is approximately 618 km/s, so the average hydrogen atom is far below escape speed. But given the nature of the velocity distribution, there will be a few with speeds above the escape velocity. Chaisson and McMillan characterize the mass loss as about one million tons of solar material per second. They point out that even at this rate, over the Sun's lifetime of 4.6 thousand million years, less than 0.1% of its mass has been lost through this mechanism.
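The 145 km/s figure can be reproduced from the Maxwell mean-speed formula v̄ = √(8kT/πm) (a sketch using the coronal temperature and constants quoted in the text):

```python
import math

# Sketch: mean speed of hydrogen nuclei at coronal temperature versus
# the solar escape velocity quoted in the text.
k   = 1.380649e-23   # J/K, Boltzmann constant
m_H = 1.6735e-27     # kg, mass of a hydrogen atom
T   = 1.0e6          # K, coronal temperature

v_mean = math.sqrt(8 * k * T / (math.pi * m_H))   # Maxwell mean speed
v_esc  = 618e3                                    # m/s, solar escape velocity
print(round(v_mean / 1e3))   # ≈ 145 km/s, well below escape velocity
```

Only the high-speed tail of the distribution exceeds v_esc, which is why the "evaporation" is slow.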

If a planet has a magnetic field, it will interact with the solar wind, deflecting the charged particles and forming an elongated cavity around the planet. This cavity is called the planet's magnetosphere.

In the vicinity of the Earth, the particles of the solar wind travel at about 400 km/s. They are slowed by the interaction with the Earth, producing a bow-shaped shock wave around the Earth.

Within a boundary called the magnetopause, the Earth's magnetic field dominates over the effects of the solar wind. The small fraction of charged particles that leaks through the magnetopause becomes trapped in large donut-shaped rings called the Van Allen belts.

The solar wind was first detected directly by the Mariner 2 spacecraft. It has since been studied in more detail by the SOHO satellite.

Kinetic Temperature

The expression for gas pressure developed from kinetic theory relates pressure and gas volume to the average molecular kinetic energy:

PV = (2/3) N (½ m <v²>)

Comparison with the ideal gas law, PV = NkT, leads to a temperature sometimes called the kinetic temperature.

This leads to the expression

T = (2/(3k)) (½ m <v²>)

The most familiar form expresses the average molecular kinetic energy:

½ m <v²> = (3/2) kT

It is important to point out that the average kinetic energy used here is limited to the translational kinetic energy of the molecules. That is, they are treated as point masses, and no account is taken of internal degrees of freedom such as molecular rotation and vibration. This distinction becomes very important when dealing with topics such as the specific heats of gases. When evaluating specific heat, one must account for all the energy possessed by the molecules, and the temperature as ordinarily measured does not account for molecular rotation and vibration. The kinetic temperature is the variable needed for topics such as heat transfer, because it is the translational kinetic energy that leads to energy transfer, from hot zones (higher kinetic temperature, higher molecular speeds) to cold zones (lower molecular speeds), in direct collisional transfer.
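The relation ⟨KE⟩ = (3/2)kT is easy to evaluate numerically (a sketch; the 300 K example temperature is an illustrative choice):

```python
# Sketch: average translational kinetic energy per molecule at temperature T.
k_B = 1.380649e-23      # J/K, Boltzmann constant
eV  = 1.602176634e-19   # J per electron volt

def avg_translational_ke(T):
    """Average translational kinetic energy per molecule, (3/2) k T."""
    return 1.5 * k_B * T

# At room temperature the average molecular KE is about 0.039 eV.
print(round(avg_translational_ke(300.0) / eV, 3))
```

The same function gives roughly 129 eV at the coronal temperature of 10⁶ K, consistent with the ~130 eV figure quoted in the solar wind section.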

Molecular Speeds

From the expression for the kinetic temperature,

½ m <v²> = (3/2) kT

solving for the root mean square (rms) molecular speed gives:

v_rms = √(3kT/m) = √(3RT/M)

From Maxwell's speed distribution this speed can be calculated, as can the average speed and the most probable speed.

Maxwell's Speed Distribution

The distribution of speeds of the molecules of an ideal gas is given by:

f(v) = 4π (m/(2πkT))^(3/2) v² e^(−mv²/2kT)

From this function one can calculate several characteristic molecular speeds, as well as such things as the fraction of molecules with speeds greater than a certain value at a given temperature. It is also relevant to many other kinds of phenomena.

Note that M is the molar mass and that the expression uses the gas constant R. If the mass m of an individual molecule were used instead, the expression would be the same except that it would contain the Boltzmann constant k instead of the molar gas constant R.

Development of the Boltzmann Distribution

When the number of particles is large, the statistical method becomes the most accurate way to study nature. So we expect that the velocities of the molecules of a gas will follow the most probable distribution, since we are dealing with numbers of particles on the order of Avogadro's number. But this most probable distribution (the Maxwell-Boltzmann distribution) is subject to constraints, namely: the number of particles is constant, and the total energy is constant (conservation of energy). Calculating the probability distribution subject to these constraints in full generality is an arduous mathematical task (see for example Richtmyer, et al.). One way to approach the solution more intuitively is to map it onto a physical problem we already know, the atmosphere under the influence of gravity, as reflected in the barometric formula. The following treatment follows the development of Rohlf.

In this approach, we use the fact that the average kinetic energy of the molecules can be expressed in terms of the kinetic temperature. In addition, we know that in this case conservation of energy implies an exact trade-off between kinetic energy and gravitational potential energy, as long as we treat the atmosphere as an ideal gas.

From the kinetic temperature expression

½ m <v²> = (3/2) kT

we have an experimentally verified expression for the molecular kinetic energy. In the barometric formula

P = P₀ e^(−mgh/kT)

we have a description of an ideal gas system that can be used to help develop a plausible argument for Maxwell's velocity distribution. The steps in the process are as follows:

For one direction in space, this process leads to the expression:

f(v_x) = √(m/(2πkT)) e^(−mv_x²/2kT)

and when all directions of the velocity are included, this becomes Maxwell's speed distribution formula:

f(v) = 4π (m/(2πkT))^(3/2) v² e^(−mv²/2kT)

Why does this formula peak at higher speeds, while the previous one does not?

It should be noted that even though we used a physical situation dependent on gravity to obtain the speed distribution, g does not appear in the final result. That is, the result obtained is general; it does not contain g. The barometric formula was used simply as a model system from which to obtain the velocity distribution subject to the constraints on energy and particle number.
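As a numerical sanity check of the resulting distribution (molecule mass and temperature are illustrative choices, roughly N2 at room temperature), it should integrate to 1 over all speeds:

```python
import math

# Sketch: verify numerically that the Maxwell speed distribution is normalized.
k = 1.380649e-23      # J/K, Boltzmann constant
m = 4.65e-26          # kg, approximate mass of one N2 molecule
T = 300.0             # K

def maxwell(v):
    a = m / (2 * math.pi * k * T)
    return 4 * math.pi * a ** 1.5 * v ** 2 * math.exp(-m * v ** 2 / (2 * k * T))

dv = 0.5
total = sum(maxwell(i * dv) * dv for i in range(1, 6001))   # integrate 0..3000 m/s
print(round(total, 3))   # ≈ 1.0
```

The same Riemann sum, restricted to speeds above some threshold, gives the fraction of molecules faster than that threshold.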

Quasars or QSOs

Quasars have large redshifts, which indicate a great distance from Earth, yet they show variability with periods of weeks or months, which indicates that they are small. Their size is on the order of light-weeks, but they are brighter than our galaxy, which is about 100,000 light years in diameter. The redshift is generally stated in terms of the parameter z, which is defined by

z = Δλ/λ

and the range for the more than 100 observed quasars is z = 0.16 to 3.53. The corresponding v/c is calculated from the relativistic Doppler shift.

The name "quasar" derives from the original description "QUAsi-StellAR Objects". This description was used because they appeared to be faint stars, but they had enormous redshifts, which indicated that they were something else entirely.

More about Quasars

The observed quasar redshifts correspond to a speed range of 0.15c to 0.91c. Using a Hubble constant of 55 km/s per megaparsec gives distances of 2.6 to 16 thousand million light years for these quasars.
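The quoted speed range follows from the relativistic Doppler relation v/c = ((1+z)² − 1)/((1+z)² + 1) (a sketch; the function name is illustrative):

```python
def beta_from_z(z):
    """v/c from redshift z via the relativistic Doppler relation."""
    s = (1.0 + z) ** 2
    return (s - 1.0) / (s + 1.0)

# The quoted range z = 0.16 to 3.53 maps onto roughly 0.15c to 0.91c.
print(round(beta_from_z(0.16), 2), round(beta_from_z(3.53), 2))
```

Note that the non-relativistic approximation v/c ≈ z fails badly at the upper end of the range.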

The brightness of quasars suggests that they have a greater luminosity than our entire galaxy of 200,000 million stars. The turbulent velocities in quasars are upward of a few tens of thousands of m/s, which suggests that explosions are continually occurring; that is, this is the kind of turbulent velocity created by a violent explosion. Some quasars have diameters of only a few light-days, as demonstrated by their variability periods, and nevertheless they are much brighter than our galaxy, which is 100,000 light years in diameter. They are as small as the solar system. The energy source suggested for them is a black hole of several thousand million solar masses.

Models of the First Events

To model Big Bang cosmology at the earliest times, as outlined in The First Three Minutes by Weinberg, certain time regimes have been proposed, together with the types of events that could have happened in those moments.

Before the Planck Time

The Planck Time Era

Separation of the Strong Force

Inflationary Period

Quark-antiquark Period

Quark Confinement

Before the Planck Time

Before the time classified as the Planck time, 10⁻⁴³ seconds, all four fundamental forces are presumed to have been unified into a single force. All matter, energy, space and time are presumed to have exploded outward from an original singularity. Nothing is known about this period.

Not that we know much more about the later periods; it is just that we have no truly coherent models of what could happen under such conditions. Electroweak unification has been supported by the discovery of the W and Z particles, and can be used as a platform for discussing the next step, Grand Unified Theories (GUTs). The final unification has been called "supergrand unification", and increasingly popular is the name "Theory of Everything" (TOE). However, "theories of everything" are separated by great leaps from any experiment that can be carried out on Earth.

The Planck Time Era

In the vicinity of the Planck time, 10⁻⁴³ seconds, current models of the fundamental forces project that the force of gravity begins to differentiate itself from the other three forces. This is the first of the spontaneous symmetry breaks which led to the four types of interactions observed in the present universe.

Looking back, the general idea is that at times earlier than the Planck time we cannot make meaningful observations within the framework of classical gravitation. One approach to formulating the Planck time, presented by Hsu, is the following. One of the characteristics of a black hole is that it has an event horizon from which no information can be obtained; scales smaller than this are hidden from the outside world. For a given enclosed mass, this limit is on the order of

L ≈ GM/c²

where G is the gravitational constant and c is the speed of light. But from the uncertainty principle and the de Broglie wavelength, we can infer that the smallest scale at which we could locate the event horizon is the Compton wavelength

λ = ℏ/(mc)

Equating L and λ, we obtain a characteristic mass called the Planck mass:

m_P = √(ℏc/G) ≈ 2.18 × 10⁻⁸ kg

Substituting this mass back into one of the length expressions gives the Planck length

l_P = √(ℏG/c³) ≈ 1.62 × 10⁻³⁵ m

and the light travel time across this length is called the Planck time:

t_P = √(ℏG/c⁵) ≈ 5.39 × 10⁻⁴⁴ s

It is understood that this is a characteristic time, so only its order of magnitude is meaningful. It is sometimes defined with the Compton wavelength divided by 2π, so one should not place any weight on the precise number of significant digits.
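The Planck scales above can be evaluated directly from the constants (a numerical sketch in SI units):

```python
import math

# Sketch: the Planck mass, length and time from fundamental constants.
hbar = 1.054571e-34    # J s, rationalized Planck constant
c    = 2.99792458e8    # m/s, speed of light
G    = 6.674e-11       # m^3 kg^-1 s^-2, gravitational constant

m_p = math.sqrt(hbar * c / G)      # Planck mass,  ~2.2e-8 kg
l_p = math.sqrt(hbar * G / c**3)   # Planck length, ~1.6e-35 m
t_p = l_p / c                      # Planck time,  ~5.4e-44 s
print(f"{t_p:.2e}")
```

Using h instead of ℏ shifts each value by factors of √(2π), which is exactly why only the order of magnitude matters.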

Separation of the Strong Force

At a time around 10⁻³⁶ seconds, current models project the separation of the strong force, one of the four fundamental forces. Before this time, all the forces except gravity were unified in what is called grand unification. The spontaneous symmetry breaking that takes place at this time distinguishes the strong interaction as an independent force, the one that will hold nuclei together in later times.

In the 1970s, Sheldon Glashow and Howard Georgi proposed the grand unification of the strong, weak and electromagnetic forces at energies above 10¹⁴ GeV. If the ordinary concept of thermal energy is applied under such conditions, a temperature of about 10²⁷ K would be required for the average particle energy to be 10¹⁴ GeV.

Although the strong force had become distinct from gravity and the electroweak force at this time, the energy level was still too high for the strong force to hold protons and neutrons together, so the universe remained a "sizzling sea of quarks".
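The quoted temperature follows from the rough conversion E = kT (a sketch; the conversion is an order-of-magnitude estimate, as in the text):

```python
# Sketch: translate the grand-unification energy scale into a temperature.
k_B = 1.380649e-23      # J/K, Boltzmann constant
GeV = 1.602176634e-10   # J per GeV

T_gut = 1.0e14 * GeV / k_B
print(f"{T_gut:.1e}")   # on the order of 1e27 K
```

A useful rule of thumb from this conversion: 1 GeV corresponds to roughly 10¹³ K.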

Inflationary Period

Triggered by the symmetry break that separated the strong nuclear force, the models indicate an extraordinary inflationary phase in the era from 10⁻³⁶ seconds to 10⁻³² seconds. More expansion is presumed to have occurred in this instant than in the entire period (14 thousand million years) since then.

The inflationary epoch may have expanded the universe by a factor of 10²⁰ or 10³⁰ in this incredibly brief time. The inflationary hypothesis offers a way to deal with the horizon problem and the flatness problem of cosmological models.


Lemonick and Nash in a popular article for Time describe inflation as an "amendment to the
original Big Bang" as follows: "when the universe was less than a billionth of a billionth of a
billionth of a second old, it briefly went through a period of superchanged expansion,
ballooning from the size of a proton to the size of a grapefruit (and thus expanding at many,
many times the speed of light). Then the expansion slowed to a much more stately pace.
Improbable as the theory sounds, it has held up in every observation astronomers have
managed to make."

Quark-antiquark Period

As the inflationary period ends, the universe consists mostly of energy in the form of photons at enormous energy density. The particles which exist cannot bind into larger stable particles because of that enormous energy density; they exist as a collection of quarks and antiquarks along with their exchange particles, a state which has been described as a "sizzling sea of quarks". This time period is estimated to run from 10⁻³² seconds to 10⁻⁵ seconds. During this period the electromagnetic and weak forces undergo their final symmetry break, ending the electroweak unification at about 10⁻¹² seconds.

Quark Confinement

When the expansion of the "primordial fireball" had cooled it to 10¹³ kelvin, at a time modeled to be about 10⁻⁶ seconds, the collision energies had dropped to about 1 GeV and quarks could finally hang onto each other to form individual protons and neutrons (and presumably other baryons). At this time, all the kinds of particles which are a part of the present universe were in existence, even though the temperature was still much too high for the formation of nuclei. At this point we can join the standard "big bang" model as outlined by Steven Weinberg in The First Three Minutes.

Lagrange Points of the Earth-Moon System

A mechanical system with three objects, for example the Earth, the Moon and the Sun, constitutes a three-body problem. The three-body problem is famous in both mathematics and physics circles, and mathematicians in the 1950s finally managed an elegant proof that it is impossible to solve it in closed form. However, approximate solutions can be very useful, particularly when the masses of the three objects differ greatly.

For the Sun-Earth-Moon system, the mass of the Sun is so dominant that it can be treated as a fixed object, and the Earth-Moon system can be treated as a two-body system from the point of view of a reference frame orbiting with that system around the Sun. The 18th-century mathematicians Leonhard Euler and Joseph-Louis Lagrange discovered that there are five special points in this rotating reference frame where a gravitational equilibrium can be maintained. That is, an object placed at any of these five points in the rotating frame would stay there, since the effective forces with respect to that frame cancel. Such an object could then orbit the Sun while maintaining the same relative position with respect to the Earth-Moon system. These five points are called Lagrange points and are numbered L1 through L5.

The Lagrange points L4 and L5 are points of stable equilibrium, so that an object placed there would be in a stable orbit with respect to the Earth and the Moon. For small displacements from L4 or L5, there is an effective restoring force that returns a satellite to the stable point.

The L5 point was the focus of a grand proposal to found a colony, "The High Frontier" by Gerard K. O'Neill, and much effort went into working out the engineering details of creating such a colony in the 1970s. An active "L5 Society" promoted O'Neill's ideas. The L4 and L5 points each form an equilateral triangle with the Sun and the Earth-Moon system.

The Lagrange points L1, L2 and L3 would not appear to be so useful, because they are points of unstable equilibrium. Like balancing a pencil on its point, keeping a satellite at these points is theoretically possible, but any perturbing influence will drive it out of equilibrium. However, in practice these Lagrange points have proven to be very useful indeed, since a spacecraft can be made to orbit about one of them with a very small expenditure of energy. They have provided useful places to "park" a spacecraft for observation. Such orbits around L1 and L2 are often called halo orbits.

L3 lies on the opposite side of the Sun from the Earth, so it is not easy to use. It could, however, be a good place to hide something that would never be seen (fertile ground for science fiction!).

The Lagrange L2 point was used for the Wilkinson Microwave Anisotropy Probe (WMAP). Since L2 is positioned outside the Earth's orbit, WMAP can face away from both the Sun and the Earth at all times, an important feature for a deep-space probe that employs ultra-sensitive detectors which must never risk being blinded by looking at the Sun or the Earth.
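The distance from the Earth to the Sun-Earth L1/L2 points can be estimated with the standard approximation r ≈ R·(M2/(3M1))^(1/3) (a sketch; this formula and the numerical values are not from the original text):

```python
# Sketch: approximate Earth-to-L2 distance for the Sun-Earth system,
# using the Hill-sphere-type estimate r = R * (M2 / (3 M1))**(1/3).
M_sun   = 1.989e30     # kg
M_earth = 5.972e24     # kg
R       = 1.496e11     # m, mean Earth-Sun distance

r = R * (M_earth / (3.0 * M_sun)) ** (1.0 / 3.0)
print(round(r / 1e9, 1))   # ≈ 1.5 million km
```

This reproduces the roughly 1.5 million km at which WMAP was stationed beyond the Earth.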

Equipotential Surfaces of Three Bodies

A mechanical system with three objects, for example the Earth, the Moon and the Sun, constitutes a three-body problem. The three-body problem is famous in both mathematics and physics circles, and mathematicians in the 1950s finally managed an elegant proof that it is impossible to solve it in closed form. However, approximate solutions can be very useful, particularly when the masses of the three objects differ greatly.

Lagrange's contribution was to map the contours of equal gravitational potential energy for systems in which the third mass is very small compared with the other two. Below is a sketch of such equipotential contours for a system like the Earth-Moon system. The equipotential contour that traces a figure 8 around both masses is important in evaluating scenarios in which mass from one body flows toward the other. These equipotential loops form the basis for the concept of the Roche lobe.

Contours of Equal Gravitational Potential

One of Lagrange's observations about the potential contours was that there are five points at which the third body can be in equilibrium, the points now called Lagrange points.

The Lagrange Points for a System like the Earth-Moon

The Lagrange points L1, L2, and L3 are points of unstable equilibrium. Like balancing a pencil on its point, keeping a satellite at these points is theoretically possible, but any perturbing influence will drive it out of equilibrium. It should be noted that the Lagrange points L4 and L5 are, for small masses, points of stable equilibrium in the three-body system, and this three-body geometry can be maintained while M2 orbits around M1.

Journey to the Center of the Earth

Suppose you could drill a hole through the center of the Earth and then stepped into it and fell. How long would it take you to emerge on the other side of the Earth?

Your initial acceleration at the surface would be the acceleration of gravity:

g = 9.8 m/s²

but the acceleration would progressively decrease as you approached the center. Your weight would be zero as you passed through the center of the Earth. For our hypothetical trip, we will assume that the Earth has uniform density, and we will neglect air friction and the high temperatures along the path.

For a spherically symmetric mass, the gravitational force on an object is determined only by the mass inside the radius where the object is located, and that mass acts as if it were a point mass at the center. When this is analyzed in detail, one finds that the gravity at any point at radius r smaller than R_Earth is linearly proportional to the distance from the center of the Earth.

On a Mass Outside the Shell


Gravitational force of a spherical shell
On a Mass Inside the Shell

Taking positive r as the direction outward from the center of the Earth:

This takes the same form as Hooke's Law for a mass on a spring. So the tunnel traveler will
oscillate back and forth through the center of the Earth, like a mass bobbing up and down
on a spring. The angular frequency and the period of the oscillation are:

In this case the period of oscillation is:

The traveler accelerates toward the center of the Earth and is momentarily weightless when
passing through the geometric center at about 7900 m/s (17,700 mi/hr). The trip to the
opposite side of the Earth takes about 42 minutes. But unless someone is waiting there to
catch him, he will fall back again on a return trip and will continue to oscillate with a period
of 84.5 minutes.

As an additional feature of this fantasy trip, suppose we put a satellite in a circular orbit just
above the surface of the Earth. We ignore air friction and the terrible sonic boom that would
accompany such an orbit. Suppose it passes overhead just at the moment the person
emerges from the hole. The period of the orbit is such that every time the person pops up
out of the hole, the satellite is found overhead, on either side of the Earth.

The orbit period is calculated from:

That is the same as the period of the oscillating traveler.
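As a check on these figures, a short calculation (assuming standard round values g = 9.8 m/s^2 and R = 6378 km) reproduces the 84.5-minute period, the roughly 7900 m/s speed at the center, and the equality of the two periods:

```python
# Sketch comparing the two periods in the text: a traveler oscillating
# through a uniform-density Earth and a satellite in a grazing circular
# orbit. g and R are assumed standard values.
import math

g = 9.8           # surface gravity, m/s^2 (assumed)
R = 6.378e6       # Earth radius, m (assumed)

# Tunnel: F = -(mg/R) r, Hooke's-law form, so omega = sqrt(g/R)
omega = math.sqrt(g / R)
T_tunnel = 2 * math.pi / omega

# Grazing orbit: g = v^2/R  ->  v = sqrt(gR), so T = 2 pi sqrt(R/g)
T_orbit = 2 * math.pi * math.sqrt(R / g)

v_center = omega * R   # maximum speed, at the center of the Earth

print(f"Tunnel period  : {T_tunnel/60:.1f} min")
print(f"Orbit period   : {T_orbit/60:.1f} min")
print(f"Speed at center: {v_center:.0f} m/s")
```

The two periods are algebraically identical, which is the point of the comparison in the text.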



The Stefan-Boltzmann Law

The energy radiated by a black body radiator per second, per unit surface area, is
proportional to the fourth power of the absolute temperature and is given by

For hot objects other than ideal radiators, the law is expressed in the form:

where e is the emissivity of the object (e = 1 for the ideal radiator). If the hot object is
radiating energy to its cooler surroundings at a temperature Tc, the net radiation loss rate
takes the form:

The Stefan-Boltzmann formula is also related to the energy density of the radiation in a
given volume of space.
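As a hedged numerical illustration of the net-loss form above (the emissivity, area, and temperatures below are illustrative assumptions, not values from the text):

```python
# Illustrative example of the net radiation loss rate
# P = e * sigma * A * (T^4 - Tc^4). The body temperature, area, and
# emissivity are assumed values for the sake of the example.
sigma = 5.6703e-8   # Stefan-Boltzmann constant, W m^-2 K^-4

def net_radiated_power(e, area, T_hot, T_cold):
    """Net power radiated by a hot object to cooler surroundings, in watts."""
    return e * sigma * area * (T_hot**4 - T_cold**4)

# Example: skin at 33 C (306 K) in a 20 C (293 K) room, ~1.5 m^2, e ~ 0.97
P = net_radiated_power(0.97, 1.5, 306.0, 293.0)
print(f"Net radiated power: {P:.0f} W")
```

For these assumed values the net loss is on the order of a hundred watts, which is why radiation matters for human heat balance.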

Heat Radiation

Radiation is the transfer of heat by the emission of electromagnetic waves, which carry
energy away from the emitting object. For ordinary temperatures (less than "red hot"), the
radiation is in the infrared region of the electromagnetic spectrum. The formula governing
the radiation from hot objects is called the Stefan-Boltzmann law:

Black Body Radiation

"Black body radiation" or "cavity radiation" refers to an object or system that absorbs all the
radiation incident upon it, and re-radiates energy that is characteristic of this radiating system
only, not dependent upon the type of radiation that falls on it. The radiated energy can be
considered to be produced by standing waves, the resonant modes of the radiating cavity.

The amount of radiation emitted in a given frequency range should be proportional to the
number of modes in that range. The best of classical physics suggested that all modes had
an equal chance of being produced, and that the number of modes would increase in
proportion to the square of the frequency.

But the predicted continuous increase in radiated energy with frequency (dubbed the
"ultraviolet catastrophe") did not happen. Nature knew better.

Cavity Modes

The modes of an electromagnetic wave in a cavity must satisfy the condition of zero electric
field at the walls. If a mode has a shorter wavelength, there are more ways you can fit it into
the cavity to meet that condition. Careful analysis by Rayleigh and Jeans showed that the
number of modes is proportional to the square of the frequency.

Planck's Radiation Formula

From the assumption that the electromagnetic modes in a cavity are quantized in energy,
with the quantum energy equal to Planck's constant times the frequency, Planck derived a
radiation formula. The average energy per "mode" or "quantum" is the energy of the
quantum multiplied by the probability that it will be occupied (the Bose-Einstein distribution
function):

This average energy is then multiplied by the density of these states, expressed in terms
of either frequency or wavelength.

From the energy density, one obtains the Planck radiation formula.



Planck's radiation formula is an example of the distribution of energy according to
Bose-Einstein statistics. The above expressions are obtained by multiplying the density of
states, in terms of frequency or wavelength, by the energy of the photon, and by the
Bose-Einstein distribution function with normalization constant A = 1.

To find the radiated power per unit area from a surface at this temperature, multiply the
energy density by c/4. The density above is for thermal equilibrium, so setting
inward = outward gives a factor of 1/2 for the radiated power outward. Then you must
average over all angles, which gives another factor of 1/2 for the angular dependence, which
is the square of the cosine.
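As a numerical sanity check of this chain of reasoning, one can integrate the Planck energy density over all frequencies and compare the result with a·T^4, where a = 4σ/c. The sketch below uses assumed standard constants and an arbitrary temperature; it is not part of the original text:

```python
# Sketch: numerically integrating the Planck energy density
# u(nu) = (8 pi h nu^3 / c^3) / (exp(h nu / k T) - 1)
# over all frequencies should recover the total density a*T^4,
# with a = 4 sigma / c (the Stefan-Boltzmann result).
import math

h = 6.626e-34; k = 1.381e-23; c = 2.998e8; sigma = 5.670e-8
T = 300.0   # an assumed temperature, K

def u(nu):
    # expm1 gives exp(x) - 1 accurately for small x
    return (8 * math.pi * h * nu**3 / c**3) / math.expm1(h * nu / (k * T))

# Trapezoidal rule out to h nu ~ 40 kT, where the integrand is negligible.
nu_max = 40 * k * T / h
N = 20000
d = nu_max / N
total = sum(u(i * d) for i in range(1, N)) * d + 0.5 * u(nu_max) * d

a = 4 * sigma / c
print(f"numeric integral: {total:.3e} J/m^3")
print(f"a T^4           : {a * T**4:.3e} J/m^3")
```

The numerical integral and a·T^4 agree to well under a percent, tying the Planck formula back to the Stefan-Boltzmann law.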

Rayleigh-Jeans vs Planck

A comparison of the classical Rayleigh-Jeans law and the quantum Planck radiation formula.
Experiment confirms the Planck formula.

Development of the Rayleigh-Jeans Law

The Rayleigh-Jeans law was an important step in our understanding of the equilibrium radiation from a hot
object, even though it did not turn out to be an accurate description of nature. The careful work involved in
developing the Rayleigh-Jeans law laid the foundation for the quantum understanding expressed in the Planck
radiation formula. In summary form, these are the steps that led to the Rayleigh-Jeans law.

The standing-wave electromagnetic radiation in equilibrium in a cubical cavity of dimension
L must satisfy the condition:

The number of modes in the cavity is:

The number of modes per unit wavelength is:

The energy per unit volume per unit wave length is:

The average radiated energy per unit wavelength is:

which, expressed in terms of frequency, is:



Black Body Intensity as a Function of Frequency

The Rayleigh-Jeans curve agrees with the Planck radiation formula at long wavelengths,
that is, at low frequencies.
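This agreement can be checked directly. The sketch below (assumed constants; T = 300 K chosen arbitrarily) compares the Rayleigh-Jeans density 8πν²kT/c³ with the Planck formula at low and at high frequency:

```python
# Sketch: the Rayleigh-Jeans density agrees with the Planck formula
# when h nu << k T, but diverges from it at high frequency
# (the "ultraviolet catastrophe"). Constants and T are assumed values.
import math

h = 6.626e-34; k = 1.381e-23; c = 2.998e8
T = 300.0

def planck(nu):
    return (8 * math.pi * h * nu**3 / c**3) / math.expm1(h * nu / (k * T))

def rayleigh_jeans(nu):
    return 8 * math.pi * nu**2 * k * T / c**3

nu_low  = 0.01 * k * T / h    # h nu = 0.01 kT: classical regime
nu_high = 20   * k * T / h    # h nu = 20 kT: quantum regime

print(rayleigh_jeans(nu_low) / planck(nu_low))    # ratio close to 1
print(rayleigh_jeans(nu_high) / planck(nu_high))  # ratio is enormous
```

The ratio is essentially 1 at low frequency and grows without bound at high frequency, which is exactly the divergence that the Planck quantization cures.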

The 3K Cosmic Background Radiation

This black body radiation is seen as a remnant from the point of transparency, at which the
expanding universe cooled below 3000K so that the radiation could escape.

The 3K Background Radiation

A uniform background radiation, in the microwave region of the spectrum, is observed in all
directions of space. It shows the wavelength dependence of a "black body" radiator at a
temperature of about 3 Kelvin. It is considered to be the remnant of the radiation emitted at
the time the expanding universe became transparent, at a temperature of about 3000K. The
discovery of the 3K microwave background radiation was one of the crucial steps leading to
the standard "Big Bang" model of cosmology and its calculation of the relative populations
of particles and photons. Recent measurements with the Far Infrared Absolute
Spectrophotometer (FIRAS) aboard the COBE satellite have given a temperature of
2.725 +/- 0.002 K. Earlier experiments had shown some anisotropy of the background
radiation, due to the motion of the solar system, but COBE also recorded data showing
fluctuations in the cosmic background. In Big Bang cosmology some fluctuations in the
cosmic background are necessary, to provide enough non-uniformity to give rise to the
formation of galaxies. The apparent uniformity of the background radiation was the basis of
the "galaxy formation problem" in Big Bang cosmology. The more recent WMAP mission has
provided a much higher resolution picture of the anisotropies in the cosmic background
radiation.

The round figure of 10^9 photons per nuclear particle is the "most important quantitative
conclusion to emerge from measurements of the microwave background radiation..."
(Weinberg p66-70). It led to the conclusion that galaxies and stars could not have begun to
form until the temperature dropped below 3000K. At that point atoms could form, removing
the opacity of the expanding universe, so the light could escape and relieve the radiation
pressure. Star and galaxy formation could not occur until gravitational attraction could
overcome the outward radiation pressure, and with 10^9 photons/baryon an enormous
"Jeans mass" would have been required. With the formation of atoms and a transparent
universe, the Jeans mass dropped to about 10^-6 of the mass of a galaxy, permitting
gravitational clumping.

Role of 3K in Cosmology

The 3K cosmic background radiation provides fundamental evidence for cosmological
models. The 3K background implies approximately 5.5 x 10^5 photons/liter, based on the
radiation energy density and the average energy per photon at this temperature. The
estimated density of baryons ranges from twice the critical density, 6 x 10^-3/liter, down to
the estimated lower bound from visible galaxies, 3 x 10^-5/liter. This gives a range of
1 x 10^8 to 2 x 10^10 photons/baryon. This estimate of the number of photons per baryon
was crucial in the big bang calculations. In modeling nucleosynthesis in the big bang,
including the hydrogen/helium ratio, this relative population of baryons and photons is in
agreement with the observations.

When the abundances of D, 3He, and 7Li that form in the big bang model are tracked and
examined, the baryon-to-photon ratio is even more severely constrained. The Particle Data
Group reports a baryon/photon ratio η between

2.6 x 10^-10 < η < 6.3 x 10^-10 baryons/photon

Since conservation of baryon number is a strong conservation principle, it is inferred that the
ratio of photons to baryons has been constant throughout the expansion process. No known
process in nature changes the number of baryons.
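The photons-per-liter figure quoted above can be reproduced from the black body photon number density n = (2ζ(3)/π²)(kT/ħc)³. The following is a sketch with assumed standard constants, not part of the original text:

```python
# Sketch of the photon-density figure in the text: the black body photon
# number density n = (2 zeta(3)/pi^2) (kT / hbar c)^3, evaluated at 3 K.
import math

k     = 1.381e-23    # Boltzmann constant, J/K
hbar  = 1.055e-34    # reduced Planck constant, J s
c     = 2.998e8      # speed of light, m/s
zeta3 = 1.2020569    # Riemann zeta(3)

T = 3.0
n_per_m3 = (2 * zeta3 / math.pi**2) * (k * T / (hbar * c))**3
n_per_liter = n_per_m3 / 1000.0
print(f"{n_per_liter:.2e} photons/liter")
```

At T = 3 K this comes out near 5.5 x 10^5 photons/liter, matching the figure used in the argument above.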

Anisotropy in the 3K Background

There is an anisotropy of about 0.1% in the cosmic microwave background radiation, which
is attributed to the Doppler shift caused by the motion of the solar system through the
radiation. The Particle Data Group reports that the asymmetry is mainly dipolar in nature,
with a magnitude of 1.23 x 10^-3. This value is used to calculate a speed of approximately
600 km/s for the Earth, compared with an observer who remains at rest with respect to the
general expansion.

Fluctuations in the 3K Background

The COBE satellite, using a differential microwave radiometer, discovered fluctuations in the
cosmic microwave background radiation. The size of the fluctuations is about
ΔT/T = 6 x 10^-6. This is just above the level at which the big bang cosmological
calculations would have run into trouble. The scale of the fluctuations is larger than the
horizon at the time the background radiation was emitted, which indicates that the
fluctuations are primordial, dating from a time before the separation of radiation and matter,
the point of transparency. The "horizon" is the distance within which causal relationships are
possible, that is, the distance within the light transit time between events.

The COBE Satellite

NASA's Cosmic Background Explorer satellite (COBE) was launched to explore the cosmic
microwave background radiation. The data points are shown superimposed on a theoretical
black body curve.

The fit to the Planck radiation formula is so precise that it provides powerful confirmation of
the idea that this radiation is a remnant of the big bang expansion.

The COBE data have been precise enough that fluctuations in this radiation have been
discovered, which are important for the big bang cosmological calculations. COBE carried
three main instruments: a Differential Microwave Radiometer, a Far Infrared Absolute
Spectrophotometer (cooled to 1.6K by liquid helium), and the Diffuse Infrared Background
Experiment, also at 1.6K. The infrared instruments measure the infrared background
spectrum, which is presumed to be uniform, but any unexpected variation could indicate the
presence of energy sources that might have driven turbulence to trigger the formation of
galaxies. The sensitivity of the infrared instruments is 100 times greater than what can be
achieved from the surface of the Earth. The Diffuse Infrared Background Experiment will
look for distant primordial galaxies and other celestial objects that formed after the Big Bang.

The Cosmological Constant

Einstein proposed a modification of the Friedmann equation that models the expanding
universe. He added a term called the cosmological constant, which puts the Friedmann
equation in the form

The original motivation for the cosmological constant was to make possible a static universe
that was isotropic and homogeneous. When the expansion of the universe was established
beyond doubt, Einstein reportedly viewed the cosmological constant as the worst mistake he
had made. But the idea of a cosmological constant is still a subject of active debate. Rohlf
suggests that the physical interpretation of the cosmological constant is that vacuum
fluctuations affect spacetime. From measurements of the volume density of distant galaxies,
a nonzero value for the cosmological constant could in principle be inferred, but such
measurements have given a negative result, setting an upper limit of:

This implies that on the scale of the entire universe, the effects of vacuum fluctuations
cancel out. This assessment comes at a time when theoretical calculations suggest quark
contributions to the vacuum fluctuations on the order of 10^-6 m^-2.

Friedmann's Equation

Alexander Friedmann of Russia is credited with developing a dynamic equation for the
expanding universe in the 1920s. This was a time when Einstein, Willem de Sitter of the
Netherlands, and Georges Lemaitre of Belgium were also working on equations to model
the universe. Friedmann developed it as a relativistic equation in the framework of general
relativity, but the description here is limited to a simplified, non-relativistic version based on
Newton's laws.

Convenient forms of the Friedmann equation with which to examine the temperature and
the expansion time in the Big Bang model of the universe are

In addition to the density ρ and the gravitational constant G, the equation contains the
Hubble parameter H, a scale parameter R, and a factor k called the curvature parameter.
The curvature parameter indicates whether the universe is open or closed. The equations
above do not specify the nature of the density ρ, and they do not include any interactions of
particles other than gravitational attraction. Such particle interactions, e.g., collisions, could
be specified in terms of a pressure, so the above model is sometimes referred to as a
"pressureless" universe. More detailed versions of the Friedmann equation include such
effects.

Einstein considered adding a new term, the famous (infamous) cosmological constant that
would produce a static universe.

Hubble's Law

Hubble's law is a statement of a direct correlation between the distance to a galaxy and its
recessional velocity, as determined by the red shift. It can be stated as

The value of the Hubble parameter has changed considerably over the years, testimony to
the difficulty of astronomical distance measurement. But with the high-precision experiments
after 1990, the range of reported values has narrowed greatly, to measured values in the
range:

A problem often cited against Hubble's law is Stephan's Quintet. Four of these five galaxies
have similar red shifts, but the fifth is very different, and they appear to be interacting.

The Particle Data Group documents cite a "best modern value" of the Hubble parameter of
72 km/s per megaparsec (+/- 10%). This value comes from the use of type Ia supernovae
(which give relative distances to about 5%), along with data from Cepheid variables gathered
by the Hubble Space Telescope.

The WMAP mission data leads to a Hubble constant of 71 km/s per megaparsec (+/- 5%).

Hubble Parameter

The proportionality between recession velocity and distance in the Hubble law is called the
Hubble constant, or more appropriately the Hubble parameter, since it does depend upon
time. In recent years the value of the Hubble parameter has been considerably refined; the
current value from the WMAP mission is 71 km/s per megaparsec.

The recession velocities of distant galaxies are known from the red shift, but the distances
are much more uncertain. Distance measurement to nearby galaxies uses Cepheid variables
as the main standard candle, but to determine the Hubble constant one must examine more
distant galaxies, since the direct Cepheid distances are all within the range of the
gravitational influence of the local group. The use of the Hubble Space Telescope has made
possible the detection of Cepheid variables in the Virgo cluster, which has contributed to
refinement of the distance scale.


Another approach to the Hubble parameter emphasizes the fact that space itself is
expanding, and at any given time can be described by a dimensionless scale factor R(t). The
Hubble parameter is the ratio of the rate of change of the scale factor to its current value R:

The scale factor R for an observed object in the expanding universe, relative to R0 = 1 at the
present time, can be deduced from the expression for the red shift parameter z. The Hubble
parameter has the dimensions of inverse time, so a Hubble time tH can be obtained by
inverting the current value of the Hubble parameter.

Caution is required in interpreting this "Hubble time", since the relationship between the
expansion time and the Hubble time is different for the radiation-dominated and the
mass-dominated eras. Projections of the expansion time can be made using expansion
models.

Hubble Parameter and Red Shift

The Hubble law states that the recession velocity of a given galaxy, as measured by the red
Doppler shift, is proportional to its distance. The red shift of the spectral lines is commonly
expressed in terms of the z parameter, which is the fractional shift in the spectral
wavelength. The Hubble distance is given by

and can be calculated from the wavelength shift of any spectral line.
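For small red shifts the velocity is approximately cz, so the Hubble distance reduces to d = cz/H0. A minimal sketch, assuming the value H0 = 72 km/s/Mpc cited earlier:

```python
# Sketch: Hubble distance from a small red shift z, using the
# approximation v = c z, so d = c z / H0. H0 is an assumed value.
c  = 2.998e5      # speed of light, km/s
H0 = 72.0         # Hubble parameter, km/s per Mpc (assumed)

def hubble_distance_mpc(z):
    """Distance in megaparsecs for a small red shift z (v = cz regime)."""
    return c * z / H0

d = hubble_distance_mpc(0.01)   # z = 0.01 corresponds to v ~ 3000 km/s
print(f"{d:.0f} Mpc")
```

A red shift of z = 0.01 thus corresponds to a distance of roughly 40 Mpc; the simple linear relation only holds for small z.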

Division of Energy between Photons and Massive Particles

One of the ideas associated with modeling the Big Bang is that the further back in time one
projects, the more the universe was dominated by photons. The present universe is thought
to be essentially all matter, but the energy of the early universe was mainly photon energy,
with massive particles playing a very minor role.

The amount of radiation energy in the universe today can be estimated using the
Stefan-Boltzmann law, considering the universe to be filled with black body radiation at a
temperature of 2.7 K. The energy density of this equilibrium radiation is given by

There is also a background energy in neutrinos, which are expected to have a temperature
of approximately 1.9 K; according to the standard model, there are 7/4 as many of them as
photons. Treating them as massless particles gives an energy density of approximately
0.11 MeV/m^3, so the total energy density of photons and neutrinos is approximately:

A current estimate of the amount of mass in the present universe is

so current estimates put the amount of energy in massive particles at more than a thousand
times the energy in radiation.
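The photon part of this comparison is easy to reproduce: the energy density a·T^4 at 2.7 K, converted to MeV/m^3, comes out near 0.25 MeV/m^3, far below the mass-energy estimate. A sketch with assumed standard constants:

```python
# Sketch checking the order of magnitude of the photon energy density:
# a * T^4 at T = 2.7 K, converted from J/m^3 to MeV/m^3.
sigma = 5.670e-8        # Stefan-Boltzmann constant, W m^-2 K^-4
c = 2.998e8             # speed of light, m/s
a = 4 * sigma / c       # radiation constant, J m^-3 K^-4

T = 2.7
u_photons_J   = a * T**4                  # J/m^3
u_photons_MeV = u_photons_J / 1.602e-13   # MeV/m^3 (1 MeV = 1.602e-13 J)
print(f"{u_photons_MeV:.2f} MeV/m^3")
```

This fraction-of-an-MeV-per-cubic-meter figure is what makes the present universe matter-dominated by a factor of more than a thousand.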

Temperature and Expansion Time in the Standard Big Bang Model

In the big bang model of the expanding universe, the expansion time can be expressed in
terms of the Hubble parameter,

and the Hubble parameter can be related to an expansion model, using the Friedmann
equation.

In the first stages of the expansion of the universe, its energy density was dominated by
radiation, with matter present only as an insignificant contaminant. Under these conditions,
the density in the Friedmann equation can be taken to be that associated with the radiation
field, and related to the ratio of the temperature at a given time to the present temperature
of the cosmic background radiation. This gives:

The fourth-power dependence on temperature comes from the Stefan-Boltzmann law.
Substitution into the Friedmann equation yields an expression for the expansion time as a
function of temperature in a radiation-dominated early universe.

The energy densities of radiation and matter are approximately equal at the temperature of
the point of transparency, about 3000 K. At much lower temperatures, the energy is
dominated by matter. The energy density of matter as a function of temperature is given by

The resulting expression for the expansion time from the Friedmann equation is then

Temperature, Expansion Time and Energy Density in the Expanding Universe

Early in a radiation-dominated primordial universe, where T >> 3000K, the expansion time
can be related to the temperature by the formula:

It is understood that this calculation includes only photons and neutrinos, and does not
apply to times before the annihilation of most of the electrons and positrons. Another factor
of 7/4 must be introduced to include the contribution of the energy of electrons and
positrons.

For temperatures T << 3000K, matter predominates over radiation. In the matter-dominated
era, the expansion time and temperature are related by:

Taking the energy density of matter as 0.5 GeV/m^3, the expansion time calculated from this
is approximately 4.5 x 10^9 years. If the energy density of matter is taken to be the critical
density of approximately 5.5 GeV/m^3 associated with a Hubble parameter of 72 km/s/Mpc,
then the expansion time at the current temperature of 2.7K is 13.6 x 10^9 years. The
calculation of the critical density comes from an expression in Weinberg.
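The 13.6 x 10^9 year figure is essentially the Hubble time, the inverse of the Hubble parameter. A minimal sketch with H0 = 72 km/s/Mpc:

```python
# Sketch of the 13.6 x 10^9 year figure: the Hubble time is t_H = 1/H0,
# with H0 = 72 km/s/Mpc. Conversion factors are assumed round values.
Mpc_km = 3.086e19          # kilometers in a megaparsec
year_s = 3.156e7           # seconds in a year

H0 = 72.0 / Mpc_km         # Hubble parameter in s^-1
t_hubble_yr = 1.0 / H0 / year_s
print(f"{t_hubble_yr:.2e} years")
```

Inverting 72 km/s/Mpc gives about 1.36 x 10^10 years, matching the expansion time quoted above for the critical-density case.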

IMPORTANT:

Hyperphysics (© CR Nave, 2010) is a continuously developing base of physics instructional
material. It is not freeware or shareware, and must not be copied or duplicated without
authorization. The author is open to proposals for its use for non-profit instructional
purposes. The general intention has been to develop a wide-ranging exploration
environment that can be of use to students and teachers.

Selected by the SciLinks program, a service of the National Science Teachers Association.

Copyright 2001.

A number of educational institutions are using the material in computer-equipped
classrooms, and for that purpose DVD packages can be prepared for their institution.
Another possibility is a license to mirror Hyperphysics internally on an intranet site, which
allows them to modify and add to Hyperphysics as a base on which to build. Such licenses
are subject to the restriction that access to the internal mirror site from the World Wide Web
must be password protected. These licenses are being used by a number of educational
institutions and training centers to facilitate the development of specific content without
having to "reinvent the wheel" by redoing all the introductory material.
