
Telescopes are instruments that use light to make faraway objects, like the Moon or the stars, visible to observers on Earth. They have existed for centuries and have contributed significantly to astronomy.

Telescopes as Mirror Technology:

There are two types of telescopes. The refracting telescope uses a lens, called the objective lens, and relies on refraction, the bending of light as it travels from one medium to another. The reflecting telescope instead contains a primary mirror and a plane secondary mirror, and relies on reflection, in which light bounces off a smooth surface. Both types use curved surfaces to gather light from the night sky, which is refracted or reflected according to its angle of incidence; the shape of the secondary mirror or lens then concentrates that light into the image we see when we look into the telescope.

Why are Telescopes considered Mirror Technology?

Although some telescopes use lenses, lenses create several problems. The most common issue with refracting telescopes is the phenomenon of chromatic aberration. Refracting telescopes also compare unfavourably with reflecting telescopes because it is easier to make mirrors smooth and free of the impurities that interfere with light gathering, which matters for scientific telescopes and observatories that require mirrors as large as a detached house. At that scale, using a mirror rather than a lens is more straightforward and practical, as in the James Webb Space Telescope, whose primary mirror consists of 18 segments.

Mirrors are also much easier to work with than lenses. Large lenses become heavy and bulky and are harder to hold in place, since they can only be supported around their edges, which makes them vulnerable to sagging and cracking. The glass must also be made much thicker at larger sizes, which absorbs more of the incoming light. Mirrors, by contrast, can be supported from behind, so they gather more light as they grow without causing as many issues.

Chromatic aberration is a problem for any refracting telescope because it contains a lens. Since different colours of light are refracted by different amounts, the wavelengths have different focal lengths. With each colour focused at a different location, the image becomes blurred and magnification suffers; the final image shows coloured fringes.

How do Reflecting Telescopes work?

A reflecting telescope contains two mirrors: a primary mirror near the closed end of the tube and a secondary mirror near the eyepiece. These mirrors can be concave, convex, plane, or a combination.

The image seen through the telescope is formed by a concentration of light. Light first travels down the tube to the primary mirror, which is concave, and is reflected back up the tube toward a focus. Before it converges, the light strikes the secondary mirror, a small plane mirror mounted near the top of the tube, which deflects it by 90 degrees out the side of the tube and into the eyepiece, where it is concentrated enough to be viewable.
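The geometry of the concave primary can be sketched with the mirror equation. The radius of curvature below is an illustrative figure, not a real instrument's:

```python
# Concave primary mirror sketch: the focal length is half the radius of
# curvature, and the mirror equation 1/f = 1/d_o + 1/d_i locates the
# image. For a star, d_o is effectively infinite, so the light converges
# at the focal point, which is where the flat secondary intercepts it.

def image_distance(f, d_obj):
    """Mirror equation solved for the image distance d_i."""
    return 1.0 / (1.0 / f - 1.0 / d_obj)

R = 2.0          # radius of curvature of the primary, in metres
f = R / 2.0      # focal length = 1.0 m

print(image_distance(f, 1e12))  # distant star: image essentially at the focus
print(image_distance(f, 5.0))   # nearby object: image forms beyond the focus
```

The secondary mirror and eyepiece are simply placed where this converging light can be intercepted and viewed.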

There are many variations of this design, but the Newtonian telescope is the most common reflector and is often regarded as a classic design that has been modified and adapted to suit various needs.

Discovery of the Technology:

Telescopes have existed for a few centuries, with the earliest dating to the early 1600s. In those days they were used not for astronomy but for sea voyages, military observation and other ground-based distance viewing. In 1608 in The Hague, Netherlands, the lens maker Hans Lippershey first described a device that "aided in seeing faraway things as though nearby" using convex and concave lenses in a tube. The original device magnified only 3 or 4 times.
Galileo Galilei heard about this innovative design and began work on his own telescope in the summer of 1609, creating instruments that would later play essential roles in astronomy. His first design had three times viewing power, and he gradually built telescopes that could magnify up to 9 times. His design, sometimes called the Galilean optic tube, used a combination of convex and concave lenses, and he was the first to turn such a device toward astronomy. With it he gathered evidence that the Earth and the other planets circle the Sun and resolved the Milky Way into individual stars. He published many of these discoveries in his book "The Starry Messenger."

Many designers would go on to create telescopes that would all contribute to science, astronomy
and technology.

The Newtonian design is essential to mention, as it is still widely used today. In 1668 Isaac Newton built a reflecting telescope with a concave primary mirror and a plane secondary mirror. Although he is not credited with first proposing a telescope that uses mirrors rather than lenses, he created the first working model of high quality. The design was much more straightforward to make and avoided the distortion caused by chromatic aberration. His invention has allowed us to build much larger telescopes without that obstruction, some of which are used in major space projects today.

Key People:

Hans Lippershey: Born in 1570 in Middelburg, Netherlands. A lens maker, he is credited with inventing the first telescope in 1608. He brought his design to the States General of the Netherlands to have it patented. Although the patent was refused because the design was too easily copied, the Dutch government paid him a hefty sum to make copies of his instrument.

Jacob Metius: Born in 1571 in Noord-Holland, Netherlands. A Dutch mathematician and a rival of Hans Lippershey, he submitted his own patent application for the device only a few weeks after Lippershey filed his; it was likewise denied because of the simplicity of the design and how easily it could be copied.

Galileo Galilei: Born on February 15, 1564, in Pisa, Italy. An Italian astronomer and mathematician, he made some of the most significant discoveries in astronomy. He is essential because he is credited with creating the Galilean telescope, with which he was the first to study outer space systematically. He gathered evidence that the Earth and the other planets move around the Sun, and he observed and described the moons of Jupiter, the rings of Saturn, the phases of Venus, sunspots, and the Moon's rough, uneven surface.

Johannes Kepler: Born on December 27, 1571, in Weil der Stadt in the Holy Roman Empire (modern Germany). A mathematician and astronomer, he made significant improvements to telescope design. After hearing about the inventions of Lippershey and Galileo, he described an improved design, the Keplerian telescope, in 1611. It used a convex eyepiece lens that gave viewers a much larger field of view and much higher magnification than the Galilean telescope.
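The magnification of a simple two-element telescope follows from the focal lengths of its objective and eyepiece. The figures below are illustrative, not the dimensions of any historical instrument:

```python
# Angular magnification of a simple telescope (Keplerian or Galilean
# layout): M = f_objective / f_eyepiece. A long objective paired with a
# short eyepiece gives high power, which is why swapping eyepieces
# changes a telescope's magnification.

def magnification(f_objective_mm, f_eyepiece_mm):
    return f_objective_mm / f_eyepiece_mm

print(magnification(900, 25))   # 36x with a 25 mm eyepiece
print(magnification(900, 10))   # 90x with a shorter 10 mm eyepiece
```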

Isaac Newton: Born on December 25, 1642, in Lincolnshire, England. A physicist and mathematician, he played a significant role in optics and is widely known for the Newtonian telescope. He is credited with building the first working reflecting telescope, using mirrors rather than lenses. The invention was revolutionary because it sidestepped the problem of colour dispersion, otherwise known as chromatic aberration. The design is still in use today and enabled the construction of larger telescopes that have made significant contributions to space exploration, astronomy and physics.

John Hadley: Born on April 16, 1682, in Hertfordshire, England. He is widely known for successfully improving reflecting telescopes, specifically the Newtonian telescope. His improvements allowed stronger magnification with sufficient accuracy and power for astronomy.
Current Uses:

Astronomers use telescopes to observe outer space, viewing celestial objects such as stars, constellations and galaxies, to name a few. They can be used for scientific study, or amateurs can use them to view the natural wonders of the sky.

Astrophotography: Professionals and amateurs can use telescopes for astrophotography, the process of photographing objects in space using a telescope fitted with a camera. An astrophotographer can capture beautiful images of stars, galaxies and other deep-sky objects, including faraway galaxies and nebulas.

Telescopes in Space: Space telescopes are invaluable for astronomy and for studying various cosmic objects, giving scientists a deeper understanding of deep space and distant galaxies. Their main advantage is that they orbit above the Earth's atmosphere and so are not affected by the distorting and blocking effects of the atmosphere and surface. The atmosphere contains shifting air pockets that register as motion blur in ground-based telescopes and also cause the apparent twinkling of the stars.

By exploring these cosmic objects, astronomers and scientists can gain a deeper understanding of the beginnings of the Earth and answer questions about how and when it was formed. They can also gain insight into how certain stars and galaxies first appeared.

Many wavelengths of the electromagnetic spectrum never reach the ground because they are absorbed or reflected by the Earth's atmosphere. Space telescopes are therefore a great tool for observing ultraviolet and infrared wavelengths, alongside visible light, free of atmospheric absorption.

Here are a few of the telescopes being used for space study. Combined with other modern technologies, they allow the universe to be observed across the entire spectrum of light, including sources invisible to the eye.
Hubble Space Telescope: Launched in 1990 and named after astronomer Edwin Hubble, it is a sizeable space-based telescope. It orbits the Earth, far from the problems that plague ground-based telescopes, such as rain clouds, light pollution and atmospheric distortion. Images produced by the Hubble Space Telescope can therefore be much clearer, brighter and more detailed. This optical observatory can reach some of the most distant areas of the universe, giving us more information about unknown galaxies and stars.

The telescope is a large reflector that gathers light from objects in space. The light strikes the primary mirror and is reflected on to a secondary mirror, but since no observer rides along with the telescope, it carries no eyepiece. Instead, it contains two cameras, a faint-object camera and a wide-field planetary camera, and two spectrographs. The wide-field camera can take wide-field, high-resolution images of celestial objects with far greater clarity than any Earth-based telescope, and the faint-object camera can detect objects 50 times fainter than anything an Earth-based telescope could view. The spectrographs split incoming light into its component wavelengths, much as a prism spreads light into a rainbow. The faint-object spectrograph gathers information about an object's chemical composition, while the high-resolution spectrograph can analyze ultraviolet light that cannot penetrate the Earth's atmosphere.

The Hubble Telescope has made significant contributions to astronomy. Its accomplishments include the discovery of nearly 1,500 galaxies as well as Nix and Hydra, two moons of Pluto.

Chandra X-Ray Observatory: Considered one of the world's most powerful X-ray telescopes; according to NASA, it has eight times greater resolution and can detect sources more than twenty times fainter than any previous X-ray telescope.

Launched by the Space Shuttle in 1999 and named after Nobel prize winner Subrahmanyan Chandrasekhar, the Chandra X-Ray Observatory detects an invisible form of light, X-rays, produced in the cosmos. Because X-rays cannot penetrate the Earth's atmosphere, the telescope operates in an orbit reaching about 133,000 kilometres from the Earth. X-rays and other forms of radiation, such as gamma rays, are produced when matter is heated to millions of degrees, for example when stars burn or explode and forge elements like sulphur, silicon and iron; observing these events requires an X-ray telescope. Such a telescope can obtain X-ray images of celestial objects that reveal a side of space invisible to the human eye, including massive explosions, black holes and neutron stars, and can add further dimensions to objects that also give off visible light.

The Chandra X-Ray Observatory has roughly half a billion times the observing power of the first telescope created by Galileo. It has allowed a deeper understanding of black holes, supernovas and dark matter, and gives scientists insight into the distribution of radiation and its role in the habitability of planets.

The Spitzer Space Telescope:

On August 25, 2003, the Spitzer Space Telescope was launched and used by NASA to study infrared wavelengths. The telescope was named after Lyman Spitzer, an American theoretical physicist, and was decommissioned on January 30, 2020. It was placed in orbit around the Sun at a distance where it would not pick up interfering infrared light from the Earth. Whereas X-rays are produced at extreme temperatures, cold operating temperatures are ideal for measuring infrared light. The Spitzer Space Telescope collected light emitted by cooler objects and could identify molecules and determine the temperatures of planetary atmospheres. It was also used to observe failed stars called brown dwarfs, extrasolar planets, giant molecular clouds and organic molecules.

Plans for the Future:

James Webb Space Telescope:

It is considered NASA's largest and most powerful telescope, a ten-billion-dollar infrared observatory that picks up where the Hubble Space Telescope left off. Launched on December 25, 2021, it is still in the early stages of its mission and has yet to accomplish its major scientific milestones.

The James Webb Telescope is a large infrared telescope with a primary mirror consisting of 18 segments, around 6.5 meters wide in total. It has a larger aperture, better diffraction-limited image quality and greater infrared sensitivity than any space or ground telescope currently in existence. One of the spacecraft's objectives is to probe the origins of the universe by studying distant galaxies' infrared signatures to determine their ages. The telescope will also play an essential role in determining how stars form, by studying stages of stellar evolution and examining the dense, cold cloud cores where stars are born. Other objectives include learning how planets form and whether there is a possibility of life elsewhere.

Giant Magellan Telescope:

Currently under construction at the Las Campanas Observatory in the Chilean Andes and planned for 2029, the Giant Magellan Telescope is set to be the largest optical telescope in the world. The structure will stand 65 meters high and contain seven mirrors, each 8.4 meters in diameter, together providing a light-collecting area of about 368 square meters. Chile offers ideal conditions, with dry, clear skies for most of the year; these weather and geographical advantages will give astronomers ideal conditions for viewing objects in space.

This Earth-based telescope will be paired with a spectrograph, an instrument that separates incoming signals into their component wavelengths. The tool will allow scientists to analyze light and discover the properties of the materials it has interacted with, detecting both visible and invisible forms of light. Developed by Harvard and Smithsonian researchers, it will measure the light spectrum to a very high calibre. According to the Center for Astrophysics, the instrument will be able to determine the mass of a planet and where liquid water could exist on its surface, and it will be able to detect molecules, such as molecular oxygen, in the atmospheres of exoplanets.
Along with a powerful astronomical camera, the telescope will fight the distortions caused by the Earth's atmosphere using adaptive optics: flexible secondary mirrors that change shape to compensate for distortions introduced by the air. As a result, astronomers can capture clearer, sharper images.
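The principle of adaptive optics, measuring the atmospheric distortion and applying its opposite, can be sketched in a highly simplified way. The eight-sample wavefront below is a toy model, not how a real deformable mirror is driven:

```python
# Adaptive-optics sketch (highly simplified): a sensor measures the
# wavefront error and the deformable mirror applies the opposite shape,
# cancelling the distortion. Real systems do this hundreds of times per
# second across thousands of actuators.

import random

def correct(wavefront, measured_error):
    """Subtract the measured distortion from each wavefront sample."""
    return [w - e for w, e in zip(wavefront, measured_error)]

flat = [0.0] * 8                                  # ideal, undistorted wavefront
error = [random.uniform(-1, 1) for _ in flat]     # simulated turbulence
distorted = [f + e for f, e in zip(flat, error)]  # what the telescope sees

corrected = correct(distorted, error)             # mirror compensation
print(max(abs(c) for c in corrected))             # residual error: ~0
```

In practice the measurement is imperfect and lags the atmosphere slightly, so the residual is small but never exactly zero.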

It is expected to produce images ten times sharper than the Hubble Space Telescope's. One of its objectives is to study and observe the distant universe and search for signs of life. The telescope will contribute to astronomy by examining the formation of planets and the chemical composition of other worlds, like the blanketing atmospheres of Venus and Jupiter, opening new possibilities in the search for extraterrestrial lifeforms. Other objectives are to study supernovas, white dwarfs and black holes, their effects on the space environments that surround them, and the origins of galaxies still unknown to us.

Nancy Grace Roman Space Telescope:

Set to launch in 2026, this future NASA infrared observatory is predicted to offer a panoramic view 100 times wider than Hubble's, creating the first wide-field maps from space. It will be located nearly 1.5 million kilometres from our planet. Originally the Wide Field Infrared Survey Telescope, it was renamed in honour of NASA's first chief astronomer, Nancy Grace Roman. It is set to study topics ranging from the mystery of dark energy to surveying planets beyond our solar system, along with stars, black holes and other features of faraway galaxies.

This telescope is expected to build on the findings of the James Webb and Hubble telescopes by combining wide-field imaging with high-resolution spectroscopy. This will allow scientists to view galaxies over long intervals of cosmic time, giving a greater understanding of how galaxies are shaped and moulded over time, and through such galaxy studies a deeper understanding of the mysteries of dark matter. Just as X-ray and infrared spectrography add layers to our understanding of astronomy, wide-field imaging will do the same by casting a comprehensive view of space rather than focusing on one specific object.
Issues With Telescopes:

Although the technology has improved significantly since the early days of refracting telescopes, telescopes, particularly common consumer models, still have problems that can degrade the image. On larger scientific telescopes these are often corrected. Some common issues include:

Spherical aberration

Astigmatism

Coma

Chromatic aberration

All of these aberrations cause images to come out blurred or distorted in some way. The light fails to focus correctly because of a problem with the angle of incidence or an incorrect curvature of the convex or concave lens or mirror.

When first launched, the Hubble Space Telescope ran into exactly this kind of problem: its primary mirror had been ground to a slightly incorrect curvature, producing spherical aberration and blurry images. Astronauts later installed small corrective mirrors that refocused the light beams coming off the primary mirror.

Parts of a Reflector Telescope:


Telescopes come in different shapes and sizes with many different variations, but all have the
same standard fundamental design and parts.

Although there are many different designs, the Newtonian design is the simplest and the essential one to describe.

Telescope Tube: This is the outer shell of the telescope and holds the entire instrument together.

Primary Mirror: A concave mirror that gathers light from its source.

Secondary Mirror: This plane mirror redirects the light towards the eyepiece.

Eyepiece lens: Sometimes contains a magnification lens; this is where the image is viewable.

Name of Invention: Camera

How does it work?

Cameras work much like the human eye. The appeal of this lens technology is that it captures what we see as a picture on a page, like a drawing but more realistic and lifelike. The device controls the amount of light that enters it and bends that light to a single sharp focal point using a combination of convex and concave lenses.

Cameras need lenses because without one the film or sensor would receive light from every direction at once, producing only a formless blur. Every lens has a focal length, displayed as a number on its rim. The focal length determines where the light comes to its sharpest focus, and the longer the focal length, the higher the magnification: a 24 mm lens gives less magnification than a 200 mm lens. Since the focal length of a simple lens cannot be changed, focusing is done by changing the image distance, the spacing between the lens and the film. As the subject moves farther away, the required image distance shrinks and the image gets smaller.
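The relation between subject distance and focusing position follows the thin-lens equation. The 50 mm focal length below is just a common illustrative value:

```python
# Thin-lens sketch: 1/f = 1/d_o + 1/d_i. Focusing a fixed-focal-length
# lens means positioning it so the image distance d_i satisfies this
# equation; as the subject moves farther away, the required image
# distance shrinks toward the focal length itself.

def image_distance_mm(f_mm, d_obj_mm):
    """Solve the thin-lens equation for the image distance."""
    return 1.0 / (1.0 / f_mm - 1.0 / d_obj_mm)

f = 50.0  # a common "standard" focal length, in millimetres
for d_obj in (200.0, 1000.0, 10000.0):
    d_img = image_distance_mm(f, d_obj)
    print(f"subject at {d_obj:>7.0f} mm -> image at {d_img:.1f} mm")
```

This is why a lens's focusing ring only needs to travel a few millimetres: the image distance varies between the focal length (distant subjects) and a little beyond it (close ones).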

Light enters through the front of the camera at the aperture, passes through the lens and forms a real image. This image falls on film or a sensor; in modern cameras it is easy to adjust the lens position to bring the image into focus.

Film cameras work by sending light to a film strip, but in DSLR and mirrorless cameras the lens sends the light to a digital sensor.

What is Focal Length:

Just as with the human eye, objects in the distance stay small until we move toward them. A camera's focal length measures the distance from the lens's point of convergence to the sensor recording the image; the larger the number, the narrower the angle of view.

More about the aperture:

Aperture governs a camera's depth of field. It determines how big the lens opening is: the larger the opening, the shallower the depth of field. Aperture is expressed in f-stops, and the larger the f-number, the smaller the opening. For example, f/2.8 admits more light than f/4, and far more than f/11.
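The way f-stops translate into light can be sketched numerically: the light admitted scales with the aperture area, so each standard full stop roughly halves it. The reference value of f/2.8 below is just a convenient baseline:

```python
# F-stop sketch: the light admitted is proportional to the aperture
# area, which scales as (focal_length / f_number)^2. Each standard full
# stop in the sequence f/2.8 -> f/4 -> f/5.6 -> ... roughly halves the
# light, because each f-number is the previous one times sqrt(2).

def relative_light(f_number, reference=2.8):
    """Light admitted relative to the reference f-number's aperture."""
    return (reference / f_number) ** 2

for n in (2.8, 4.0, 5.6, 11.0):
    print(f"f/{n}: {relative_light(n):.2f}x the light of f/2.8")
```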

Types of Camera Lenses:


Most camera lenses today fall into two basic types, prime and zoom, and many specialty lenses are variations of one or the other.

Prime Lenses:

These lenses are faster and sharper and produce better image quality, and they are very portable because they are lightweight. The downside is the fixed focal length: there is no zoom, so the photographer must move to reframe a shot.

Zoom Lenses:

These lenses are more versatile, offering a range of focal lengths in a single lens. Although flexible, they are optically slower and tend to be bigger and heavier because of how much glass they contain. They are best suited to professionals and keen photographers who would rather change focal length than move from place to place.

Variety Of Lenses:

Macro Lenses:

These lenses are great for close-up photography and produce sharp images at close range, letting the camera capture fine details of small subjects like bugs, birds and flowers, to name a few. This type of lens is popular in nature photography.

Telephoto Lenses:

These long-focal-length lenses produce a narrow angle of view and can isolate a distant subject, focusing on faraway objects without getting close to them. Although this type of lens is heavy and expensive, it remains popular for sports and wildlife photography.

Wide Angle Lenses:

This type of lens fits a large area into the frame, and everything stays in focus unless the subject is too close to the lens. It is popular for landscape, street and architecture photography.

Standard Lenses:

Standard lenses have a focal length between 35mm and 85mm. They serve as general-purpose lenses for a wide variety of photo projects and are great for beginners.

Parts of a Basic Camera:

Convex Lenses:

Convex lenses form real, inverted images on a film or sensor. A real image is produced when the object's distance from the lens exceeds the focal length.

Real image:

A real image physically exists and can be recorded on film; it is inverted by the lens.

Virtual Image:

It is a false image that can be seen but not recorded on film.


Diaphragm and F-Stop:

The diaphragm controls the amount of light that reaches the final image. The smaller the f-number, the brighter the image and the faster the lens. The f-number is the ratio of the focal length to the diameter of the opening, written f/D, as seen on an ordinary camera.
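The ratio f/D can be made concrete with a couple of illustrative lens sizes (not any particular product's specifications):

```python
# F-number sketch: N = focal_length / aperture_diameter. The same
# physical opening gives a higher f-number (a "slower" lens) on a
# longer focal length, which is why long telephoto lenses need very
# large front elements to stay bright.

def f_number(focal_length_mm, aperture_diameter_mm):
    return focal_length_mm / aperture_diameter_mm

print(f_number(50, 25))    # 50 mm lens, 25 mm opening  -> f/2
print(f_number(200, 25))   # 200 mm lens, same opening  -> f/8
```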

Shutter Speed:

How long the shutter stays open determines how much light reaches the film. Slower shutter speeds are needed when light is scarce, while faster shutter speeds reduce blur in images. Shutter speed is expressed in seconds: the shorter the opening time, the faster the shutter speed and the less light hits the film.
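The shutter's effect on exposure is linear, which the following sketch illustrates against an arbitrary 1/60 s baseline:

```python
# Exposure sketch: the light reaching the film or sensor scales
# linearly with how long the shutter stays open. Halving the shutter
# time halves the light, which is why fast shutter speeds demand wide
# apertures or bright scenes.

def relative_exposure(shutter_s, reference_s=1/60):
    """Light gathered relative to the reference shutter time."""
    return shutter_s / reference_s

print(relative_exposure(1/125))  # roughly half the light of 1/60 s
print(relative_exposure(1/30))   # twice the light of 1/60 s
```

Photographers trade this off against the f-stop: one stop less aperture can be compensated by doubling the shutter time, at the cost of more motion blur.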

Film:

Although film is rarely used any more, with modern cameras recording images electronically on sensors, film was once the medium that recorded the real image formed by the camera lens; the exposures were later processed to produce pictures viewable on photo paper.

Film cameras are called analog cameras, and electronic cameras are called digital ones.

Discovery of the Technology:


The earliest form of the camera, and what started the photography revolution, dates back to around 391 BC and the Chinese philosopher Mozi, although it is unclear who actually invented the device known as the camera obscura. The name means "dark room" or "dark chamber" in Latin. The concept is a small box with light entering through a tiny hole; the image is cast on the opposite wall.

There are records of Aristotle and Greek architects using the camera obscura for research on light and for some practical applications.

It was not until the 11th century that an Arab physicist named Ibn al-Haytham published a book on optics, writing extensively about a device in which light enters a darkened chamber through a tiny hole.

Leonardo da Vinci's notebook, the Codex Atlanticus, contains thorough explanations of the device and how it works, with detailed diagrams.

This ancient technique thus existed for centuries, studied and examined by prominent scholars in their work on light.

However, it was not until 1816 that Joseph Nicéphore Niépce and his brother Claude set out to improve the camera obscura.

Niépce's early experiments used silver chloride paper positioned at the back of a camera obscura; he called the resulting images "retinas," but they had many problems, as they were not permanent and blackened in daylight. In 1826 he used a working prototype to take what is regarded as the first photograph ever, a view of his home at Le Gras in France called "View from the Window," which took about eight hours to produce. This French camera innovator went on to test various chemical solutions in search of a permanent image, trying to create a positive image with compounds that were bleached by light rather than blackened by it, including a solution containing salt and oxides of manganese and iron.

Over the following decade he tried a variety of chemicals to develop the process he called heliography, or "sun writing." Eventually he settled on dissolved light-sensitive bitumen, sometimes known as "asphalt of Syria," a tar-like substance which he dissolved in oil of lavender and applied as a thin coating over a polished pewter plate. Although Niépce was able to create the first permanent image this way, further improvements were still required.

In 1829 Nicéphore Niépce partnered with a French painter, Louis Jacques-Mandé Daguerre, who was then well known for creating a theatre spectacle known as the diorama. Daguerre had used the camera obscura, and improved its lens, during the production of his art projects.

The duo shared the goal of reducing long exposure times and improving the camera obscura.

Niépce died in 1833, but Daguerre continued the work well after his death, and a few years later the daguerreotype was created. The process sensitized a silver-plated sheet of copper over a container of iodine, forming silver iodide on the surface. The plate was then exposed in a camera and afterwards held over a container of warm mercury, which triggered a chemical reaction that developed the image. Finally, the plate was washed to prevent further exposure.

In 1839 the invention was announced and the first daguerreotype images were recorded; that same year the French government bought the rights so the process could be shared with the public. Daguerre received 6,000 francs yearly, and Niépce's son Isidore received 4,000 francs yearly, for the rest of their lives.

Shortly after, the product became available to wealthy households in France.

Different Types of Cameras:

Calotypes:

Calotypes were an early photographic process invented by Henry Fox Talbot and perfected around 1841. Their main benefit was that they took less time to produce images. The process involved coating writing paper by soaking it in table salt and brushing it lightly with silver nitrate, creating an early film; when light hit this film, a chemical reaction captured the image, and the paper could then be waxed to preserve it. The images, however, came out blurrier and were negatives.

The Mirror Camera:

Created by Alexander S. Wolcott, these cameras formed the image with a concave mirror in place of a lens, which gathered more light and helped make early portrait photography practical.

Film Camera:

The film camera was the first breakthrough moment for modern photography. In 1888 an American entrepreneur named George Eastman created the first proper film camera, the Kodak. It initially used a single roll of paper film and gradually moved to celluloid. These films captured a negative image like a calotype but were much sharper, like a daguerreotype, and exposures took only seconds. To produce final images, the film had to remain in its dark box and be sent back to the Eastman Company for processing. The first Kodak could hold 100 pictures.
The Kodak Brownie was the first camera readily available to the middle class. Released in 1900, it was an American box camera that was simple to use and sold at a price point of only one dollar. Its release helped popularize the use of cameras for birthdays, vacations and family gatherings.

35mm Film Camera:

Called 35mm or 135 film, it was released in 1934 by the Kodak Film Company. The name
comes from the film being 35mm wide, with a frame 36mm wide by 24mm high for a 3:2 (1.5:1)
aspect ratio, supplied in a cassette or roll.

The film was placed in a cassette container shielding it from light. The user would load it into
the camera and wind it onto a take-up spool. The film advanced out of the cassette as each photo
was taken; once the roll was finished, it was rewound into the cassette. Each cassette of 135 film
provided up to 36 exposures.
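As a quick arithmetic check (an illustrative sketch, not from the source), the 3:2 aspect ratio follows directly from the standard 36mm by 24mm frame dimensions:

```python
from math import gcd

# 135-format frame dimensions in millimetres (standard values)
frame_width = 36
frame_height = 24

# Reduce the ratio using the greatest common divisor
g = gcd(frame_width, frame_height)  # 12
print(f"{frame_width // g}:{frame_height // g}")  # 3:2
print(frame_width / frame_height)                 # 1.5
```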

The Leica:

The Leica originated in 1913 as the first camera with collapsible and detachable lenses. It was
created at the optical works of the German industrialist Ernst Leitz by an engineer named Oskar
Barnack, a trained precision mechanic. Barnack kept refining the design, and in 1930 the company
released a camera named the Leica I, which had a screw-thread attachment to change between
three lenses.

The Leica II contained a rangefinder and a coupled viewfinder. In about 1932, the Leica III was
released with a shutter speed of 1/100th of a second. These cameras were considered innovative
and revolutionary for their time and remained popular until the 1950s, when more manufacturers
entered the mass market.

The Movie Camera:

In 1882 inventor Étienne-Jules Marey created the first chronophotographic camera, a forerunner
of the movie cameras that by 1891 marked the beginning of the movie industry. It was initially
called the chronophotographic gun, and it produced 12 images a second, exposing them on a
single curved plate. Early movie cameras ran at around 20 to 40 frames per second. Today,
cameras can record tens to thousands of frames per second.

First Single-Lens Reflex Camera (SLR):

In 1861 inventor Thomas Sutton created a camera, based on the camera obscura, that used a reflex
mirror to allow users to look through the camera lens and see the exact image being recorded on
film; without such a mirror, the user views a slightly different image than what is recorded on the
plate or film. The technology was of limited practicality in the 19th century and would only
become widespread in the 1970s and 1980s. Early SLR cameras were mainly used by professional
photographers and special-interest groups because, at that time, the price was not as reasonable
as cameras available for the mass market.

First Auto-Focus Camera:

This camera was invented in 1978, and its main benefit was a lens that could be moved
automatically to accommodate the distance between the device and the subject. The design built
on the principle of early rangefinders, which measured the distance between subject and camera.
The camera contained lenses and small motors to automate the process. At the time, autofocus
was only available on high-end SLR cameras.

Colour Photography:

In 1861, Thomas Sutton created an image-capture method that combined red, green and blue to
create any visible colour. Cameras previously used monochrome plates that could only produce
images in black and white or shades of grey. In 1935 colour photography became widely
available when the Kodak Company introduced Kodachrome film. The new film used different
emulsion layers on the same base to record colour. It was considered very expensive at its first
release and was mainly used by photography professionals. Today, the same three-colour system
is used to record colour.
The three-colour system would also be the basis for how Polaroid cameras operate. In 1871,
Richard Leach Maddox invented the gelatine dry plate, which made short, near-instant exposures
practical.
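The additive three-colour principle can be sketched in a few lines of code: three monochrome channel values, as if recorded through red, green and blue filters, combine into one full-colour value. The function name and example values are illustrative, not from the source.

```python
def combine_rgb(r, g, b):
    """Combine three monochrome channel values (0-255) into one
    24-bit colour value, the same additive red/green/blue scheme
    described above."""
    return (r << 16) | (g << 8) | b

# A pixel exposed strongly through the red filter, moderately through
# green and not at all through blue combines into an orange hue
print(hex(combine_rgb(255, 165, 0)))  # 0xffa500
```

Equal amounts of all three primaries (255, 255, 255) combine into white, and all zeros into black, which is why three monochrome exposures suffice to reproduce any visible colour.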

Instant photography arrived in 1948, when a man named Edwin Land and his company, the
Polaroid Corporation, released an instant-exposure camera under the name Polaroid, which
would become iconic as we know it today (colour Polaroid film followed later). The camera
could produce a photograph within the device, unlike traditional cameras requiring the film to be
developed later. The film had the negative taped to the positive with a layer of processing
material, and the user would peel the two pieces apart and discard the negative. Later versions of
the camera would do this automatically and eject only the positive. The film was about three
inches across with a square border. It was trendy in the 1970s and '80s and has seen a cultural
revival in recent years as part of retro culture.

The Digital Camera:

The original concept was theorized by scientists in 1961. However, it would not come to fruition
until 1975, when a Kodak engineer named Steven Sasson created a device that captured black
and white images onto a cassette tape rather than film. This early device required a screen to
view images rather than being able to print them. It had a resolution of 0.01 megapixels
(100 x 100). The first digital camera required 23 seconds to record an image. The technology
used a sensor whose elements change voltage when exposed to light.
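The quoted resolution checks out arithmetically: a 100 x 100 sensor records 10,000 individual samples, which is 0.01 megapixels.

```python
width, height = 100, 100          # the prototype's sensor grid
pixels = width * height           # 10,000 individual samples
megapixels = pixels / 1_000_000
print(megapixels)                 # 0.01
```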

This camera became available to the mass market at an affordable price in the 1990s with a
device called the Dycam Model 1 (also sold by Logitech as the Fotoman). It could record data
onto internal memory and had 1 megabyte of RAM. Users had to connect the camera to a
computer, at which point the image could be downloaded for viewing or printing.

Digital Single Lens Reflex Camera (DSLR)


The First Camera Phone:

The first camera phone was released in 1999 with the Kyocera VP-210. At the time, the built-in
camera was less capable than standalone cameras of the day, and the feature was considered
more of a gimmick than the tool it is today. It had a 110,000-pixel camera, and the images could
only be viewed on a 2-inch colour screen.

The camera phone became a revolutionary tool once the Apple company created a series of
camera phones. In 2007 founder Steve Jobs introduced the iPhone 1, or iPhone 2G, announced
on January 9, 2007. The phone had a 3.5-inch screen and contained a 2-megapixel camera, and it
came in two options: 4 GB and 8 GB models. Once this phone was released, phone photography
was popularized, and images became comparable to those from digital cameras. Camera phones
are also credited with popularizing the sending and receiving of images via cellular networks, a
feature that is so commonly used today. Today, the iPhone 13 has multiple lenses with a video
camera, and its cameras offer up to 12-megapixel resolution.

Key People:

Joseph Nicéphore Niépce

Louis-Jacques-Mandé Daguerre

Henry Fox Talbot

George Eastman

Ernst Leitz
Étienne-Jules Marey

Thomas Sutton

Richard Leach Maddox

Edwin Land

Steven Sasson

Steve Jobs

Current Uses:

Photography:

Today the camera is widely available at many different price points and is used for photography
by professionals and amateurs alike. Photography creates images for commercial or artistic
pursuits and is often used in fashion, wedding, wildlife and landscape photography. Cameras can also be
used to preserve memories and tell stories. Every time a family photo is shared on Facebook,
credit can be given to the original camera.

Videography:
Videography is the capturing and creating of moving images in the form of videos. Often, the
content is used for film, television and advertising. Other uses include corporate videos,
weddings and other private events. When you watch a television show or a YouTube video, it
was most likely created with a video camera. Today, many videos are uploaded to YouTube as a
form of content. The new influencer and social-media marketing industry has significant
influence because of how accessible the modern camera has become.

Wildlife:

Wildlife photography is documenting animals in a natural habitat using a camera or a video cam-
era, sometimes both. This process is usually done without disturbing the natural behaviour of the
wildlife animals. The objective is to capture and document animals in movement or action. This
type of observation is helpful for scientists to document animal behaviour and migration patterns.
The results sometimes lead to the documentation of certain species of animals that are
endangered or close to extinction. Wildlife photography can help raise awareness and create
incentives to take action on habitat protection and on solutions for man-made disasters.

American photographer James Balog created a documentary called Chasing Ice, in which he
used time-lapse cameras to photograph glaciers. The documentary helped show the ecological
and environmental effects of human impact and ecosystem interference over time.

Wildlife photography can show the changes taking place over a long period and create awareness
of events and natural disasters in inaccessible places.

Cameras in Space:

The Hubble Telescope contains cameras used alongside the telescope to help record and study
the cosmos. These cameras contain a component called a charge-coupled device (CCD). CCDs
use a tiny microchip rather than film or photographic plates to capture photographs. The
microchip is a sensitive detector of photons that uses the light collected by the telescope. It
consists of a large grid of individual light-sensing elements called pixels that convert light
patterns into numbers. The highest numbers are the brightest parts of the scene, and zero
represents darkness.
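A minimal sketch of how a CCD represents a scene (the values below are illustrative, not Hubble data): the readout is just a grid of numbers, where larger values mean brighter light and zero means darkness.

```python
# A tiny 3x3 "CCD readout": each number is the light level recorded
# by one pixel; 0 is darkness, higher values are brighter
readout = [
    [0,  12,  0],
    [7, 255, 30],
    [0,  45,  3],
]

# The brightest part of the scene is the pixel with the highest value
brightest = max(value for row in readout for value in row)
print(brightest)  # 255
```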

One of the primary cameras on the Hubble Telescope is called the Advanced Camera for Surveys
(ACS). This camera can detect ultraviolet, visible and near-infrared light and was mainly
designed for wide-field imagery. The Hubble also uses a spectrograph to break light down into
its components, like a prism. The spectrograph and the camera work together to form images
across a wide range of wavelengths.

Medical Cameras:

The endoscopic camera is a type of camera used for surgery. Similar to space cameras, it is
sensitive to the visible and infrared spectra. The optical image is transferred from the body
cavity being examined to the camera head via a rigid or flexible scope attached to the camera
head. This camera gives doctors the ability to perform minimally invasive surgeries. Rather than
cutting the patient open, the doctor can make a small incision and illuminate the internal cavity
that needs to be observed. The light emitted from the camera is transmitted into the human body
through a transmission medium called an optic fibre.

Plans for the Future:

Infrared Camera:

Modern cameras can already capture images in visible light across the full spectrum of colour.
With the invention of modern space cameras, future cameras will also capture images in the
infrared spectrum. An infrared camera ignores visible light and instead detects electromagnetic
radiation emitted as heat, revealing temperatures and the effects they have on our celestial
ecosystem. It uses sensors and thermal detectors to determine the level of infrared light, and the
sensors convert infrared signals into electrical currents.

3D camera:

The 3D camera was invented by Chris Condon and his company StereoVision. The camera is
based on stereoscopic imaging, enabling depth perception in images in three dimensions. He
originally invented 3D camera lenses, and with the support of his team, he earned the patent for
3D motion-picture lenses. Early 3D cameras created 3D movie experiences using a system that
sends light through an enclosed box and a unique lens to record an image on a light-sensitive
medium.

The device works similarly to how our eyes work: it can capture images in multiple dimensions
of height, width and depth through multiple angles and perspectives.

This product is helpful to product designers because it allows them to design products in 3D and
provide a better virtual sales experience. Customers can view images of the product from all
angles and dimensions to simulate an in-person sales experience.

It will also be helpful for real estate photos because it will make it easier to view homes virtually
without viewing individual properties in person.

Name of Invention: Optic Fibres

Discovery Of the Technology:


In the 1790s, in France, the Chappe brothers invented a device called the optical telegraph. It
was made up of a series of signalling towers; operators passed messages back and forth by
relaying visible signals from tower to tower. There would be many advancements over the
following century and a half before we arrived at modern optic fibre.

In 1854 a British physicist, John Tyndall, demonstrated to the Royal Society that light could
travel through a curved stream of water, showing that a light signal could be bent. He set up a
water tank with a pipe running out of one side and shone a light into the tank at the stream. As
the water fell out of the pipe, the arc of light followed the water down. These were the first steps
in understanding how guided light travels.

In 1880 inventor Alexander Graham Bell patented an optical telephone system called the
photophone. The device worked by focusing sunlight onto a mirror and then speaking into a
mechanism that vibrated the mirror. At the receiving end, a detector picked up the vibrating
beam and decoded it back into a voice, much as a telephone does with an electrical signal.

Although this idea would eventually return in the form of fibre-optic technology, which
underpins current phones and most modern communication, electrical telephone technology was
more realistic and usable at the time. One problem was light interference on a cloudy day, which
could disrupt the photophone.

In 1895 a French engineer, Henry Saint-Rene, designed a system of bent glass rods for guiding
light. The idea was initially used in an early attempt at television, which proved unsuccessful.

In the 1920s, British inventor John Logie Baird and American Clarence W. Hansell obtained
patents on the use of arrays of transparent rods to transmit images for television and facsimile
machines.
In the 1930s, scientist and inventor Heinrich Lamm was the first to transmit an image through a
bundle of optical fibres. This device was initially intended for looking at inaccessible parts of
the body.

This was an important step, but Lamm faced several difficulties. He was Jewish, and during the
rise of Nazi Germany he fled to America. The image he produced, of a light bulb filament, was
of poor quality, and he was eventually denied a patent because a similar product already existed.

In 1951 Holger Moeller applied for a Danish patent on fibre-optic imaging. His device comprised
glass or plastic fibres clad in a transparent material of low refractive index, designed to reduce
signal interference, or crosstalk, between fibres. He also did not receive a patent because of the
similarity to John Logie Baird's design.

In 1954, building on the early work of John Tyndall, UK-based physicist Narinder Singh Kapany
invented the first true fibre-optic cable. He coined the term fibre optics and would contribute to
the development of this field through his teachings and books, including one published in the
1960s.

Charles Kuen Kao, working in the 1960s, is considered the father of optical communication and
in 2009 won a Nobel Prize in Physics for his work. Through his research he discovered the
physical properties of glass and proved that glass fibres were suitable as a conductor of
information. Although this was only possible by purifying the glass and drawing it into thin
fibres, such fibres could carry vast amounts of information over long distances with minimal
signal loss. Dr. Kao's work laid the groundwork for high-speed data communication and is the
basis for how optic fibres for broadcast radio and television work.
In the 1970s, researchers Robert Maurer, Donald Keck and Peter Schultz patented a fibre-optic
wire called Optical Waveguide Fiber. Its main benefit was that it could carry 65,000 times more
information than a copper wire. They used silica, a material with a high melting point and a low
refractive index. The information is carried by a pattern of light waves that can be decoded at the
destination, even thousands of miles away.

In 1973 scientists at the Bell Lab created a process to create and form ultra-transparent glass that
can be mass-produced into a low-loss optical fibre, which is the current standard and quality of
fibre optic cable. In the 1970s and early 1980s, telephone companies began using fibre to rebuild
communication infrastructure.

In the 1980s, the telephone company Sprint was the first to operate a nationwide, 100 percent
digital fibre-optic network.

The erbium-doped fibre amplifier (EDFA) was invented in 1986 by David Payne of the
University of Southampton and Emmanuel Desurvire at Bell Laboratories. This invention
reduced the cost of long-distance fibre systems by using optimized laser amplification, and the
EDFA also eliminated the need for optical-electrical repeaters. By 1988 the first transatlantic
telephone cables using fibre-optic technology entered service.

In 1991, optical fibre systems with built-in optical amplifiers appeared, along with a new design
called photonic crystal fibre. All-optical systems can carry 100 times more information than
cables with electronic amplifiers, and far more than traditional copper cables. Photonic crystal
fibre improves performance by guiding light through diffraction from a periodic structure rather
than total internal reflection, allowing power to be carried more efficiently than in older cables.

In 1997, infrastructure for the next generation of Internet applications was built: the Fiber Optic
Link Around the Globe (FLAG), the world's longest single-cable network.
By 2000, 80% of the world's long-distance traffic was carried over optical fibre cables, with over
25 million km of cables.

How does it work?

Optical fibres are long, thin strands of glass about the diameter of a human hair. The strands are
arranged in bundles called fibre-optic cables, which can transmit light signals over long
distances.

The light travels down a fibre-optic cable by bouncing off the walls repeatedly.

The light bounces down the fibre by internal mirror-like reflection. Imagine a beam of light
moving down a clear glass pipe: when it strikes the wall at a glancing angle, beyond the critical
angle of roughly 42 degrees for glass, it reflects back into the pipe. This process is called total
internal reflection, and it keeps the light inside the pipe.
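The roughly 42-degree figure comes from Snell's law: total internal reflection occurs beyond the critical angle, arcsin(n2/n1), where n1 and n2 are the refractive indices of the core and the surrounding medium. As a sketch, using typical textbook values for glass (n about 1.5) against air (n about 1.0), which are assumptions rather than figures from the source:

```python
import math

def critical_angle_deg(n_core, n_outer):
    """Critical angle (degrees from the surface normal) beyond which
    light undergoes total internal reflection at the core boundary."""
    return math.degrees(math.asin(n_outer / n_core))

# Glass-to-air boundary: n ~= 1.5 vs n ~= 1.0
print(round(critical_angle_deg(1.5, 1.0), 1))  # 41.8
```

Real fibre uses glass cladding with a refractive index only slightly below the core's, so its critical angle is much larger, which is why light must enter the fibre nearly parallel to its axis.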

A fibre-optic system consists of a transmitter, an optical fibre and a receiver.

First, electrical data enters the fibre-optic system. The transmitter accepts the electrical signals
and converts them to optical light signals, sending them by modulating the output of a light
source, an LED or a laser.

Inside an Optical Fiber:


A thin strand of glass acts as the transmission medium. The light signal travels through the
fibre's core from one end to the other by total internal reflection, bouncing down the core in a
series of reflections until it reaches the far end of the pipe.

The receiver is the light-to-electrical converter at the end of the glass strands. Optical signals are
received by photodiodes, which convert the light back into a digital electrical signal. The
resulting electrical data output can then be translated and processed by a router or network
switch.
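The transmitter-fibre-receiver chain can be sketched as a toy pipeline: electrical bits modulate a light source on and off (on-off keying), and a photodiode at the far end converts the pulses back into bits. This is a conceptual sketch of the steps above, not a model of real hardware, and the function names are illustrative.

```python
def transmit(bits):
    """Transmitter: modulate an LED/laser, turning bits into light pulses."""
    return ["pulse" if b else "dark" for b in bits]

def fibre(pulses):
    """Fibre: carries the pulses by total internal reflection
    (idealized here as a lossless channel)."""
    return list(pulses)

def receive(pulses):
    """Receiver: a photodiode converts light back into electrical bits."""
    return [1 if p == "pulse" else 0 for p in pulses]

data = [1, 0, 1, 1, 0]
assert receive(fibre(transmit(data))) == data
print("round trip ok")
```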

The cable is made up of two separate parts:

The middle part, called the core, is the glass structure that carries the light. Another layer of
glass, called the cladding, is wrapped around the core. The cladding keeps the light signals
confined within the core.

Two Types of Fiber Optic Cables:

Single Mode Fiber:

The simplest structure has a narrow core, around 5 to 10 microns in diameter. The light travels
down the middle without bouncing off the edges; the small core keeps the light from scattering
into the cladding even when the fibre is bent or curved. It is mainly used for long-distance data
transfer, such as Internet and telephone applications, because of its minimal signal loss and the
absence of interference from adjacent modes. The signal is carried by single-mode fibres
wrapped into a bundle and can travel over 100 km. This type of fibre is excellent for long
distances because there is minimal dispersion (spreading out of the light) and it experiences
lower attenuation (loss of optical power).
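Attenuation is usually quoted in decibels per kilometre. As a sketch of the arithmetic (the 0.2 dB/km figure is a typical value for modern single-mode fibre, an assumption rather than a number from the source), the fraction of optical power surviving a long run is:

```python
def surviving_fraction(db_per_km, distance_km):
    """Fraction of optical power remaining after the given distance,
    from the standard decibel loss formula 10^(-dB/10)."""
    total_loss_db = db_per_km * distance_km
    return 10 ** (-total_loss_db / 10)

# 0.2 dB/km over 100 km -> 20 dB total loss -> 1% of the power remains,
# which a sensitive photodiode can still detect
print(surviving_fraction(0.2, 100))  # 0.01
```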
Multimode Fiber:

Multimode fibre is about ten times larger than single-mode fibre. In this type of fibre, light
beams travel through the core along various paths, which makes it great for sending data over
short distances in applications like linking or interconnecting computer networks. Multimode
fibres can carry more than one frequency of light simultaneously. Because multimode fibre has
a higher signal loss than single-mode fibre, it is used for communication over shorter distances
and less bandwidth-intensive applications. It is most commonly used in local area networks such
as buildings, corporate networks or school campuses.

Gastroscope:

The gastroscope is mainly used in endoscope technology and uses a thicker type of fibre bundle.
Endoscopes are a medical tool used to check for illnesses inside the stomach by going down the
throat. The thick fibre-optic cable consists of many optical fibres. On one end, there is an
eyepiece and a lamp. The lamp shines its light down one part of the cable into the patient's
stomach; the light reflects off the stomach and into a lens at the bottom of the device, then
travels back up another part of the cable to the doctor's eyepiece.

There is also an industrial version of this device, called a fiberscope. It is used to examine
inaccessible places inside machinery, such as airplane engines.

Key People:

John Tyndall:

He was an Irish experimental physicist born in County Carlow, Ireland, on August 2, 1820. He
was notable for the first practical demonstration of light propagation through a stream of water
via internal reflection, calling this arrangement the light pipe. This early work would become the
basis for modern light technology and fibre-optic communications many years later.

Narinder Singh Kapany:

Born in Punjab, India, in 1926, he was a UK-based physicist who coined the term fibre optics.
He was the first to invent a fibre-optic device, designing and manufacturing a glass wire cable
for transporting light based on the early demonstrations of John Tyndall. He made many
significant contributions to optical technology through his books and teachings, specifically the
book he published in 1967, Fiber Optics: Principles and Applications. Many supporters consider
him the father of optical-fibre technology, though he did not produce a practical long-distance
application.

Charles Kuen Kao:

He was born on November 4, 1933, in Shanghai, China. He would later study and work in the
UK, where he proposed in the 1960s that fibres made of ultra-pure glass could transmit light
over distances of many kilometres without total signal loss. Considered the father of fibre-optic
technology, in 2009 he received the Nobel Prize in Physics, shared with two other scientists, for
discovering how light can be transmitted through fibre-optic cables. Kao's discoveries and
inventions have had a significant impact on the way modern fibre optics operate. Most of
today's internet, television and telephone traffic relies on fibre-optic technology, which can be
traced back to Charles Kuen Kao's work.

Current Uses:

Before fibre-optic technology, copper cables were used. Fibre optics have the advantage because
they are lighter, more flexible and can carry large amounts of data at very high speeds, with less
attenuation and signal loss. Another benefit is that information travels ten times further before it
needs electronic amplification. Compared to traditional copper wires, optical cables are also
easier to maintain and more cost-effective.
Because of fibre-optic technology, we now have cloud computing, in which people store and
process their data remotely; fibre internet speeds are roughly five to ten times faster than
traditional DSL broadband networks.

Faster connections have also allowed for streaming movies and online games and have made the
internet much more accessible than in previous years. The world is more connected virtually
than it has ever been. It is easy to share information, pictures and videos of events going on
around the world; issues like war and famine can be brought to global attention within seconds
through the power of the internet.

Telephones adopted fibre-optic technology around 1980, roughly ten years after the technology
was born, as many telecommunications companies started replacing their old infrastructure with
new optic-fibre technology.

In previous years, television and radio networks operated by shooting electromagnetic waves
through the air from a single transmitter at the broadcast station to thousands of antennas. To
obtain more channels, a user needed coaxial cables installed. These cables used a conductor
similar to copper cables but with metal screening to prevent crosstalk interference; the issue was
that obtaining more channels required more cables. Today most of these cables have been
replaced with modern fibre-optic cables; one cable can carry enough data for several hundred
TV channels at once. They also have less interference and better signal, picture and sound
quality, and they require less amplification to boost signals over long distances.

Optical fibres are also used in medical applications. For example, an endoscope uses this type of
tech to help doctors peer inside a patient's body without cutting them open through a treatment
called endoscopy. To view and observe the upper digestive tract, a flexible gastroscope tube is
placed into the mouth and travels down until it reaches the small intestine passing along the
esophagus and stomach. The other end of the gastroscope tube is connected to a light and a video
camera whose images can be viewed by a doctor. It can be used to monitor symptoms of
indigestion, nausea or difficulty swallowing.

Two main cables in the endoscope carry light from a bright lamp into the body, shining light on
the area where the endoscope has been inserted. The light bounces along the walls of the cable
into the patient's body and illuminates the area being examined. Light reflected off the body
then travels back up the optic-fibre cable, bouncing off the glass walls, and is finally captured by
a CCD, which records images like a camera and displays them on a monitor.

Plans for the future:

Lab-in-a-fiber:

Lab-in-a-fiber is a new technique in which a hair-thin fibre-optic cable with a built-in sensor is
inserted into a patient's body. The fibre has the same composition as a communication cable but
is much thinner. The sensor works by sending light through the fibre, from a lamp or a laser, to
the part of the body the doctor wants to study. As the light passes through the fibre, the patient's
body alters its properties, such as intensity or wavelength. The doctor can then measure how the
light changes using interferometry techniques. This can measure temperature, blood pressure,
cell pH or the presence of medicine in the bloodstream.

Pacific Light Cable Network:

This network is the first direct submarine cable using optic fibres built and owned by Google,
Meta and Pacific Light Data Communications, announced in 2015. The network was designed
to connect and improve internet speeds between Hong Kong, Taiwan, the Philippines and the
US. The cables span approximately 13,000 km, and the shortest route is between Hong Kong
and Los Angeles.
San Francisco:

San Francisco is looking to build a city-wide network using fibre-optic technology, designed to
help improve Internet access for residents who do not have high-speed internet at home. Without
it, residents struggle to apply for jobs, pursue education and support their children's schooling.
One in seven public-school students in San Francisco lacks a computer with a high-speed
Internet connection. The city aims to spend around $1.5 to $1.9 billion to improve public
services. San Francisco believes that a strong internet connection can help in industries like
healthcare, education and energy, leading to job booms and local economic growth.

Facebook Simba:

The Meta company is looking to build an all-optical network using new technology that allows
data transmission at high speeds without electrical processing. The project is named after Simba,
the lion from The Lion King. It is meant to be built in Africa to strengthen links in a market
where Facebook and WhatsApp are used daily, and it hopes to drive down bandwidth costs. The
cables will be placed around the shores of each nation involved and will link up at the
beachheads of several countries. The goal is to build a dedicated and reliable link in regions with
poor internet access; there are still 3.8 billion people worldwide without internet access.
