
Vacuum Tube

The vacuum tube is a sealed, transparent assembly made of evacuated glass.
Crookes Tube
In the early 1870s, British physicist William Crookes connected anode (+ terminal) and cathode (- terminal) electrodes to the ends of an evacuated tube and applied a high voltage across it. His purpose was to measure the conductivity of air. During these conductivity experiments, he observed what he took to be a ray travelling from the cathode to the anode. These rays, however, were deflected by an electric field. (Light does not deviate in an electric field.)
Cathode Ray

A straight beam is observed travelling from the cathode to the anode (from - to +); this is called the cathode ray. The cathode ray is essentially a stream of negatively charged electrons flowing in a straight line. These rays do not depend on the type of electrode metal (iron, platinum, etc.) or the gas in the tube.
Contributions
As a result of these studies, in 1897 J.J. Thomson showed that the observed rays were electrons, part of the structure of the atom, and put forward the atomic model known as the raisin cake (plum pudding) model.

In 1895, Wilhelm Röntgen discovered X-rays using the Crookes tube.


The tube television (CRT display), invented in 1924, also works on the principle of cathode rays.
After World War I, CRT development accelerated. It was during the
1920s that Allen B. DuMont founded DuMont Laboratories and
produced the first cathode ray oscilloscope. At the same time, AT&T,
RCA and Westinghouse were doing original research on the television.
However, in 1926, Kenjiro Takayanagi — one of the founders of JVC of
Japan — succeeded in getting the first flickering images to appear on a
CRT screen. He pioneered and built the world's first electronic black
and white (B&W) TV. Within a few years, DuMont and Zenith produced
the first commercial electronic B&W television sets.
World War II interrupted the development of television, and the resources of the industrialized world were put toward the production of such devices as radar. During the war, experiments with colour television began, and in 1948 CBS announced the development of a colour system. The National Television System Committee (NTSC) colour standard was adopted in 1953 — a standardization of how colour television signals were to be transmitted through the existing B&W system. The principles of this colour system became the main component of the emerging computer monitor.
The inside of the tube is a vacuum, so there is no air. When the cathode filament is heated, electrons are freed to move in the vacuum, and the voltage difference with the anode at the screen surface accelerates them, focused into a narrow beam directed towards the screen. Electrons hitting the phosphor layer on the screen surface make it glow, illuminating the pixels. Horizontal and vertical deflection coils around the neck of the tube steer this beam so that it can reach every point of the screen.
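As an illustration of how quickly the beam must sweep the screen, the minimal sketch below estimates how long the beam spends on each pixel; the 640 x 480 resolution and 60 Hz refresh rate are assumed example values, not figures from the text above.

# Rough estimate of how long the CRT beam dwells on one pixel per frame.
cols, rows = 640, 480        # assumed screen resolution
refresh_hz = 60              # assumed full-screen refresh rate

pixels_per_second = cols * rows * refresh_hz
dwell_time_ns = 1e9 / pixels_per_second   # nanoseconds per pixel

print(f"{pixels_per_second:,} pixels scanned per second")
print(f"~{dwell_time_ns:.0f} ns spent on each pixel")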
Three beams are emitted from the electron gun assembly, one for each of the primary colours Red, Green and Blue, known as the "RGB colours". All intermediate colours in nature can be produced by mixing these colours: mixing all three at full intensity creates white, while sending no light at all gives black. All other secondary colours are obtained by mixing these primary colours in different proportions. On their way to the phosphor layer on the screen, the beams pass through a perforated shadow mask. This mask ensures that each beam hits only the places where its colour is desired. Each pixel on the screen is divided into three sub-pixels, and the electron beams passing through the shadow mask with very fine adjustment illuminate the sub-pixels individually. As a result, the main pixel shows the colour formed by the combination of its sub-pixels, and that colour appears on the television screen. Since this process is repeated at very high speed — the beam sweeps thousands of lines every second — the received TV signal forms a real-time image on the screen.
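A minimal sketch of the additive mixing described above, assuming 8-bit channel intensities (0-255); the helper add_light is hypothetical and used only to show how the three sub-pixel contributions combine.

# Additive colour mixing: light from the sub-pixels simply adds, channel by
# channel, clipped to the maximum intensity the phosphor can deliver (255 here).
def add_light(a, b):
    return tuple(min(x + y, 255) for x, y in zip(a, b))

RED, GREEN, BLUE = (255, 0, 0), (0, 255, 0), (0, 0, 255)

print(add_light(add_light(RED, GREEN), BLUE))  # (255, 255, 255) -> white
print(add_light(RED, GREEN))                   # (255, 255, 0)   -> yellow, a secondary colour
print((0, 0, 0))                               # no light at all -> black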
Who manufactured the first TV?

In 1926, Kenjiro Takayanagi — one of the founders of JVC of Japan — succeeded in getting the first flickering images to appear on a CRT screen. He pioneered and built the world's first electronic black and white (B&W) TV.
During the war, experiments with colour television began, and in
1948, CBS (USA) announced the development of a colour TV
system.
THE RISE OF THE COMPUTER

The emergence of the present-day colour monitor followed in stride with the appearance of the computer, increased applications for use, and the ultimate need for a high-resolution display. As colour TV broadcasting became more prevalent by the mid-1950s, so too came the need for the monitor as a tool to aid in post-production, colour editing and image evaluation. As this monitor connected directly into the NTSC system, its image quality was adequate only for the television industry.
The advent of the microcomputer (or PC) beginning in the late 1970s became the platform for the development of the graphics card. The first graphics cards were add-ons to the DOS-based microcomputer and were required for running the application software used to produce graphics (e.g., VersaCAD, AutoCAD). Resolution and colour depth evolved gradually, from 160 x 200 at 4 colours (CGA) to 800 x 600 at 256 colours (VGA). The change in the monitor's screen size was also part of this evolutionary change.
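To give a sense of why colour depth grew with graphics-card memory, the sketch below computes the framebuffer size implied by the two modes quoted above; the bits-per-pixel values follow directly from the colour counts, and the helper name framebuffer_bytes is hypothetical.

import math

# Framebuffer size = width * height * bits_per_pixel, where
# bits_per_pixel = log2(number of displayable colours).
def framebuffer_bytes(width, height, colours):
    bits_per_pixel = math.log2(colours)
    return int(width * height * bits_per_pixel / 8)

print(framebuffer_bytes(160, 200, 4))     #   8,000 bytes for the CGA mode quoted above
print(framebuffer_bytes(800, 600, 256))   # 480,000 bytes for the VGA mode quoted above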
When was the first graphics card manufactured?
The first graphics cards appeared in the late 1970s as add-ons to the DOS-based microcomputer and were required for running the application software used to produce graphics (e.g., VersaCAD, AutoCAD).
Use of Computer in Design
• The adoption of AutoCAD as a standard in technical drawing was a dramatic change in design and manufacturing.
• It is still the basic software in design.
• The connection between design and MRP was also provided via AutoCAD.
• All of this depends entirely on the graphics card and monitor.
Change in Computer Aided Design From 1970 to 2020
In 1970, two-dimensional technical drawing was achieved by using a graphics card, with the display on a monochrome monitor and a mouse as the input device.
Nowadays virtual reality is used by many CAD systems, such as Autodesk, SolidWorks, CATIA and Onshape.
THE CRT'S DECLINE

From here on in, the technology surrounding the CRT remained virtually the same,
except for improvements in reducing the geometric curvature of the tube face, the
width of the CRT and improvements in the electron gun structure. Additionally,
manufacturers improved the electronics within the monitor to produce better
specifications, including the design of more effective multi-frequency monitors.

In the late 1980s, it was forecasted that the CRT would become obsolete by the
1990s as LCD technology had started arriving on the scene in the 1980s. However,
this was clearly not the case as the CRT monitor continued to reign supreme well
into the 2000s. Today, the demand for CRT screens has fallen so rapidly that they
have, for the most part, disappeared from the scene. Hitachi, in 2001, halted
production of CRTs at its factories. In 2005, Sony announced their plan to stop
production of CRT displays as did Mitsubishi just about the same time.
This decline, however, unfolded more slowly in the developing world. According to iSuppli — an industry-based statistical organization — production of CRTs was not surpassed by LCDs until the fourth quarter of 2007, due largely to CRT production at factories in China.
When did the production of CRT monitors end?
• Effectively around 2007, when LCD production overtook CRT production, with the remaining CRT output coming largely from factories in China.
CRTs — despite research to better the technology — always
remained relatively bulky and occupied far too much desk space in
comparison with newer display technologies such as LCD.
Consumers eager to be part of the trend showed more interest in the
emerging displays such as LCDs and plasmas. Today, the LCD has
taken on a dominant role in all areas where the CRT was once the
king.
LCD TECHNOLOGY IS ONLY 122 YEARS OLD

The liquid crystal is the driving force of the LCD, and its discovery goes well back to 1888. In what was considered more of a chance occurrence while examining the properties of cholesterol extracted from carrots, Austrian botanist Friedrich Reinitzer happened across a fourth, liquid crystal state of matter.
Molecules in the solid state do not move and remain in an 'ordered' fashion. In converting a solid to a liquid, these molecules gain enough energy to break free and move in an unrestricted and 'unordered' fashion. Somewhere in between these two states, the LC state is achieved — molecules are free to move in an 'ordered' fashion, so long as they stay aligned in the same direction.
For the next 80 years, liquid crystals remained
a pure scientific curiosity. Nevertheless, over
this time, other key qualities were discovered.
For one, an LC could be controlled by either a
magnetic or electric field in such a way as to
change its orientation. Second, an LC was
able to allow the transmission of light through
it. And third, it was capable of twisting that
light. The most popular liquid crystal phase,
due to its use in LCD technology, is known as
nematic. Derived from the Greek word nema
for thread, nematic LCs align in a thread-like
format when an electromagnetic field is
applied.
In the early 1960s, a way
was found to stabilize the LC
at room temperature. And in
late in the same decade, the twisted
the twisted nematic (TN) was
developed — a nematic liquid
crystal naturally twisted 90
degrees, much like a quarter
helix.
The first LCDs produced employed passive-matrix (PM) technology and became popular in the early 1970s in such applications as digital watches and calculators. PM worked by using one transistor for each column and one for each row, switched on in sequence to activate a particular pixel on the screen. The problem with PM was that as the number of pixels on the screen increased, response time slowed and image quality suffered. Moreover, for colour capability, the technology was very limited in the number of colours displayed (i.e., 16 colours).
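A minimal sketch of the scaling problem just described: in a passive matrix the drive electronics grow with rows plus columns, but each whole-screen refresh must still strobe every row in turn, so the time available to each row shrinks as the screen grows. The screen sizes and the pm_stats helper are illustrative assumptions, not figures from the text.

# Passive-matrix scaling: rows are addressed one at a time, so each row's share
# of the refresh period falls as the row count grows.
def pm_stats(cols, rows, refresh_hz=60):
    drive_lines = cols + rows                 # one driver per column + one per row
    row_time_us = 1e6 / (refresh_hz * rows)   # time each row is addressed per frame
    return drive_lines, row_time_us

for cols, rows in [(5, 4), (320, 240), (640, 480)]:   # assumed example screens
    lines, t = pm_stats(cols, rows)
    print(f"{cols}x{rows}: {lines} drive lines, {t:.1f} us per row per frame")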
By the early 1980s, PM-LCDs were being used in electronic
typewriters and personal word processors. However, by the mid-
1980s, PM-LCDs were proving ill-suited as the screen
size increased. An improvement came in the form of Super Twisted-
Nematic (STN) LCDs that improved the picture quality, viewing angles
and contrast, which made the technology well-suited for use in laptops
and word processors. Nonetheless, STN still had visibility problems,
resulting in Double Super Twisted-Nematic (D-STN) being
developed in 1987. D-STN required the overlaying of two liquid crystal
layers to solve the problem, thereby increasing the weight, thickness
and cost of the screen. Triple Super Twisted-Nematic (T-STN)
further improved the LCD, but with added weight, thickness and cost.
Passive-matrix use in monitor screens was the norm up until the early-
1990s when active-matrix (AM) LCDs emerged as a superior display.
Super-Twisted Nematic LCDs
Twisted nematic displays rotate the director of the liquid crystal by 90°, but super-twisted nematic displays employ up to a 270° rotation.
This extra rotation gives the crystal a much steeper voltage-brightness
response curve and also widens the angle at which the display can be
viewed before losing much contrast. With the sharper response, it is
possible to achieve higher contrast with the same voltage selection
ratio. Therefore, the degree to which multiplexing is possible is greatly
increased. The largest common super-twist displays have up to 500
rows.
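The link between row count and the achievable voltage selection ratio is usually expressed by the Alt-Pleshko relation for multiplexed passive-matrix drive; that formula is a standard result assumed here rather than stated in this text, while the 500-row figure is the one quoted above.

import math

# Alt-Pleshko limit: the best achievable ratio of RMS 'on' voltage to
# RMS 'off' voltage when N rows are multiplexed.
def selection_ratio(n_rows):
    return math.sqrt((math.sqrt(n_rows) + 1) / (math.sqrt(n_rows) - 1))

for n in (100, 240, 500):
    print(f"{n} rows -> selection ratio {selection_ratio(n):.3f}")
# With 500 rows the ratio is only about 1.05, which is why STN's steeper
# voltage-brightness curve is needed to keep usable contrast.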
The active-matrix (AM) LCD differs from PM technology in that each pixel — in practice, each colour sub-pixel — has its own transistor, known as a Thin Film Transistor (TFT). The TFT setup allows for improved brightness, higher contrast, better image quality and faster response time than PM technology. However, an AM-LCD requires a substantial number of transistors to operate. For instance, take a monitor with a resolution of 1600 x 1200: multiplying 1,600 columns by 1,200 rows by 3 colour sub-pixels yields a requirement of 5.76 million transistors. When an LCD is said to have a 'bad pixel', this refers to a defective transistor on the display, which, as you can imagine, would be a virtually impossible thing to fix.
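A minimal check of the arithmetic above, extended to a few other common resolutions; only the 1600 x 1200 case comes from the text, the other resolutions and the tft_count helper are illustrative.

# Each colour sub-pixel in an AM-LCD needs its own thin-film transistor (TFT),
# so the total count is columns * rows * 3 sub-pixels.
def tft_count(cols, rows, subpixels=3):
    return cols * rows * subpixels

for cols, rows in [(800, 600), (1024, 768), (1600, 1200)]:
    print(f"{cols}x{rows}: {tft_count(cols, rows):,} transistors")
# 1600x1200 -> 5,760,000, the 5.76 million figure quoted above.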
A KEY TURNING POINT FOR LCD SUPREMACY

TFTs were first developed in the United States during the 1960s, and it was later in that decade that such companies as RCA Labs and Westinghouse came up with the idea of using TFTs in displays and laid the foundations for today's AM-LCD technology. While these U.S. companies were the pioneers in the field, they ended up walking away from the technology and instead placed their bets on passive matrix. It was the Japanese who took AM technology to the next level.
THE JAPANESE LCD REVOLUTION
It was during the 1970s and 1980s that Japanese manufacturers such
as Sharp, Toshiba, Sanyo and Matsushita picked up development on
the AM-LCD from their U.S. counterparts. They began planning record
investments in AM-LCD production capabilities. As a rule of thumb, in
order to construct a state-of-the-art AM manufacturing facility, at least
US$500 million of initial investment was required, followed by a period
of additional investments to develop a reliable manufacturing process.
For instance, in 1992, Sharp announced multi-year spending of $1.32
billion, Toshiba — $1.06 billion, Sanyo — $597.15 million, and
Matsushita — $464.45 million. A key development in Japan that lent a huge hand to these manufacturers' large investment plans was that LCD component producers (such as glass manufacturers) were extremely responsive to the LCD production plans and made large capital investments of their own in league with the LCD producers.
THE REVOLUTION SPREADS ACROSS EASTERN ASIA

By 1995, LCDs (both AM and PM) accounted for approximately 87% of the total flat panel display market. Of this market, the Japanese dominated with a 90% world market share due to their massive investments in AM-LCD technology. Throughout the late 1990s, a number of other East Asian countries began entering the race for AM-LCD production, particularly South Korea and Taiwan. South Korea was interested as a means to replace the CRT, but also because the LCD manufacturing process could easily take advantage of the semiconductor manufacturing infrastructure already in place, thereby reducing investment spending substantially.
HOW ARE THE SEMICONDUCTOR AND LCD MANUFACTURING PROCESSES SIMILAR?

The LCD manufacturing process is very similar to that of semiconductor production. The only major difference is that the TFT process involves glass substrates whereas the semiconductor process involves silicon wafers. In the semiconductor industry, the pressure to reduce costs and increase productivity has led manufacturers to increase the size of wafers and reduce the size of chips so that more chips can be produced per wafer. Doing so has increased yield by reducing the share of chips lost to each defect on the wafer. The same has been applied to LCD manufacturing. However, there is one major difference: as time progresses, the screen size for LCDs gradually gets larger, whereas semiconductor chips get smaller.
Since the 1990s, the cost of production for LCDs has
dropped substantially. This has been achieved by
increasing the number of panels per substrate. This
requires that the substrates be increased in size in order
to increase the number of LCD panels that are produced,
and hence, increased yield. As the need for larger LCD
panels has increased, so has the need for newer
generations of manufacturing equipment. Thus far, LCD manufacturing has gone through 10 such generations: for example, while the 7th generation can handle 40” screens, the 8th generation can produce 50” screens, and the 10th generation can reach as high as 65”.
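A minimal sketch of the "panels per substrate" economics described above. The substrate dimensions, the 16:9 panel geometry and the panels_per_substrate helper are illustrative assumptions (the substrate sizes are only roughly in line with commonly cited generation dimensions), not figures from this text.

import math

# How many rectangular 16:9 panels of a given diagonal fit on a mother-glass
# substrate, ignoring cutting margins (a simplification).
def panels_per_substrate(substrate_w_mm, substrate_h_mm, diagonal_in):
    d_mm = diagonal_in * 25.4
    panel_w = d_mm * 16 / math.hypot(16, 9)
    panel_h = d_mm * 9 / math.hypot(16, 9)
    a = (substrate_w_mm // panel_w) * (substrate_h_mm // panel_h)
    b = (substrate_w_mm // panel_h) * (substrate_h_mm // panel_w)
    return int(max(a, b))

# Assumed substrate sizes (approximate): Gen 7 ~ 1870x2200 mm, Gen 10 ~ 2850x3050 mm.
print(panels_per_substrate(1870, 2200, 40))   # 40" panels per Gen 7 sheet
print(panels_per_substrate(2850, 3050, 65))   # 65" panels per Gen 10 sheet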
NOW BACK TO THE CRT...
Still, throughout this time, the LCD was more costly to manufacture than the CRT — this was particularly the
case given the high defect rate during the manufacturing process.
For instance, in the mid-1990s, the cost of a 20" NEC LCD was
approximately US$8,000. Compare this with a comparable 20"
Sony CRT monitor at $2,300. Moreover, the CRT maintained its
market position through innovations of its own, such as HD,
increased screen size, flat face, superb display quality and an
unbeatable cost-performance ratio. This price differential gradually
eroded until the manufacturing cost of an LCD matched that of a
CRT in the mid-2000s.
...AND FORWARD TO THE FUTURE
The LCD has quickly gained dominance as the display
technology of choice over the past few years to the
point where it has become rare to find CRT monitors in
use anywhere. The new display technology that has
started to emerge on the scene is OLED (Organic Light
Emitting Diode).
Part 2A: Competing Technologies of the LCD in the 1980s
At the same time, a number of other display technologies entered the scene to compete with the LCD: electro-luminescent, ferroelectric and plasma.

Ferroelectric (FE): These also made use of liquid crystals, but in a chiral phase rather than a nematic one. First developed in 1986, this technology introduced higher resolution capabilities with improved brightness. It also allowed the liquid crystals to retain and reorient polarization longer after an electric current was applied. The refresh rate was also increased substantially. The drawbacks were lower reliability and colour gradation problems.
Electro-luminescent (EL): EL displays worked by using an electroluminescent material that emitted radiation in the form of light when an electric current was applied. While originally developed by Westinghouse in the mid-1960s, the company cancelled development in the 1970s. Japanese manufacturers continued its development and use in display devices. Compared with LCD technology, EL displays offered such qualities as longer life, wider viewing angle, higher contrast, faster response time and the ability to make larger screens. On the downside, the manufacturing cost was far higher than that of the LCD, and production was limited to only two companies: Planar Systems and Sharp.
Plasma: First developed in the early 1960s, a plasma display works by ionizing an inert gas to produce light. Still around today, plasma displays provide excellent uniformity and colour reproduction as well as a wide viewing angle and the capability for large-size production. On the other hand, they consume much more power, are heavier than LCDs, and carry a high risk of image burn-in. For the longest time, when it came to large-screen capability, plasma technology held a major cost advantage over LCDs. However, with improved manufacturing developments for LCDs, this is no longer the case.
Have you imagined a high-definition (HD) television half an inch thick, with a screen measuring 200 inches, consuming less power than most televisions currently on the
market, and that can be curled up when not in use? What if the windshield of your car could turn into a display panel, with information such as fuel level, engine temperature, speed and torque shown in one area and a road map in another? Or what if your clothing could act as a display? All of this may become possible with OLED (organic light-emitting diode) technology in the near future.
An organic light-emitting diode (OLED) display is a thin film made of organic molecules that emit light when an electric current is applied. OLEDs provide brighter, more vivid images while consuming less energy than LED and LCD display panels.
