
Microprocessor Systems

Ivy Magdali
Angelo Porcina
Rodolfo Luga II
Introduction
History of Computation
• The earliest known tool for use in
computation is the Sumerian abacus, which is
thought to have been invented
in Babylonia c. 2700–2300 BC.
• Originally it was used by drawing lines in
sand and placing pebbles on them. Abaci of a more
modern design are still used as calculation tools today.
The abacus was the earliest known computing device
and the most advanced system of calculation of its
time, preceding Greek methods by some 2,000 years.
• Around 200 BC the development of gears had
made it possible to create devices in which the
positions of wheels would correspond to
positions of astronomical objects. By about 100
AD Hero of Alexandria had described an
odometer-like device that could be driven
automatically and could effectively count in
digital form. But it was not until the 1600s that
mechanical devices for digital computation
appear to have actually been built.
• The Antikythera mechanism is believed to be the
earliest known mechanical analog computer. It
was designed to calculate astronomical positions.
It was discovered in 1901 in
the Antikythera wreck off the Greek island of
Antikythera, between Kythera and Crete, and has
been dated to circa 100 BC.
• Through the advancement and understanding of gears
and applying it to mathematical problems, the Step
reckoner (or Stepped reckoner) was
a digital mechanical calculator invented by the German
mathematician Gottfried Wilhelm Leibniz around 1672
and completed in 1694. It was the first calculator that
could perform all four arithmetic operations.
• As time progressed, the human need to
compute with ever larger quantities of numbers grew.
• Many printed tables of useful
computational information were produced, but
the historical difficulty of producing error-free
tables by teams of mathematicians
and human "computers" spurred Charles
Babbage's desire to build a mechanism to
automate the process. The result was a
machine called the “Difference Engine”.
• A difference engine, created by Charles
Babbage, is an automatic mechanical
calculator designed to tabulate polynomial
functions. Its name is derived from the method
of divided differences, a way to interpolate or
tabulate functions by using a small set
of polynomial coefficients.
• During the construction of the Difference
Engine, Charles Babbage imagined an even
more complex machine: “The Analytical
Engine”.
• Unlike the difference engine, the analytical
engine was a “general purpose computer”
that can be used in many computational
problems.
• Unfortunately, the engine was ahead of its
time and was never created, but this sparked
the idea of an “automatic computer”.
• One of the early computation devices to
use electronics was the
Electromechanical Punched Card
Tabulator by Herman Hollerith, which
combined mechanical and electronic
components. It was built to help
summarize information, particularly for
the US census mandated by the
Constitution, which required fast and
efficient computation.
• The discovery of vacuum tubes made possible
the invention of electronically powered
computers. The first large scale use of vacuum
tubes for computing was the Colossus MK 1
designed by Engr. Tommy Flowers.
• ENIAC – or Electronic Numerical Integrator
and Computer, designed by John Mauchly
and J. Presper Eckert – was the world’s first
truly general-purpose machine: a
programmable, electronic
computational device.
• Because vacuum tubes were too fragile, costly, and
unreliable to build computers from,
scientists had to come up with an alternative
electronic switch for computers,
and thus the transistor was invented.

• The transistor was invented at Bell Laboratories by
scientists John Bardeen, Walter Brattain and
William Shockley, and it would change the era
of modern computing.
• The first computer to use transistors
was the IBM 608, released in 1957.
• It contained 3,000 transistors and could
perform 4,500 additions every second.
Intel microprocessor Family
Intel 4004
• The first microprocessor sold by Intel was the four-bit
4004 in 1971. It was designed to work in conjunction
with three other microchips, the 4001 ROM, 4002
RAM, and the 4003 Shift Register. Whereas the 4004
itself performed calculations, those other components
were critical to making the processor function. The
4004 was mostly used inside of calculators and similar
devices, and it was not meant for use inside of
computers. Its max clock speed was 740 kHz.
Intel 8008 And 8080
• The 4004 made a name for Intel in the microprocessor
business, and to capitalize on the situation, Intel
introduced a new line of eight-bit processors. The 8008
came first in 1972, followed by the 8080 in 1974 and
the 8085 in 1975. Although the 8008 was the first
eight-bit processor produced by Intel, it is not as
notable as its predecessor or its successor, the 8080.
8086: The Beginning Of x86
• Intel's first 16-bit processor was the 8086, which
helped to boost performance considerably
compared to earlier designs. Not only was it
clocked higher than the budget-oriented 8088,
but it also employed a 16-bit external data bus
and a longer six-byte prefetch queue.
80186 And 80188
• Intel followed up the 8086 with several other
processors, all of which used a similar 16-bit
architecture. The first was the 80186, aimed at embedded
applications. To facilitate this, Intel integrated several
pieces of hardware typically found on the motherboard
into the CPU, including the clock generator, interrupt
controller, and timer.
80386: x86 Turns 32-bit
• Intel's first 32-bit x86 processor was the 80386, released in 1985.
One key advantage that this processor had was its 32-bit address
bus that allowed it to support up to 4GB of system memory.
Although this was far more than anyone was using at the time, RAM
limitations often hurt the performance of prior x86 and competing
processors. Unlike modern CPUs, at the time the 80386 was
released, more RAM almost always translated into a performance
increase.
P5: The First Pentium
• The Pentium emerged in 1993 as the first Intel x86
processor that didn't follow the 80x86 number system.
Internally, the Pentium used the P5 architecture, which was
Intel's first x86 superscalar design. Although the Pentium
was generally faster than the 80486 in every way, its most
prominent feature was a substantially improved FPU. The
original Pentium's FPU was more than ten times faster than
the 80486's aging unit.
P6: The Pentium Pro
• Intel planned to quickly follow the Pentium up with the
Pentium Pro based on its P6 architecture, but ran into
technical difficulties. The Pentium Pro was considerably
faster than the Pentium in 32-bit operations thanks to its
Out-of-Order (OoO) design. It featured a significantly
redesigned internal architecture that decoded instructions
into micro-ops, which were then executed on general-
purpose execution units.
P6: Pentium II
• Intel didn't give up on the P6 architecture, but instead
waited until 1997 when it released the Pentium II. The
Pentium II managed to overcome nearly all of the
negative aspects of the Pentium Pro. Its underlying
architecture was similar to the Pentium Pro, and it
continued to use a 14-stage pipeline with several
enhancements to the core to improve IPC.
P6: Pentium III And The Race To 1 GHz
• Intel planned to follow up the Pentium II with
a processor based on its Netburst
architecture, but it wasn't quite ready.
Instead, Intel pushed the P6 architecture out
again as the Pentium III.
P5 And P6: Celeron And Xeon
• Around the release of the Pentium II, Intel also introduced
its Celeron and Xeon product lines. These products used
the same core as the Pentium II or Pentium III, but with
varying amounts of cache. The first Celeron-branded
processors based on the Pentium II had no L2 cache at all,
which resulted in horrible performance. Later models
based on the Pentium III had half of the L2 cache disabled
compared to their Pentium III counterparts.
Reasons behind Microprocessor
Technology
What is a Microprocessor?
• Microprocessors are integrated electrical
circuits that are capable of carrying out the
instructions of a computer software
application or program. These electrical
circuits receive instructions from the
application or program, interpret these
instructions and execute an action or set of
actions, based upon what the instructions call
for.
Importance
• A device that uses a microprocessor is normally
capable of many functions, such as word
processing, calculation, and communication via
Internet or telephone. However, for the device to
work properly, the microprocessor itself has to
communicate with other parts of the device. For
example, a microprocessor would need to
communicate with the video display to control
the output data that a program may produce.
Therefore, a microprocessor acts as a device's
"brain" in that it transmits, receives and
interprets the data needed to operate the device.
Main Components of Microprocessor-
Based Systems
This is about as simple as a microprocessor gets.
This microprocessor has:

• An address bus (that may be 8, 16 or 32 bits wide)
that sends an address to memory
• A data bus (that may be 8, 16 or 32 bits wide)
that can send data to memory or receive data
from memory
• An RD (read) and WR (write) line to tell the
memory whether it wants to set or get the
addressed location
• A clock line that lets a clock pulse sequence
the processor
• A reset line that resets the program counter to
zero (or whatever) and restarts execution
• Let's assume that both the address and data
buses are 8 bits wide in this example.
Here are the components of this simple
microprocessor:
• Registers A, B and C are simply latches made
out of flip-flops.
• The address latch is just like registers A, B and
C.
• The program counter is a latch with the extra
ability to increment by 1 when told to do so, and
also to reset to zero when told to do so.
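The components listed above can be sketched as a toy simulation in Python. This is a minimal illustration only: the opcodes, the single register, and the 256-byte memory are invented for this sketch and do not correspond to any real processor.

```python
# Toy model of the simple microprocessor described above.
# 8-bit address and data buses -> at most 256 bytes of memory.
# Opcodes are invented for illustration: 0x01 LDA imm, 0x02 ADD imm, 0x00 HLT.

class ToyCPU:
    def __init__(self, program):
        self.memory = bytearray(256)
        self.memory[:len(program)] = program   # program loaded at address 0
        self.pc = 0        # program counter: the reset line would zero this
        self.a = 0         # register A, an 8-bit latch
        self.halted = False

    def step(self):
        opcode = self.memory[self.pc]          # address bus out, data bus in (RD)
        if opcode == 0x01:                     # LDA imm: A <- operand
            self.a = self.memory[self.pc + 1]
            self.pc += 2
        elif opcode == 0x02:                   # ADD imm: A <- (A + operand) mod 256
            self.a = (self.a + self.memory[self.pc + 1]) & 0xFF
            self.pc += 2
        else:                                  # HLT (or unknown opcode): stop
            self.halted = True

    def run(self):
        while not self.halted:
            self.step()                        # one step per clock pulse
        return self.a

cpu = ToyCPU([0x01, 0x05, 0x02, 0x03, 0x00])   # LDA 5; ADD 3; HLT
print(cpu.run())                               # → 8
```

Each call to `step()` plays the role of one clock pulse: the program counter supplies the address, the data bus returns the opcode, and the register latches the result.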
Microprocessor Instructions
• Even the incredibly simple microprocessor shown
in the previous example will have a fairly large set
of instructions that it can perform. The collection
of instructions is implemented as bit patterns,
each one of which has a different meaning when
loaded into the instruction register. Humans are
not particularly good at remembering bit
patterns, so a set of short words is defined to
represent the different bit patterns. This
collection of words is called the assembly
language of the processor. An assembler can
translate the words into their bit patterns very
easily, and then the output of the assembler is
placed in memory for the microprocessor to
execute.
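The idea of an assembler – translating short mnemonic words into the bit patterns a processor executes – can be illustrated in a few lines of Python. The mnemonics `LDA`, `ADD`, `HLT` and their opcode bytes are hypothetical, invented for this sketch.

```python
# Minimal assembler sketch: mnemonic words -> opcode bit patterns.
# The instruction set here is hypothetical, for illustration only.

OPCODES = {"LDA": 0x01, "ADD": 0x02, "HLT": 0x00}

def assemble(source):
    """Translate assembly text into the byte patterns a processor would execute."""
    program = []
    for line in source.strip().splitlines():
        parts = line.split()
        program.append(OPCODES[parts[0]])      # mnemonic -> opcode byte
        if len(parts) > 1:                     # optional immediate operand
            program.append(int(parts[1]) & 0xFF)
    return bytes(program)

machine_code = assemble("""
LDA 5
ADD 3
HLT
""")
print(machine_code.hex())    # → 0105020300
```

The output of an assembler like this is exactly what would be placed in memory for the microprocessor to execute.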
Functions of a Microprocessor
• Microcontrollers are used in automatically controlled
products and devices, such as automobile engine
control systems, implantable medical devices, remote
controls, office machines, appliances, power tools, toys
and other embedded systems. By reducing the size and
cost compared to a design that uses a separate
microprocessor, memory, and input/output devices,
microcontrollers make it economical to digitally control
even more devices and processes. Mixed signal
microcontrollers are common, integrating analog
components needed to control non-digital electronic
systems. In the context of the internet of things,
microcontrollers are an economical and popular means
of data collection, sensing and actuating the physical
world as edge devices.
Different Applications of
Microcontroller
• Consumer Electronics Products: Toys, Cameras, Robots,
Washing Machines, Microwave Ovens, etc.
• Instrumentation and Process Control: Oscilloscopes,
Multi-meter, Leakage Current Tester, Data Acquisition
and Control etc.
• Medical Instruments: ...
• Communication: ...
• Office Equipment: ...
• Multimedia Application: ...
• Automobile:
• Etc.
