
Processors:

Processors are probably the single most interesting pieces of hardware in our computers. They have a rich
and neat history, dating all the way back to 1971 and the first commercially available microprocessor,
the Intel 4004. As you can imagine, and have no doubt seen yourself, technology has improved by leaps
and bounds since then. We're going to trace the history of the processor, starting with the Intel 8086
family, the line IBM chose for the first PC, and follow its neat history from then on out.

INTEL 8086:

CPUs have gone through many changes in the years since Intel released the first one. IBM chose Intel's
8088 processor as the brains of the first PC, and that choice is what made Intel the perceived leader of
the CPU market. Intel remains the perceived leader of microprocessor development today. While newer
contenders have developed their own technologies for their own processors, Intel continues to be more
than a viable source of new technology in this market, with the ever-growing AMD nipping at its heels.

The first several generations of Intel processors took on "80" as the series name, which is why the
technical types refer to this family of chips as the 8088, 8086, and 80186, right on up to the 80486, or
simply the 486. These chips are considered the dinosaurs of the computer world. PCs based on these
processors are the kind that usually sit around in the garage or warehouse collecting dust. They are not
of much use anymore, but we geeks don't like throwing them out because they still work.

THE PENTIUM (1993):

By this time, the Intel 486 was entrenched in the market, and people were used to the traditional 80x86
naming scheme. Intel was busy working on its next generation of processor, but it was not to be called
the 80586: legal issues prevented Intel from trademarking the number 80586, so the company instead
named the processor the Pentium, a name it could easily trademark. Intel released the Pentium in 1993.
The original Pentium ran at 60 MHz and roughly 100 MIPS. Also called the "P5" or "P54", the chip
contained 3.1 million transistors and used the same 32-bit address bus as the 486, but a 64-bit external
data bus that could move data at roughly twice the speed of the 486. The Pentium family spans the
60/66/75/90/100/120/133/150/166/200 MHz clock speeds. The original 60/66 MHz versions used the
Socket 4 setup, while all of the remaining versions used Socket 7 boards. Some of the chips (75 MHz to
133 MHz) could operate in Socket 5 boards as well. The Pentium was compatible with all of the older
operating systems, including DOS, Windows 3.1, Unix, and OS/2.

THE PENTIUM PRO (1995-1999):

If the regular Pentium is an ape, this processor evolved into a human. The Pentium Pro (also called the
"P6" or "PPro") is a RISC chip with a 486 hardware emulator on it, running at 200 MHz or below. Several
techniques give this chip more performance than its predecessors. Increased speed is achieved by
dividing processing into more pipeline stages, so more work is done within each clock cycle. Three
instructions can be decoded in each clock cycle, as opposed to only two for the Pentium. In addition,
instruction decoding and execution are decoupled, meaning that instructions can still be executed if one
pipeline stalls (such as when one instruction is waiting for data from memory; the Pentium would stop
all processing at this point). Instructions are sometimes executed out of order, that is, not necessarily in
the order written in the program, but rather as their inputs become available, although they won't be
far out of sequence; just enough to make things run smoother. These improvements made the PPro a
chip optimized for higher-end desktop workstations and network servers.

PENTIUM MMX (1997):

Intel released many different flavors of the Pentium processor. One of the most improved was the
Pentium MMX, released in 1997. It was a move by Intel to improve the original Pentium and make it
better serve multimedia and performance needs. One of the key enhancements, and where the chip
gets its name, is the MMX instruction set. The 57 additional MMX instructions extended the normal
instruction set, streamlining certain key tasks so that the processor could do with one instruction what
would previously have taken several. It paid off, too. The Pentium MMX performed 10-20% faster with
standard software, and better still with software optimized for the MMX instructions. Many multimedia
applications and games that took advantage of MMX performed better, had higher frame rates, and so on.

MMX was not the only improvement in the Pentium MMX. The dual 8 KB caches of the Pentium were
doubled to 16 KB each. The chip also had improved dynamic branch prediction, a pipelined FPU, and an
additional instruction pipe for faster instruction processing. With these and other improvements, the
Pentium line was extended even longer, lasting until fairly recently and reaching 233 MHz. While new
PCs with this processor are all but non-existent, many older PCs are still using it and going strong.

PENTIUM II (1997):

Intel made some major changes to the processor scene with the release of the Pentium II. With both the
Pentium MMX and the Pentium Pro selling strongly, Intel wanted to bring the best of each into one chip.
As a result, the Pentium II is something like the child of a Pentium MMX mother and a Pentium Pro
father, though, as in real life, it doesn't necessarily combine the best of its parents. The Pentium II is
optimized for 32-bit applications. It also contains the MMX instruction set, which was almost a standard
by this time. The chip uses the dynamic execution technology of the Pentium Pro, allowing the
processor to predict coming instructions and accelerate the flow of work: it analyzes program
instructions and re-orders them into the schedule that will run quickest. The Pentium II has 32 KB of L1
cache (16 KB each for data and instructions) and 512 KB of L2 cache on the package. The L2 cache runs
at half the speed of the processor, not at full speed. Nonetheless, the fact that the L2 cache sits in the
chip package rather than on the motherboard boosts performance.

One of the most noticeable changes in this processor is the package style. Almost all of the Pentium-class
processors use the Socket 7 interface to the motherboard, and the Pentium Pro uses Socket 8. The
Pentium II, however, makes use of "Slot 1". The package type of the P2 is called Single Edge Contact
(SEC): the chip and L2 cache actually reside on a card which attaches to the motherboard via a slot,
much like an expansion card, and the entire package is surrounded by a plastic cartridge. In addition to
departing for Slot 1, Intel also patented the new interface, effectively barring competitors from making
chips for Slot 1 motherboards. This move, no doubt, demonstrates why Intel moved away from Socket 7
to begin with: they couldn't patent it.

The original Pentium II was code-named "Klamath". It ran on a paltry 66 MHz bus and ranged from
233 MHz to 300 MHz. In 1998, Intel did some slight re-working of the processor and released
"Deschutes". Built on a 0.25-micron process, it allowed a 100 MHz system bus. The L2 cache was still
separate from the actual processor core and still ran at only half speed; Intel would not rectify this until
the release of the Celeron A and the Pentium III. Deschutes ran from 333 MHz up to 450 MHz.

PENTIUM III (1999):

Intel released the Pentium III "Katmai" processor in February of 1999, running at 450 MHz on a 100 MHz
bus. Katmai introduced the SSE instruction set, basically an extension of MMX that again improved
performance in 3D applications designed to use the new ability. Also dubbed MMX2, SSE contained 70
new instructions and could perform four operations simultaneously. The original Pentium III was built
on a slightly improved P6 core, so the chip was well suited to multimedia applications. The chip saw
controversy, though, when Intel decided to include an integrated "processor serial number" (PSN) on
Katmai. The PSN was designed to be readable over a network, even the internet. The idea, as Intel saw
it, was to increase the level of security in online transactions. End users saw it differently: they saw it as
an invasion of privacy. After taking a PR black eye and getting pressure from its customers, Intel
eventually allowed the tag to be turned off in the BIOS. Katmai eventually reached 600 MHz, but Intel
quickly moved on to the Coppermine.

In April of 2000, Intel released the Pentium III Coppermine. While Katmai had 512 KB of L2 cache,
Coppermine had half that, at only 256 KB. But the cache was located directly on the CPU core rather
than on a daughtercard, as was typical of previous Slot 1 processors. This made the smaller cache a
non-issue, because performance actually benefited. Coppermine also moved to a 0.18-micron process
and the newer Single Edge Contact Cartridge 2 (SECC2) package, in which the surrounding cartridge
covers only one side of the package, unlike previous slotted processors. What's more, Intel again saw
the logic it had applied in taking the Celeron over to Socket 370, and eventually released versions of
Coppermine in socket format. Coppermine also supported a 133 MHz front-side bus. It proved to be a
performance chip, was (and still is) used in many PCs, and eventually passed 1 GHz.

PENTIUM 4 (2000):

While we have been talking about AMD's high-speed Athlon Thunderbirds and Palominos, Intel actually
beat AMD to the punch by releasing the Pentium 4 "Willamette" in November of 2000. The Pentium 4
was exactly what Intel needed to take back the torch from AMD. It is a truly new CPU architecture and
serves as the foundation for the technologies we will see over the next several years. The new NetBurst
architecture is designed with future speed increases in mind, meaning the P4 is not going to fade away
near the 1 GHz mark as quickly as the Pentium III did.

According to Intel, NetBurst is made up of four new technologies: Hyper-Pipelined Technology, the
Rapid Execution Engine, the Execution Trace Cache, and a 400 MHz system bus.

PENTIUM 4 PRESCOTT, CELERON D AND PENTIUM D (2004 - 2005):

The Pentium 4 Prescott was introduced in 2004 to mixed feelings. It was the first core to use the 90 nm
semiconductor manufacturing process. Many weren't happy with it, because the Prescott was
essentially a restructuring of the Pentium 4's microarchitecture; while that would normally be a good
thing, there weren't too many positives. Some programs were helped by the doubled cache and the
new SSE3 instruction set, but others suffered because of the longer instruction pipeline.

It's also worth noting that the Pentium 4 Prescott achieved some fairly high clock speeds, though not
nearly as high as Intel was hoping; one version of the Prescott reached 3.8 GHz. Eventually, Intel
released a version of the Prescott supporting Intel's 64-bit architecture, Intel 64. At first, these were
only sold to OEMs as the F-series, but Intel eventually renamed them the 5x1 series, which was sold to
consumers.

Intel introduced another version of the Pentium 4 Prescott: the Celeron D. A major difference is that the
Celeron D sported double the L1 and L2 cache of the previous Willamette- and Northwood-based
desktop Celerons. Not only that, it gained the SSE3 instruction set and was manufactured for Socket
478. Overall, the Celeron D was a major performance improvement over many of the previous
NetBurst-based Celerons. But despite performance improvements across the board, it had a huge
problem: excessive heat.

Eventually, Intel went on to refresh the Celeron D, this time with 64-bit architecture. Intel never built
these for Socket 478, though, only for the LGA 775 socket type.

Another chip Intel made was the Pentium D, which you can look at as the dual-core variant of the
Pentium 4 Prescott. You get all the benefits that an extra core brings, the most notable being that the
Pentium D could run multi-threaded applications on genuinely parallel hardware. There were a few
different generations of the Pentium D, each featuring small improvements over the last, but the series
was eventually retired in 2008. The Pentium D had a lot of pitfalls, including high power consumption
and the fact that in some models its two cores sat on two separate dies (more energy-efficient
dual-core CPUs put both cores on a single die).

The true, and overall better, successor was the Intel Core 2 brand, which enjoyed a lot of success.

INTEL CORE 2 (2006):


The Intel Core 2 is a brand that houses a variety of 64-bit x86-64 CPUs, including single-core, dual-core
and quad-core processors based on Intel's Core microarchitecture. The Core 2 brand encompassed a lot
of different CPUs, but to give you an idea, there was the Solo (a single-core CPU), the Duo (dual-core),
the Quad (quad-core) and, later on, the Extreme (a dual- or quad-core processor aimed at hardware
enthusiasts).

The Intel Core 2 line was where multi-core processors really came into their own. This was a necessary
route for Intel to take: a multi-core processor is essentially a single component with two or more
independent processing units, often referred to as cores. With multiple cores, Intel could increase the
overall speed of programs, opening the path to the more demanding programs we see today. That's not
to say Intel or AMD are responsible for today's demanding programs, but without their high-end
processors and breakthroughs in technology, we really wouldn't have hardware that can run those
programs.

Core 2-branded processors came with a lot of neat technology, including Intel's own virtualization
technology, 64-bit architecture, low power consumption, and SSE4 (Streaming SIMD Extensions 4, a
processor instruction set).

INTEL CORE I3, CORE I5, AND CORE I7 (2008 – PRESENT):

Truth be told, there's nothing more confusing than Intel's naming convention here: Core i3, Core i5 and
Core i7. What is that even supposed to mean? It's confusing, particularly to the lay person, but hopefully
I can explain the difference between the three tiers in plain language.

You can look at the Intel Core i3 as Intel's lowest tier in this line. With the Core i3, you get two cores (so,
dual-core), hyper-threading technology, a smaller cache and better power efficiency. This makes it cost
a whole lot less than a Core i5, but it performs worse than a Core i5 as well.

The Core i5 is a tad more confusing. In mobile applications, the Core i5 has two cores with
hyper-threading, while desktop variants have four cores (quad-core) but no hyper-threading. With it,
you get improved onboard graphics as well as Turbo Boost, a way to temporarily accelerate processor
performance when you need a little more heavy lifting.

And that brings us to the Core i7. All Core i7 processors feature the aforementioned hyper-threading
technology missing from the desktop Core i5. A Core i7 can have anywhere from two cores in a mobile
application (i.e. an ultrabook) all the way up to a whopping eight cores in a workstation, though in the
real world you'll most commonly see quad-core variants. Not only that, but the Core i7 can support as
few as two memory sticks and as many as eight.

The dual-core variants can have a TDP of 10 W, while the 8-core workstation variants can go all the way
up to a TDP of 130 W. And since the Core i7 is Intel's highest tier in this series, you can expect even
better onboard graphics, a more efficient and faster Turbo Boost, and a larger cache. That said, the Core
i7 is also the most expensive processor variant.

FIRST GENERATION (1942 - 1955)

The first computers used vacuum tubes for circuitry and magnetic drums for memory, and were often
enormous, taking up entire rooms. First generation computers relied on machine language to perform
operations, and they could only solve one problem at a time.

The Mark-I, EDSAC, EDVAC, UNIVAC-I and ENIAC are examples of first-generation computing devices.
They were very expensive to operate and, in addition to using a great deal of electricity, generated a lot
of heat, which was often the cause of malfunctions.

Relying on vacuum tubes to calculate and store information, these computers were also very hard to
maintain. First-generation computers also used punched cards to store symbolic programming
languages. Most people were only indirectly affected by this first generation of computing machines
and knew little of their existence.

IMPORTANT MACHINES:

Mark-I, EDSAC, EDVAC, UNIVAC-I and ENIAC

ADVANTAGES:

 After a long history of manual computation, first-generation computers could process tasks in
milliseconds.
 The hardware was programmed in machine language (a language close to what the machine
understands).
 Vacuum tube technology was very important, as it opened the gates to the world of digital
computing.

DISADVANTAGES:

 These machines were very big in size.
 Required a large amount of energy for processing.
 Very expensive.
 Generated heat and needed air conditioning.
 Not portable (could never be taken from one place to another).
 Compared with later-generation computers, they were slow.
 Not reliable.
 Continuous maintenance was required for proper processing.

SECOND GENERATION (1955 - 1964)

Transistors replaced vacuum tubes and ushered in the second generation of computers. A transistor is a
device composed of semiconductor material that amplifies a signal or opens or closes a circuit. Invented
in 1947 at Bell Labs, transistors have become the key ingredient of all digital circuits, including
computers. Today's latest microprocessors contain tens of millions of microscopic transistors.

Prior to the invention of transistors, digital circuits were composed of vacuum tubes, which had many
disadvantages. They were much larger, required more energy, dissipated more heat, and were more
prone to failures. It's safe to say that without the invention of transistors, computing as we know it
today would not be possible.

The transistor was invented in 1947 but did not see widespread use in computers until the late 1950s. The
transistor was far superior to the vacuum tube, allowing computers to become smaller, faster, cheaper,
more energy-efficient and more reliable than their first-generation predecessors. Though the transistor
still generated a great deal of heat that subjected the computer to damage, it was a vast improvement
over the vacuum tube. Second-generation computers still relied on punched cards for input and
printouts for output.

Second-generation computers moved from cryptic binary machine language to symbolic, or assembly,
languages, which allowed programmers to specify instructions in words. High-level programming
languages were also being developed at this time, such as early versions of COBOL and FORTRAN. These
were also the first computers that stored their instructions in memory, which moved from magnetic
drum to magnetic core technology. The first computers of this generation were developed for the
atomic energy industry.

IMPORTANT MACHINES:

IBM 7070 series, CDC 1604, IBM 1400 series.

ADVANTAGES:

 Compared with first-generation computers, less expensive and smaller in size.
 Faster in speed.
 Generated less heat than first-generation computers.
 Lower power consumption.
 Assembly language replaced machine language for programming, and high-level languages such as
COBOL and FORTRAN were introduced.
 Portable.

DISADVANTAGES:

 Maintenance of the machines was still required.
 Air conditioning was still required, as heat slowed processing.
 These computers were not used as personal systems.
 They were used mainly for commercial purposes.

THIRD GENERATION (1964 - 1975)

The development of the integrated circuit was the hallmark of the third generation of computers.
Transistors were miniaturized and placed on silicon chips, called integrated circuits, which drastically
increased the speed and efficiency of computers.
Instead of punched cards and printouts, users interacted with third generation computers through
keyboards and monitors and interfaced with an operating system, which allowed the device to run many
different applications at one time with a central program that monitored the memory. Computers for
the first time became accessible to a mass audience because they were smaller and cheaper than their
predecessors.

IMPORTANT MACHINES:

IBM System/360 & IBM 370, PDP-8, DEC, UNIVAC 1108, UNIVAC 9000.

ADVANTAGES:

 Smaller in size
 Lower cost than previous generations
 Low power consumption
 Easy to operate
 Portable
 Input devices such as the keyboard and mouse were introduced, making it easier for users to
interact with the computer
 External storage media such as floppy disks and tape were introduced

DISADVANTAGES:

 IC chips are still difficult to maintain
 Complex technology is needed to manufacture them

FOURTH GENERATION (1975 ONWARDS)

The Microprocessor brought the fourth generation of computers, as thousands of integrated circuits
were built onto a single silicon chip. What in the first generation filled an entire room could now fit in
the palm of the hand.

The Intel 4004 chip, developed in 1971, located all the components of the computer—from the central
processing unit and memory to input/output controls—on a single chip.
In 1981 IBM introduced its first computer for the home user, and in 1984 Apple introduced the
Macintosh. Microprocessors also moved out of the realm of desktop computers and into many areas of
life as more and more everyday products began to use them.

As these small computers became more powerful, they could be linked together to form networks,
which eventually led to the development of the Internet. Fourth-generation computers also saw the
development of GUIs, the mouse and handheld devices.

IMPORTANT MACHINES:

Machines based on Intel and AMD processors

ADVANTAGES:

 Smaller in size
 High processing speed
 Very reliable
 General-purpose
 More external storage media were introduced, such as CD-ROM and DVD-ROM
 GUIs were developed for interaction

FIFTH GENERATION (1980 ONWARDS)

Fifth generation computing devices, based on Artificial Intelligence, are still in development, though
there are some applications, such as voice recognition, that are being used today.

The use of parallel processing and superconductors is helping to make artificial intelligence a
reality. Quantum computation and molecular and nanotechnology will radically change the face of
computers in years to come.

The goal of fifth-generation computing is to develop devices that respond to natural language input and
are capable of learning and self-organization.

IMPORTANT MACHINES:

ULSI (ultra-large-scale integration) technology, artificial intelligence, etc.

PROPERTIES

 Largely program-independent.
 Capable of thinking and analysis on their own.
 Voice recognition & biometric devices.
 Self-organization and learning.
