OSUN STATE.
COURSE TITLE: COMPUTER TROUBLESHOOTING/HARDWARE II
COURSE CODE: COM 226
PROJECT TOPIC: C.P.U (CENTRAL PROCESSING UNIT)
GROUP: GROUP FOUR (4)
SUB-GROUP: GROUP TWO (2)
SUB-GROUP TOPIC: OPERATION OF THE COMPUTER C.P.U
SUB-GROUP MEMBERS:
ADEGBENRO FATAI WP/21/COM/002
ASHANIKE KEHINDE WP/21/COM/005
OYELEKE SHIMIYAT WP/21/COM/019
COURSE LECTURER
MRS. ADELEKE T.M.
TABLE OF CONTENTS
CHAPTER ONE
1.0 What is a C.P.U?
1.1 The Brain of the Computer
1.2 History of the C.P.U
1.3 What does a C.P.U actually do?
CHAPTER TWO
2.0 Components of C.P.U
2.1 Control Unit
2.2 Arithmetic Logic Unit (A.L.U)
2.3 Register
2.4 Cache
2.5 Buses
2.6 Clock
CHAPTER THREE
3.0 Characteristics of C.P.U
3.1 Features of C.P.U
3.2 Instruction Set
3.3 Types of Instruction Set
3.4 Importance of Instruction Set
3.5 How are Instruction Set Commands Used
CHAPTER FOUR
4.0 Clock speed
4.1 Functions of C.P.U in the computer
4.2 Conclusion
4.3 References
CHAPTER ONE
INTRODUCTION
1.0 HISTORY OF THE CPU
EDVAC is one of the first stored-program computers.
Early computers such as the ENIAC had to be physically rewired to perform
different tasks, which caused these machines to be called "fixed-program
computers". The "central processing unit" term has been in use since as early as
1955. Since the term "CPU" is generally defined as a device for software (computer
program) execution, the earliest devices that could rightly be called CPUs came
with the advent of the stored-program computer.
The idea of a stored-program computer had been already present in the design of J.
Presper Eckert and John William Mauchly's ENIAC, but was initially omitted so
that it could be finished sooner. On June 30, 1945, before ENIAC was made,
mathematician John von Neumann distributed a paper entitled First Draft of a
Report on the EDVAC. It was the outline of a stored-program computer that would
eventually be completed in August 1949. EDVAC was designed to perform a
certain number of instructions (or operations) of various types. Significantly, the
programs written for EDVAC were to be stored in high-speed computer
memory rather than specified by the physical wiring of the computer. This
overcame a severe limitation of ENIAC, which was the considerable time and
effort required to reconfigure the computer to perform a new task. With von
Neumann's design, the program that EDVAC ran could be changed simply by
changing the contents of the memory. EDVAC was not the first stored-program
computer; the Manchester Baby, which was a small-scale experimental stored-
program computer, ran its first program on 21 June 1948 and the Manchester Mark
1 ran its first program during the night of 16–17 June 1949.
Early CPUs were custom designs used as part of a larger and sometimes distinctive
computer. However, this method of designing custom CPUs for a particular
application has largely given way to the development of multi-purpose processors
produced in large quantities. This standardization began in the era of
discrete transistor mainframes and minicomputers, and has rapidly accelerated with
the popularization of the integrated circuit (IC). The IC has allowed increasingly
complex CPUs to be designed and manufactured to tolerances on the order
of nanometers. Both the miniaturization and standardization of CPUs have
increased the presence of digital devices in modern life far beyond the limited
application of dedicated computing machines. Modern microprocessors appear in
electronic devices ranging from automobiles to cell phones, and sometimes even in
toys.
While von Neumann is most often credited with the design of the stored-program
computer because of his design of EDVAC, and the design became known as
the von Neumann architecture, others before him, such as Konrad Zuse, had
suggested and implemented similar ideas. The so-called Harvard architecture of
the Harvard Mark I, which was completed before EDVAC, also used a stored-
program design using punched paper tape rather than electronic memory. The key
difference between the von Neumann and Harvard architectures is that the latter
separates the storage and treatment of CPU instructions and data, while the former
uses the same memory space for both. Most modern CPUs are primarily von
Neumann in design, but CPUs with the Harvard architecture are seen as well,
especially in embedded applications; for instance, the Atmel AVR microcontrollers
are Harvard-architecture processors.
Relays and vacuum tubes (thermionic tubes) were commonly used as switching
elements; a useful computer requires thousands or tens of thousands of switching
devices. The overall speed of a system is dependent on the speed of the
switches. Vacuum-tube computers such as EDVAC tended to average eight hours
between failures, whereas relay computers such as the slower but earlier Harvard
Mark I failed very rarely. In the end, tube-based CPUs became dominant because
the significant speed advantages afforded generally outweighed the reliability
problems. Most of these early synchronous CPUs ran at low clock rates compared
to modern microelectronic designs. Clock signal frequencies ranging from
100 kHz to 4 MHz were very common at this time, limited largely by the speed of
the switching devices they were built with.
1.3 WHAT DOES A CPU ACTUALLY DO?
The CPU is often described as the brain of the computer. Every single operation
that you perform on your computer is processed in the CPU. The performance of
your computer rests on simple mathematical operations, and the CPU is the device
that controls all of those operations.
Let's say we are using a calculator to add two numbers. You enter the numbers
using your keyboard. The keyboard controller turns all of that information into
binary code. Binary code consists of sequences of 0 and 1. This information is then
sent to the registers and then transferred to the CPU. The CPU has an integrated
ALU (arithmetic logic unit). The ALU is responsible for all mathematical and
logical operations.
Your request to add two numbers comes to the CPU and is transferred to the ALU.
The ALU adds the binary numbers and returns the answer to the CPU, which
transfers the answer to an output device.
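The calculator example above can be sketched in a few lines of Python. This is only an illustration of the idea (decimal inputs are encoded as fixed-width binary strings, the way a keyboard controller might, and added bit by bit as a simple ripple-carry adder would); it is not how any real CPU is implemented.

```python
# Sketch of the calculator example: encode two numbers as binary
# strings, then add them bit by bit with a carry, like a simple adder.

def to_binary(n, width=8):
    """Encode a non-negative integer as a fixed-width binary string."""
    return format(n, f"0{width}b")

def binary_add(a_bits, b_bits):
    """Add two equal-width binary strings using ripple-carry addition."""
    result, carry = [], 0
    for a, b in zip(reversed(a_bits), reversed(b_bits)):
        total = int(a) + int(b) + carry
        result.append(str(total % 2))   # sum bit
        carry = total // 2              # carry into the next position
    if carry:
        result.append("1")
    return "".join(reversed(result))

a, b = to_binary(12), to_binary(30)
print(a, "+", b, "=", binary_add(a, b))  # 00001100 + 00011110 = 00101010
print(int(binary_add(a, b), 2))          # 42
```

Converting the answer back to decimal (the last line) corresponds to the CPU transferring the result to an output device for display.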
Adding two numbers is a very simple example, but it illustrates the basic functions
of the CPU. Every single step you perform on your computer is in one way or
another connected to this central unit, so it is very important to keep your processor
in good condition. Overheating, especially, can cause your CPU to fail.
2.1 CONTROL UNIT
The control unit is the main component of a central processing unit (CPU) that
directs the processor's operations during the execution of a program. Its main
function is to fetch and execute instructions from the computer's memory. It
receives input instructions from the user and converts them into control signals,
which are then given to the CPU for further execution. The control unit is part of
the von Neumann architecture developed by John von Neumann, and it is
responsible for providing the timing and control signals that direct the execution
of a program by the CPU. In modern computers, it is included as an internal part
of the CPU.
[Figure: Control Unit Block Diagram]
There are two types of control unit:
Hardwired
Here, the signals are generated by special hardwired logic circuits. This type of
unit is difficult to modify, is quite expensive, and is not capable of handling
complex instructions. It is used by computers with RISC architecture.
Microprogrammed
In this type, the signals are generated using microinstructions that are stored in
the control memory. This type is easier to modify and less expensive, but slower
than the hardwired type. It is capable of handling complex instructions and is
used in devices with CISC architecture. The microprogrammed type is again of
two kinds:
Horizontal microprogrammed
The control signals are represented in a decoded binary format, using one bit per
control signal (1 bit/CS). This supports a high degree of parallelism and is mostly
used in parallel processing applications.
Vertical microprogrammed
The control signals are represented in an encoded binary format. If there are N
control signals, then the number of bits required will be log2(N). The degree of
parallelism is low, and the speed of this unit is comparatively slower.
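The difference between the two encodings can be made concrete with a small sketch. The control-signal names below are invented for illustration; the point is only the width arithmetic: horizontal encoding needs one bit per signal, vertical encoding needs log2(N) bits plus a decoder.

```python
import math

# Illustrative control signals (names are made up for this example).
signals = ["MEM_READ", "MEM_WRITE", "ALU_ADD", "ALU_SUB",
           "REG_LOAD", "REG_STORE", "PC_INC", "IR_LOAD"]

# Horizontal: one bit per control signal (1 bit/CS) -> wide words,
# but several signals can be asserted at once (high parallelism).
horizontal_width = len(signals)                         # 8 bits

# Vertical: signals are encoded, so only log2(N) bits are needed,
# but a decoder is required and only one signal fires per field.
vertical_width = math.ceil(math.log2(len(signals)))     # 3 bits

print(horizontal_width, vertical_width)  # 8 3

# A horizontal control word asserting MEM_READ and PC_INC together:
word = "".join("1" if s in ("MEM_READ", "PC_INC") else "0"
               for s in signals)
print(word)  # 10000010
```

The last line shows why horizontal encoding supports parallelism: two signals are active in the same control word, which a vertical (encoded) field cannot express.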
2.2 ARITHMETIC LOGIC UNIT (ALU)
The Arithmetic and Logic Unit is responsible for arithmetic and logical
calculations as well as decision-making in the system. It is also known as the
mathematical brain of the computer. The ALU makes use of registers for its
calculations: it takes input from input registers, performs operations on the data,
and stores the output in an output register.
An arithmetic logic unit (ALU) is a major component of the central processing
unit of a computer system. It does all processes related to arithmetic and logic
operations that need to be done on instruction words. In some microprocessor
architectures, the ALU is divided into the arithmetic unit (AU) and the logic unit
(LU).
An ALU can be designed by engineers to calculate many different operations.
When the operations become more and more complex, then the ALU will also
become more and more expensive and also takes up more space in the CPU and
dissipates more heat. That is why engineers make the ALU powerful enough to
ensure that the CPU is also powerful and fast, but not so complex as to become
prohibitive in terms of cost and other disadvantages.
ALU is also known as an Integer Unit (IU). The arithmetic logic unit is that part of
the CPU that handles all the calculations the CPU may need. Most of these
operations are logical in nature. Depending on how the ALU is designed, it can
make the CPU more powerful, but it also consumes more energy and creates more
heat. Therefore, there must be a balance between how powerful and complex the
ALU is and how expensive the whole unit becomes. This is why faster CPUs are
more expensive, consume more power and dissipate more heat.
The different operations carried out by the ALU can be categorized as follows:
Logical operations: These include operations like AND, OR, NOT, XOR,
NOR, NAND, etc.
Bit-shifting operations: These shift the positions of the bits by a certain
number of places to the right or left, which is equivalent to multiplying or
dividing by powers of two.
Arithmetic operations: These refer to bit addition and subtraction. Although
multiplication and division are sometimes implemented, these operations are
more expensive to build in hardware. Multiplication and division can also be
done by repeated addition and subtraction, respectively.
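The three categories above can be summarized in a minimal ALU sketch. The operation names and the 8-bit word size are chosen for illustration; a hardware ALU works on fixed-width binary words, which the `& 0xFF` masking imitates.

```python
# Minimal 8-bit ALU sketch: logical operations, bit shifts,
# and arithmetic, as categorized in the text above.

def alu(op, a, b=0):
    if op == "AND":  return a & b
    if op == "OR":   return a | b
    if op == "XOR":  return a ^ b
    if op == "NOT":  return ~a & 0xFF        # 8-bit one's complement
    if op == "SHL":  return (a << b) & 0xFF  # shift left = multiply by 2**b
    if op == "SHR":  return a >> b           # shift right = divide by 2**b
    if op == "ADD":  return (a + b) & 0xFF   # result wraps at 8 bits
    if op == "SUB":  return (a - b) & 0xFF
    raise ValueError(f"unknown operation: {op}")

print(alu("AND", 0b1100, 0b1010))  # 8 (0b1000)
print(alu("SHL", 3, 2))            # 12: shifting left by 2 multiplies by 4
print(alu("ADD", 200, 100))        # 44: 300 wraps around in 8 bits
```

The last call shows why carry-out matters: 200 + 100 does not fit in 8 bits, so the result wraps and a real ALU would raise its carry flag.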
Functions of an ALU
The arithmetic logic unit performs numerous functions.
1. Basic mathematical operations
It executes arithmetic operations such as addition, subtraction, and multiplication
on two operands, X and Y, with carry-in and carry-out options; a carry-out occurs
when the result cannot fit in the available bit width. Similarly, it can subtract Y
from X or vice versa, producing the difference along with the carry-in or carry-out.
2. Advanced mathematical operations
The ALU can also perform advanced operations like increments and decrements.
When X or Y is raised by 1, and the output reflects the new value, this is an
increment. Similarly, decrement occurs when X or Y is lowered by 1, with the
output representing the new value.
3. Logic operations
It executes logical operations such as AND, OR, XOR, NOT, etc. These are
essentially instructions or rules that help the ALU determine whether something is
true or false given a set of conditions. AND and OR functions allow the arithmetic
logic unit to understand how two conditions work together. It also plays an
important role in logic calculations, such as fuzzy logic.
4. Bit shift operations
This function relates to ALU shift operations, whereby one can shift the X (or Y)
operand left or right and output the shifted operand. Typically, simple ALUs can
only move the operand by a one-bit position. Complex ALUs use barrel shifters,
which enable processors to move the operand by a definite number of bits in a
single operation.
5. Data checks through special values
ALUs may also provide distinct outputs. They generate status signals that are
frequently used to control program branching. Traditional ones include Z (zero),
C (carry), N (negative), and V (overflow). These values can provide a wealth of
information. For instance, if the result of subtracting two integers is 0, the two
must be identical. The same applies to greater-than and less-than comparisons.
6. Transferring data to and from registers
The arithmetic logic unit accepts inputs from the accumulator or a temporary
register. The result is stored back in the accumulator or temporary register.
Lastly, it delivers outcome states to the status (flag) register.
7. Specially programmed operations
Engineers can also program the ALU to perform any operation of their choice,
such as in the case of supercomputers. However, as the complexity of the
operations increases, the ALU becomes much more expensive as it generates more
heat and occupies more space on the CPU. Therefore, there needs to be a balance
between the complexity and strength of ALU and its cost. The fundamental reason
faster CPUs are much more expensive is that their ALUs use more energy and
generate more heat.
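The status flags described in point 5 can be sketched for an 8-bit subtraction. The flag formulas below follow the common textbook definitions (Z, C, N, V); the function name and bit width are chosen for the example.

```python
# Compute Z (zero), C (carry/borrow), N (negative), and V (signed
# overflow) flags for an 8-bit subtraction, as in point 5 above.

def sub_with_flags(a, b, width=8):
    mask = (1 << width) - 1
    sign = 1 << (width - 1)
    result = (a - b) & mask
    flags = {
        "Z": result == 0,          # zero: the operands were identical
        "C": a < b,                # borrow occurred during subtraction
        "N": bool(result & sign),  # sign bit of the result is set
        # V: operand signs differ AND the result's sign differs from a's
        "V": bool(((a ^ b) & (a ^ result)) & sign),
    }
    return result, flags

result, flags = sub_with_flags(5, 5)
print(result, flags["Z"])      # 0 True: equal operands set the zero flag
result, flags = sub_with_flags(3, 7)
print(flags["C"], flags["N"])  # True True: 3 < 7 sets borrow and negative
```

This is exactly how a subtraction doubles as a comparison: the program never sees the difference itself, only the flags that a later branch instruction tests.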
2.3 REGISTERS
Registers are small, fast storage locations inside the processor used to hold
instructions and data temporarily so they are available to the processor when
needed. They are also known as processor registers, as they play an important role
in the processing of data. A register can store data in the form of a memory
address: after the instruction at that memory address has been processed, it stores
the memory address of the next instruction. There are various kinds of registers
that perform different functions.
Types of Registers
A processor often contains several kinds of registers, which can be classified
according to the types of values they can store or the instructions that operate on
them. Common examples include the program counter (PC), which holds the
address of the next instruction; the instruction register (IR), which holds the
current instruction; the memory address register (MAR) and memory data register
(MDR), which hold addresses and data for memory access; and the accumulator
and general-purpose registers, which hold operands and intermediate results.
2.4 CACHE
The cache is a type of random access memory that temporarily stores small
amounts of data and instructions that can be reused when required. It reduces the
time needed to fetch instructions: instead of being fetched from RAM, they can be
accessed directly from the cache in a small amount of time.
Functions of Cache:
They reduce the amount of time needed to fetch and execute instructions.
They store data temporarily for later use.
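The speed benefit of a cache can be illustrated with a toy sketch: a dictionary acts as the cache in front of a deliberately slow "main memory" lookup. The memory contents and the artificial delay are invented for the example.

```python
import time

# Toy cache in front of a slow "main memory": the first access to an
# address pays the slow fetch; repeat accesses are served instantly.

MAIN_MEMORY = {addr: addr * 2 for addr in range(256)}  # fake RAM contents
cache = {}

def fetch(addr):
    """Return the data at addr, caching it for later reuse."""
    if addr in cache:
        return cache[addr]           # fast path: cache hit
    time.sleep(0.01)                 # simulate a slow RAM access
    cache[addr] = MAIN_MEMORY[addr]  # store the value for next time
    return cache[addr]

start = time.perf_counter()
fetch(42)                            # miss: goes to "RAM"
first = time.perf_counter() - start

start = time.perf_counter()
fetch(42)                            # hit: served from the cache
second = time.perf_counter() - start
print(second < first)                # True
```

Real caches are much more involved (limited size, eviction policies, multiple levels), but the hit/miss behavior is the same idea.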
2.5 BUSES
A bus is a link between the different components of the computer system and the
processor. Buses are used to send signals and data from the processor to different
devices and vice versa. There are three types of buses: the address bus, which is
used to send memory addresses from the processor to other components; the data
bus, which is used to send actual data from the processor to the components; and
the control bus, which is used to send control signals from the processor to other
devices.
Functions of Bus:
They carry memory addresses, data, and control signals between the processor
and the other components.
They allow the components of the computer system to communicate with one
another.
2.6 CLOCK
As the name suggests, the clock controls the timing and speed of the functions of
different components of the CPU. It sends out electrical signals which regulate the
timing and speed of the functions.
Functions of Clock:
It synchronizes the operations of the different components of the CPU.
It determines the speed at which instructions are executed.
CHAPTER THREE
3.0 CHARACTERISTICS OF CPU
Model and manufacturer:
The model and manufacturer of a processor are its most distinctive elements
(AMD or Intel). Even though CPUs from the two companies have similar features
and performance, an AMD processor cannot be placed in an Intel-compatible
motherboard and vice versa.
Socket type:
The socket that a CPU is meant to fit into is another differentiating characteristic.
If you want to replace a CPU on a Socket 478 motherboard, for example, you must
buy a processor designed for that socket.
Host-bus (front-side bus) speed:
The host-bus speed, also known as the front-side bus speed, FSB speed, or simply
FSB, specifies the data transmission rate between the CPU and the chipset. Even
when running at the same clock speed, a processor with a faster host-bus speed
delivers greater performance.
Cache size:
Processors use two types of cache memory to improve speed by buffering slow
transfers between the CPU and main memory. Level 1 (L1) cache size is a CPU
architectural feature that cannot be changed without redesigning the chip.
However, because Level 2 (L2) cache is located outside of the CPU core,
processor manufacturers can offer the same processor with different L2 cache
sizes.
2. Cores in CPU:
Nowadays, processors are designed with multiple cores. These cores within a
CPU are independent components used for parallel processing to increase the
overall efficiency of the computer system for workload management. Each core is
as capable as the others and has its own cache memory, but it can communicate
with the other CPU cores when required.
3. Speed of CPU:
The speed of a CPU, commonly called its clock speed and measured in hertz
(typically gigahertz today), indicates how many clock cycles the processor
completes per second; a higher clock speed generally allows more instructions to
be executed per second.
4. Multithreading in CPU:
All new-generation processors support parallel processing through multithreading.
In multithreading, each physical core of a CPU contains two logical cores that
work in parallel. This speeds up processing by increasing the number of cores
available to the workload. Multithreaded CPUs are commonly used in virtualized
environments, where administrators assign dedicated workloads to different
logical cores.
5. Compatibility of CPU:
A processor should support memory modules of different types, like DDR1,
DDR2, and DDR3, and it should be compatible with motherboards designed by
different companies. Manufacturers design motherboards and memory modules
with the processor's compatibility in mind.
6. Bandwidth of CPU:
CPUs need circuitry to communicate with input/output devices and memory. The
PCI slots on the motherboard communicate with PCI cards, USB controllers
communicate with USB devices such as mice, keyboards, and printers, and the
memory controller communicates with the main memory. The speed at which this
communication takes place is known as bandwidth, which differs from CPU to
CPU. Multi-core processors have higher bandwidth than single-core processors.
AMD processors generally have higher bandwidth than Intel processors because
they have built-in memory and I/O controllers.
3.2 INSTRUCTION SET
All CPUs have instruction sets that enable commands directing the CPU to switch
the relevant transistors. The instructions tell the CPU to perform tasks. Some
instructions are simple read, write, and move commands that direct data to
different hardware elements.
Instructions are made up of a specific number of bits. For instance, a CPU's
instructions might be 8 bits long, where the first 4 bits make up the operation code
(opcode) that tells the computer what to do, and the remaining 4 bits are the
operand, which tells the computer what data should be used.
The length of an instruction can vary from as few as 4 bits to many hundreds.
Different instructions in some instruction set architectures (ISAs) have different
lengths, while other ISAs have fixed-length instructions.
An instruction set architecture defines the following elements:
instructions
data types
processor registers
main memory hardware
input/output model
addressing modes
3.5 HOW ARE INSTRUCTION SET COMMANDS USED?
The following are three main ways instruction set commands are used:
1. Data handling and memory management. Instruction set commands are used
when setting a register to a specific value, copying data from memory to a
register or vice versa, and reading and writing data.
2. Arithmetic and logic operations and activities. These commands
include add, subtract, multiply, divide and compare, which examines values in
two registers to see if one is greater or less than the other.
3. Control flow activities. One example is branch, which instructs the system to
go to another location and execute commands there. Another is jump, which
moves to a specific memory location or address.
CHAPTER FOUR
4.1 FUNCTIONS OF THE CPU IN THE COMPUTER
1. Fetching instructions
The CPU retrieves instructions from memory. Each instruction is a small piece
of code that tells the CPU what operation to perform. The instructions are stored
in memory in sequential order, and the CPU reads them one by one.
2. Decoding instructions
Once an instruction has been fetched from memory, the CPU decodes it. This
involves analyzing the instruction to determine what operation needs to be
performed and what data is needed for the operation.
3. Executing instructions
After an instruction has been decoded, the CPU executes it. This involves
performing the operation specified by the instruction, using the data stored in the
registers or memory.
4. Managing instruction execution order
The CPU manages the order in which instructions are executed. This is done using
a program counter, which keeps track of the address of the next instruction to be
executed.
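The fetch-decode-execute cycle and the program counter described above can be sketched as a toy loop. The three-tuple "memory" and the opcodes LOAD/ADD/HALT are invented for the sketch; a real CPU fetches and decodes binary words.

```python
# Toy fetch-decode-execute loop showing the program counter at work.

memory = [
    ("LOAD", 10),   # put 10 in the accumulator
    ("ADD", 32),    # add 32 to it
    ("HALT", 0),    # stop the machine
]

pc = 0              # program counter: address of the next instruction
acc = 0             # accumulator register
running = True

while running:
    opcode, operand = memory[pc]   # fetch (decoding is trivial here)
    pc += 1                        # advance to the next instruction
    if opcode == "LOAD":           # execute the decoded operation
        acc = operand
    elif opcode == "ADD":
        acc += operand
    elif opcode == "HALT":
        running = False

print(acc)  # 42
```

Note that the program counter is incremented before the instruction executes; this is what lets a branch or jump instruction overwrite `pc` to change the flow of control.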
5. Controlling data flow
The CPU controls the flow of data between the computer and its peripherals. This
involves sending and receiving data to and from input/output devices such as
keyboards, mice, and printers.
6. Managing interrupts
The CPU handles interrupts from the system or from peripherals. An interrupt is
a signal that the CPU receives from a peripheral device, such as a keyboard or
mouse, that requires the CPU’s attention. When an interrupt occurs, the CPU stops
executing its current instructions and handles the interrupt before resuming its
normal operation.
7. Performing arithmetic and logic operations
The CPU performs mathematical and logical operations on data. This includes
operations such as addition, subtraction, multiplication, division, and comparison
of values.
8. Managing memory
The CPU manages the allocation and use of memory by the computer system. This
involves keeping track of which areas of memory are being used and which areas
are available for use.
9. Managing system resources
The CPU manages the use of system resources, such as the system clock and other
hardware components. The CPU ensures that these resources are used efficiently
and that they are available when needed.
10. Communicating with other components
The CPU communicates with other components in the computer system, such as
the memory, input/output devices, and other processors. This involves sending and
receiving data and instructions to and from these components.
In addition to these functions, the CPU also performs a variety of other tasks, such
as managing power consumption, handling errors, and supporting virtual memory.
4.2 CONCLUSION
Overall, the CPU is the most important component of a computer system, as it
performs the majority of processing tasks required by the system. Its functions are
critical to the operation of the computer, and its performance can have a significant
impact on the overall performance of the system. As technology continues to
advance, CPUs are becoming faster and more powerful, allowing for even more
complex tasks to be performed by computer systems.
4.3 REFERENCES
www.elprocus.com
www.thecrazyprogrammer.com
www.spiceworks.com