A computer consists of three basic units:
1. Input/Output (I/O) Unit
2. Central Processing Unit (CPU)
3. Memory Unit
Input Unit
The computer accepts coded information from the user through the input unit. An input device is used to give the required information to the computer, e.g., keyboard, mouse, etc.
Output Unit
The output unit sends the processed results to the user. It is mainly used to display the desired results to the user as per the input instructions, e.g., video monitor, printer, plotter, etc.
The central processing unit consists of a set of registers, arithmetic circuits and control circuits, which together interpret and execute machine-language instructions. The primary functions of the CPU are:
1. The CPU transfers instructions and input data from main memory to registers, i.e., its internal memory.
2. The CPU executes the instructions in the stored sequence.
3. When necessary, the CPU transfers output data from registers to main memory.
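The three functions above can be sketched as a simple fetch-execute loop. This is an illustrative simulation only; the instruction names (LOAD, ADD, STORE, HALT) and the memory layout are hypothetical, not a real instruction set.

```python
# Hypothetical program: load two values from main memory into registers,
# add them register-to-register, and store the result back to memory.
memory = {0: ("LOAD", "R0", 100),    # R0 <- memory[100]
          1: ("LOAD", "R1", 101),    # R1 <- memory[101]
          2: ("ADD", "R0", "R1"),    # R0 <- R0 + R1
          3: ("STORE", "R0", 102),   # memory[102] <- R0
          4: ("HALT",),
          100: 7, 101: 35}           # input data

registers = {"R0": 0, "R1": 0}       # the CPU's internal memory
pc = 0                               # program counter: stored sequence

while True:
    op, *args = memory[pc]           # 1. fetch instruction from main memory
    pc += 1
    if op == "LOAD":                 # transfer data from memory to register
        registers[args[0]] = memory[args[1]]
    elif op == "ADD":                # 2. execute in the stored sequence
        registers[args[0]] += registers[args[1]]
    elif op == "STORE":              # 3. transfer output data back to memory
        memory[args[1]] = registers[args[0]]
    elif op == "HALT":
        break

print(memory[102])  # 42
```

The loop mirrors the numbered list: data moves memory-to-register, computation happens inside the CPU, and results move register-to-memory.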
The Central Processing Unit (CPU) is often called the brain of the computer. The CPU is fabricated as a single Integrated Circuit (IC) chip and is also known as a microprocessor. The CPU controls all the internal and external devices and performs arithmetic and logic operations. The CPU consists of three main subsystems: the Arithmetic Logic Unit (ALU), the Control Unit (CU) and registers.
Arithmetic Logic Unit (ALU)
The arithmetic logic unit contains the electronic circuitry that executes all arithmetic and logical operations on the available data. It performs all arithmetic calculations (addition, subtraction, multiplication and division) and logical operations (<, >, =, AND, OR, etc.). The logical unit performs comparisons of numbers, letters and special characters. The ALU uses registers to hold the data that is being processed.
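The ALU's role of selecting one arithmetic or logical operation on two operands can be sketched as follows. The operation names are chosen to mirror the list above and are purely illustrative.

```python
# Minimal ALU sketch: one operation code selects what to do with two operands.
def alu(op, a, b):
    ops = {
        "ADD": lambda: a + b, "SUB": lambda: a - b,    # arithmetic
        "MUL": lambda: a * b, "DIV": lambda: a // b,   # integer division for simplicity
        "LT": lambda: a < b, "GT": lambda: a > b,      # comparisons
        "EQ": lambda: a == b,
        "AND": lambda: a & b, "OR": lambda: a | b,     # bitwise logic
    }
    return ops[op]()

print(alu("ADD", 6, 7))            # 13
print(alu("LT", 6, 7))             # True
print(alu("AND", 0b1100, 0b1010))  # 8 (binary 1000)
```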
Registers
Registers are special-purpose, high-speed temporary memory units. Registers are not referenced by their addresses, but are directly accessed and manipulated by the CPU during execution. Essentially, they hold the information that the CPU is currently working on. Registers store data, instructions, addresses and intermediate results of processing.
Control Unit (CU)
The control unit coordinates the input and output devices of a computer. It directs the computer to carry out stored program instructions by communicating with the ALU and the registers, and it organises the processing of data and instructions.
To maintain the proper sequence of processing, the control unit uses clock inputs. The basic function of the control unit is to fetch the instructions stored in memory and accordingly generate control signals for the devices involved.
Memory Unit
Memory is that part of the computer, WHich holds data and instructions, Memory is an integral
component of the CPU. The memory unit consists of Primary memory and Secondary.
Primary Memory
Primary memory, or main memory, is used to store data and instructions during the execution of a program. Primary memory is of two types: Random Access Memory (RAM) and Read Only Memory (ROM).
Random Access Memory (RAM) It directly provides the required information to the processor. RAM is a volatile memory that provides temporary storage for data and instructions. RAM is classified into two categories:
1. Static Random Access Memory (SRAM)
2. Dynamic Random Access Memory (DRAM)
Read Only Memory (ROM) It is used for storing standard processing programs that permanently reside in the computer. Generally, designers program ROM chips at the time of manufacturing. ROM is a non-volatile memory; it can only be read, not written. ROM is classified into three categories:
1. Programmable ROM (PROM)
2. Erasable Programmable ROM (EPROM)
3. Electrically Erasable Programmable ROM (EEPROM)
Secondary Memory
Secondary memory, also known as secondary storage or auxiliary memory, is used to store data and instructions permanently, e.g., hard disks, CDs, DVDs, etc.
Microprocessor
The microprocessor is the controlling element in a computer system and is sometimes referred to as the chip. The microprocessor is the main hardware that drives the computer. It is a single Integrated Circuit (IC) chip, mounted on a Printed Circuit Board (PCB), which is used in all electronic systems such as computers, calculators, digital systems, etc. The speed of the CPU depends upon the type of microprocessor used.
The Intel 4004 was the first microprocessor to contain all of the components of a CPU on a single chip, with a 4-bit bus width.
Some popular microprocessors are the Intel Pentium IV, Dual Core, etc.
Parallel Processing
Parallel processing is a term used to denote a large class of techniques that are used to provide
simultaneous data-processing tasks for the purpose of increasing the computational speed of a
computer system.
The purpose of parallel processing is to speed up the computer processing capability and increase its
throughput, that is, the amount of processing that can be accomplished during a given interval of time.
The amount of hardware increases with parallel processing, and with it, the cost of the system
increases.
Parallel processing can be viewed from various levels of complexity.
o At the lowest level, we distinguish between parallel and serial operations by the type of registers used, e.g. shift registers and registers with parallel load.
o At a higher level, parallelism can be achieved by having a multiplicity of functional units that perform identical or different operations simultaneously. Fig. 4-5 shows one possible way of separating the execution unit into eight functional units operating in parallel.
o A multifunctional organization is usually associated with a complex control unit to coordinate all the activities among the various components.
SISD (Single Instruction Stream, Single Data Stream)
Represents the organization of a single computer containing a control unit, a processor unit, and a
memory unit.
Instructions are executed sequentially and the system may or may not have internal parallel
processing capabilities.
Parallel processing may be achieved by means of multiple functional units or by pipeline processing.
SIMD (Single Instruction Stream, Multiple Data Stream)
Represents an organization that includes many processing units under the supervision of a common
control unit.
All processors receive the same instruction from the control unit but operate on different items of
data.
The shared memory unit must contain multiple modules so that it can communicate with all the processors simultaneously.
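The SIMD idea can be mimicked in plain Python: one instruction, broadcast by a common control unit, is applied by every processing element to its own data item. This is only an analogy; the function names are hypothetical.

```python
# One data item per (simulated) processing element.
data = [1, 2, 3, 4, 5, 6, 7, 8]

def broadcast(instruction, data):
    # Every "processor" executes the same instruction on a different item,
    # as the SIMD control unit would broadcast it to all processing units.
    return [instruction(x) for x in data]

squared = broadcast(lambda x: x * x, data)
print(squared)  # [1, 4, 9, 16, 25, 36, 49, 64]
```

A real SIMD machine performs all eight multiplications in the same clock cycle; the list comprehension here only models the "same instruction, different data" relationship, not the simultaneity.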
MISD (Multiple Instruction Stream, Single Data Stream)
The MISD structure is only of theoretical interest, since no practical system has been constructed using this organization.
MIMD (Multiple Instruction Stream, Multiple Data Stream)
The MIMD organization refers to a computer system capable of processing several programs at the same time, e.g. multiprocessor and multicomputer systems.
Flynn’s classification depends on the distinction between the performance of the control unit and the
data-processing unit.
It emphasizes the behavioral characteristics of the computer system rather than its operational and
structural interconnections.
One type of parallel processing that does not fit Flynn’s classification is pipelining.
o Vector processing
o Array processing
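Pipelining, mentioned above, overlaps the stages of successive instructions: while one instruction executes, the next is being decoded and a third fetched. The sketch below simulates a hypothetical three-stage pipeline to show why n instructions finish in n + (stages - 1) cycles rather than n * stages.

```python
from collections import deque

instructions = ["I1", "I2", "I3", "I4"]
stages = ["FETCH", "DECODE", "EXECUTE"]
# A fixed-length deque: pushing a new instruction into FETCH automatically
# advances every earlier instruction one stage (and retires the oldest).
pipeline = deque([None] * len(stages), maxlen=len(stages))

clock = 0
timeline = []
stream = iter(instructions)
while True:
    pipeline.appendleft(next(stream, None))  # next instruction enters FETCH
    if all(slot is None for slot in pipeline):
        break                                # pipeline has fully drained
    clock += 1
    timeline.append({s: i for s, i in zip(stages, pipeline) if i})

print(clock)  # 6 cycles for 4 instructions in a 3-stage pipeline (4 + 3 - 1)
```

Without overlapping, the same four instructions would need 4 * 3 = 12 cycles; the pipeline's throughput gain grows with the length of the instruction stream.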
Instruction set – determines the way that machine language programs are constructed.
Early computers – simple and small instruction set, need to minimize the hardware used.
Many computers – more than 100 or 200 instructions, variety of data types and large number of
addressing modes.
The trend toward computer hardware complexity was influenced by various factors:
o Adding instructions that facilitate the translation from high-level language into machine
language programs
o Striving to develop machines that move functions from software implementation into
hardware implementation
A computer with a large number of instructions is classified as a complex instruction set computer
(CISC).
One reason for the trend to provide a complex instruction set is the desire to simplify the compilation
and improve the overall computer performance.
The essential goal of CISC architecture is to attempt to provide a single machine instruction for each
statement that is written in a high-level language.
Examples of CISC architecture are the DEC VAX computer and the IBM 370 computer. Others are the 8085, 8086, 80x86, etc.
Some instructions perform specialized tasks and are used infrequently.
Use of microprogram – a special program in the control memory of the computer performs the timing and sequencing of the microoperations – fetch, decode, execute, etc.
No large number of registers – a single general-purpose register set keeps the cost low.
A computer uses fewer instructions with simple constructs so they can be executed much faster within
the CPU without having to use memory as often. It is classified as a reduced instruction set computer
(RISC).
RISC concept – an attempt to reduce the execution cycle by simplifying the instruction set
Small set of instructions – mostly register to register operations and simple load/store operations for
memory access
Each operand – brought into register using load instruction, computations are done among data in
registers and results transferred to memory using store instruction
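The load/store discipline described above can be sketched as follows: memory is touched only by LOAD and STORE, and all computation happens register-to-register. The register and address names are illustrative.

```python
# Simulated main memory and register file.
memory = {"x": 10, "y": 32, "z": 0}
registers = {}

def load(reg, addr):             # LOAD: the only way data enters a register
    registers[reg] = memory[addr]

def store(reg, addr):            # STORE: the only way results reach memory
    memory[addr] = registers[reg]

def add(dst, a, b):              # computation is register-to-register only
    registers[dst] = registers[a] + registers[b]

load("r1", "x")                  # bring each operand into a register
load("r2", "y")
add("r3", "r1", "r2")            # no memory access during the computation
store("r3", "z")                 # transfer the result back with STORE
print(memory["z"])  # 42
```

Restricting memory access to two simple instructions is what lets a RISC CPU keep each instruction fast and uniform; a CISC machine would typically allow `add` to read its operands from memory directly.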
Compiler support for efficient translation of high-level language programs into machine language
programs
Studies that show improved performance for RISC architecture do not differentiate between the
effects of the reduced instruction set and the effects of a large register file.
A large number of registers in the processing unit are sometimes associated with RISC processors.
RISC uses much less chip space; extra functions like a memory management unit or floating-point arithmetic unit can also be placed on the same chip. Smaller chips allow a semiconductor manufacturer to place more parts on a single silicon wafer, which can lower the per-chip cost dramatically.
Because RISC processors are simpler than corresponding CISC processors, they can be designed more quickly.
Comparison between RISC and CISC Architectures
Cluster
A cluster is a set of loosely or tightly connected computers working together as a unified computing resource that can create the illusion of being one machine. Computer clusters have each node set to perform the same task, controlled and scheduled by software.
Classification of Clusters:
Computer Clusters are arranged together in such a way to support different
purposes from general-purpose business needs such as web-service support to
computation-intensive scientific calculation. Basically, there are three types of
Clusters, they are:
Load-Balancing Cluster – A cluster requires an effective capability for balancing the load among available computers. In this type, cluster nodes share a computational workload to enhance the overall performance. For example, a high-performance cluster used for scientific calculation would balance the load with different algorithms than a web-server cluster, which may just use a round-robin method, assigning each new request to a different node. This type of cluster is used on farms of web servers (web farms).
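The round-robin method mentioned above is simple enough to sketch directly: each new request is assigned to the next node in turn, wrapping around at the end. The node names are hypothetical.

```python
import itertools

nodes = ["node-a", "node-b", "node-c"]
next_node = itertools.cycle(nodes)   # endlessly cycles a -> b -> c -> a ...

# Assign seven incoming requests to nodes in strict rotation.
assignments = [next(next_node) for _ in range(7)]
print(assignments)
# ['node-a', 'node-b', 'node-c', 'node-a', 'node-b', 'node-c', 'node-a']
```

Round-robin ignores how busy each node actually is, which is why the text contrasts it with the smarter balancing algorithms a scientific cluster would use.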
Fail-Over Clusters – The function of switching applications and data resources over from a failed system to an alternative system in the cluster is referred to as fail-over. These clusters are used for mission-critical database, mail, file and application servers.
High-Availability Clusters – These are also known as "HA clusters". They offer a high probability that all the resources will be in service. If a failure does occur, such as a system going down or a disk volume being lost, the queries in progress are lost. Any lost query, if retried, will be serviced by a different computer in the cluster. This type of cluster is widely used in web, email, news, or FTP servers.
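The retry behaviour described above can be sketched as follows: a query lost to a failed node is simply resubmitted and serviced by a different node in the cluster. The class and node names are illustrative.

```python
class Node:
    def __init__(self, name, up=True):
        self.name, self.up = name, up

    def serve(self, query):
        if not self.up:                       # a down node loses the query
            raise ConnectionError(f"{self.name} is down")
        return f"{query} served by {self.name}"

def submit(query, cluster):
    # Retry the lost query on each remaining node until one services it.
    for node in cluster:
        try:
            return node.serve(query)
        except ConnectionError:
            continue
    raise RuntimeError("all nodes down")

cluster = [Node("web1", up=False), Node("web2")]
print(submit("GET /index", cluster))  # GET /index served by web2
```

The in-flight query on web1 is lost, but the retried copy succeeds on web2, so the service as a whole stays available despite the failure.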
Benefits –
Absolute scalability – It is possible to create large clusters that beat the power of even the largest standalone machines. A cluster can have dozens of multiprocessor machines.
Additional scalability – A cluster is configured in such a way that it is possible to add new systems to the cluster in small increments. Clusters have the ability to scale horizontally: more computers may be added to the cluster to improve its performance, redundancy and fault tolerance (the ability of the system to continue working when a node malfunctions).
High availability – Since each node in a cluster is a standalone computer, the failure of one node does not mean loss of service. A single node can be taken down for maintenance while the rest of the cluster takes on the load of that node.
Preferable price/performance – Clusters are usually set up to improve performance and availability over single computers, while typically being much more cost-effective than single computers of comparable speed or availability.