
UNIT 1

1.1 Types of Computers:


1.1.1 Micro Computer: A personal computer designed to meet the
computing needs of an individual. It provides access to a wide variety of
applications, such as word processing, photo editing, e-mail, and the
internet.

1.1.2 Laptops: A portable, compact computer that can run on a power
supply or a battery unit. All components are integrated as one compact
unit. It is generally more expensive than a comparable desktop. It is also
called a notebook.
1.1.3 Work Station: A powerful desktop computer designed for specialized
tasks. It is generally used for tasks that require a lot of processing speed.
It can also be an ordinary personal computer attached to a LAN (local area
network).
1.1.4 Supercomputers: Very expensive computers employed for
specialized applications that require immense amounts of mathematical
calculation. For example, weather forecasting requires a supercomputer.
1.1.5 Main Frame Computers: A very large and expensive computer
capable of supporting hundreds, or even thousands, of users. In the
hierarchy that starts with a simple microprocessor (in watches, for
example) at the bottom and moves to supercomputers at the top,
mainframes are just below supercomputers. In some ways, mainframes
are more powerful than supercomputers because they support more
simultaneous programs. But supercomputers can execute a single
program faster than a mainframe.
1.1.6 Hand Held Computers: A hand-held computer is a computer that
is small enough to be held in one's hand. Although extremely convenient
to carry, handheld computers have not replaced notebook computers
because of their small keyboards and screens. Traditional hand-held
computers were PDAs and devices specifically designed to provide PIM
(personal information manager) functions, such as a calendar and address
book. Today, Pocket PCs, smartphones, and tablets are common
consumer devices.

1.2 Basic Operational Concepts:


1.2.1 Block Diagram of Computer:
A computer system mainly consists of three parts: the central
processing unit (CPU), input devices, and output devices. The central
processing unit (CPU) is in turn divided into two parts: the arithmetic logic
unit (ALU) and the control unit (CU).
Input-Output Devices:
Data is entered through input devices such as the keyboard, mouse,
etc. The set of instructions is processed by the CPU after the input is
received from the user, and the computer system then produces the output.
The computer shows the output to the user with the help of output devices
such as the monitor, printer, etc.

Fig 1.1 Block Diagram of Computer


Control Unit:
The control unit (CU) controls all the activities or operations which are
performed inside the computer system. When the control unit receives an
instruction set or information, it converts the instruction set into control
signals; these signals are then sent to the central processor for further
processing.
Arithmetic & Logic Unit:
The arithmetic and logic unit (ALU) is a combinational digital electronic
circuit that performs arithmetic operations on integer binary numbers.
It carries out the arithmetic and logical operations. The outputs of the ALU
change asynchronously in response to its inputs. The basic arithmetic and
bitwise logic functions are supported by the ALU.
Memory Unit:
The information or set of instructions is stored in the storage unit of the
computer system. The storage unit provides the space to store the data
or instructions and the processed data. The information or data is saved or
held in the computer memory or a storage device. Data storage is a core
and fundamental function of the computer.
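
To make the interaction between these units concrete, the following minimal Python sketch (purely illustrative; the class names, the single ADD operation, and the memory contents are assumptions, not from the text) lets a control unit decode one instruction, hand the operands to the ALU, and write the result back to the storage unit:

class ALU:
    # Arithmetic & logic unit: performs the actual arithmetic.
    def add(self, a, b):
        return a + b

class ControlUnit:
    # Control unit: decodes an instruction and issues "control signals"
    # (here, ordinary method calls) to the ALU and the storage unit.
    def __init__(self, alu, memory):
        self.alu, self.memory = alu, memory

    def execute(self, op, src1, src2, dest):
        if op == "ADD":
            self.memory[dest] = self.alu.add(self.memory[src1], self.memory[src2])

memory = {0: 7, 1: 5, 2: None}            # storage unit (input data at 0 and 1)
cpu = ControlUnit(ALU(), memory)
cpu.execute("ADD", 0, 1, 2)               # instruction entered via an input device
print("Output device shows:", memory[2])  # 12
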
1.2.2 Operational Concepts:
1. Instructions play a vital role in the proper working of the
computer.
2. An appropriate program, consisting of a list of instructions, is stored
in the memory so that the tasks can be carried out.
3. Individual instructions are brought from the memory into the processor,
which executes the specified operations.
4. Data to be used as operands are also stored in the memory.
Example:
Add LOCA, R0
• This instruction adds the operand at memory location LOCA to the
operand present in register R0 and places the sum back into R0.
• The above example can also be written as the following two-instruction
sequence:
Load LOCA, R1
Add R1, R0
• The first instruction transfers the contents of memory location LOCA
into processor register R1, and the second instruction adds the contents
of registers R1 and R0 and places the sum in R0.
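
As a purely illustrative aid (the value stored at LOCA and the initial register contents below are made up), the two-instruction sequence can be simulated in a few lines of Python:

memory = {"LOCA": 10}        # operand held at memory location LOCA
reg = {"R0": 32, "R1": 0}    # general-purpose registers

# Load LOCA, R1 : transfer the contents of memory location LOCA into register R1
reg["R1"] = memory["LOCA"]

# Add R1, R0 : add the contents of R1 and R0 and place the sum in R0
reg["R0"] = reg["R1"] + reg["R0"]

print(reg)                   # {'R0': 42, 'R1': 10}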

Fig 1.2 General Register Organisation


• Program Counter (PC): It contains the memory address of the next
instruction to be fetched.
• Instruction Register (IR): It holds the instruction which is currently
being executed.
• MDR (Memory Data Register): It facilitates communication with
memory. It contains the data to be written into or read out of the
addressed location.
• MAR (Memory Address Register): It holds the address of the
location that is to be accessed.
• There are n general-purpose registers, R0 to Rn-1 (a simplified fetch
step using these registers is sketched below).
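
A simplified sketch of one instruction fetch using these registers (illustrative only; the addresses and instruction strings are made up):

memory = {100: "Load LOCA, R1", 101: "Add R1, R0"}   # program stored in memory

PC  = 100           # Program Counter holds the address of the next instruction
MAR = PC            # address is placed in the Memory Address Register
MDR = memory[MAR]   # memory responds through the Memory Data Register
IR  = MDR           # Instruction Register now holds the instruction to execute
PC  = PC + 1        # PC advances to the following instruction

print(IR, "|", PC)  # Load LOCA, R1 | 101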

1.3 Basic Performance Equation:


CPI: average number of clock cycles per instruction
IC: number of instructions executed (instruction count)
CPI = total number of clock cycles / IC
Since the CPI is often available, the CPU time is
CPU time = IC * CPI * clock cycle time
Let T be the processor time required to execute a program that has been
prepared in some high-level language. The compiler generates a machine
language object program that corresponds to the source program.
Assume that complete execution of the program requires the execution of
N machine language instructions. The number N is the actual number of
instructions executed and is not necessarily equal to the number of machine
instructions in the object program: some instructions may be executed more
than once, as is the case for instructions inside a program loop, while others
may not be executed at all, depending on the input data used. Suppose that
the average number of basic steps needed to execute one machine
instruction is S, where each basic step is completed in one clock cycle. If the
clock rate is R cycles per second, the program execution time is given by
T = (N * S) / R
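
As a quick numerical illustration of these formulas (the values of N, S and R below are made-up assumptions, not from the text):

N = 2_000_000        # number of machine instructions actually executed (IC)
S = 4                # average basic steps (clock cycles) per instruction, i.e. CPI
R = 2_000_000_000    # clock rate in cycles per second (2 GHz)

T = (N * S) / R      # basic performance equation
print(T)             # 0.004 seconds of processor time
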
1.3 Fixed-Point Representation:
This representation uses a fixed number of bits for the integer part and for
the fractional part. For example, if the fixed-point format is
IIII.FFFF, then the minimum value that can be stored is 0000.0001 and the
maximum value is 9999.9999. There are three parts of a fixed-point number
representation: the sign field, the integer field, and the fractional field.

Fig 1.3 Fixed Point Representation


We can represent these numbers using:

• Signed (sign-magnitude) representation: range from -(2^(k-1) - 1) to (2^(k-1) - 1), for k bits.
• 1's complement representation: range from -(2^(k-1) - 1) to (2^(k-1) - 1), for k bits.
• 2's complement representation: range from -(2^(k-1)) to (2^(k-1) - 1), for k bits.
• 2's complement representation is preferred in computer systems because
zero has a single, unambiguous representation and arithmetic operations
are easier (these ranges are checked in the short example below).
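
The three ranges can be checked for, say, k = 8 bits with a short Python snippet (illustrative only):

k = 8
print(-(2**(k-1) - 1), 2**(k-1) - 1)   # sign-magnitude : -127 to 127
print(-(2**(k-1) - 1), 2**(k-1) - 1)   # 1's complement : -127 to 127
print(-(2**(k-1)),     2**(k-1) - 1)   # 2's complement : -128 to 127
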
Example: Assume a 32-bit format which reserves 1 bit for the sign,
15 bits for the integer part, and 16 bits for the fractional part.
Then -43.625 is represented as follows:

Fig 1.4 IEEE Format Representation


Here, 0 is used to represent + and 1 is used to represent -.
000000000101011 is the 15-bit binary value of the decimal part 43, and
1010000000000000 is the 16-bit binary value of the fractional part 0.625.
The advantage of using a fixed-point representation is performance; the
disadvantage is the relatively limited range of values that it can represent.
So it is usually inadequate for numerical analysis, as it does not allow
enough range and accuracy. A number whose representation exceeds
32 bits would have to be stored inexactly.

Fig 1.5 IEEE Fixed Point Representation


The smallest and largest positive numbers that can be stored in the 32-bit
format given above are as follows: the smallest positive number is
2^-16 ≈ 0.000015, and the largest positive number is
(2^15 - 1) + (1 - 2^-16) = 2^15 - 2^-16 ≈ 32768; the gap between
consecutive numbers is 2^-16.
In contrast, the radix point could be moved left or right so that the integer
field holds only a single 1; this idea leads to the floating-point
representation described next.
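
The bit pattern of -43.625 in the 1 + 15 + 16 bit format above can be reproduced with the following Python sketch (illustrative; it simply splits the magnitude into integer and fractional fields):

value     = -43.625
sign      = "1" if value < 0 else "0"     # 1 represents -
magnitude = abs(value)
int_part  = int(magnitude)                # 43
frac_part = magnitude - int_part          # 0.625

int_bits  = format(int_part, "015b")                # 15-bit integer field
frac_bits = format(int(frac_part * 2**16), "016b")  # 16-bit fractional field

print(sign, int_bits, frac_bits)   # 1 000000000101011 1010000000000000
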
1.4 Floating-Point Representation:
This representation does not reserve a specific number of bits for the
integer part or the fractional part. Instead it reserves a certain number of
bits for the number (called the mantissa or significand) and a certain
number of bits to say where within that number the decimal place sits
(called the exponent).
The floating-point representation of a number has two parts: the first
part represents a signed fixed-point number called the mantissa; the second
part designates the position of the decimal (or binary) point and is
called the exponent. The fixed-point mantissa may be a fraction or an
integer. A floating-point number is always interpreted to represent a number
of the form M x r^e.
Only the mantissa M and the exponent e are physically represented in the
register (including their signs). A floating-point binary number is
represented in a similar manner, except that it uses base 2 for the
exponent. A floating-point binary number is said to be normalized if the most
significant digit of the mantissa is 1.

Fig 1.6 Floating Point Representation


So, the actual number is (-1)^s x (1 + m) x 2^(e - Bias), where s is the sign
bit, m is the mantissa, e is the exponent value, and Bias is the bias number.
Note that signed integers and exponents may be represented using
sign-magnitude, one's complement, or two's complement
representation.
The floating-point representation is more flexible: any non-zero number x
can be represented in the normalized form ±(1.b1b2b3 ...)₂ x 2^n.
Example: Suppose a 32-bit format is used: 1 sign bit, 8 bits for the signed
exponent, and 23 bits for the fractional part. The leading
bit 1 is not stored (as it is always 1 for a normalized number) and is
referred to as the "hidden bit".
Then -53.5 is normalized as -53.5 = (-110101.1)₂ = (-1.101011)₂ x 2^5,
which is represented as shown below.
Here, 00000101 is the 8-bit binary value of the exponent value +5.
Note that the 8-bit exponent field is used to store integer exponents -126 ≤
n ≤ 127.
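
The normalization of -53.5 can be checked with Python's math.frexp (illustrative only; frexp returns a mantissa in [0.5, 1), so it is rescaled below to the 1.xxxx form used above):

import math

m, n = math.frexp(-53.5)    # -53.5 = m x 2**n with 0.5 <= |m| < 1, so (-0.8359375, 6)
mantissa = m * 2            # rescale to the 1.xxxx form: -1.671875 = (-1.101011) in binary
exponent = n - 1            # 5
print(mantissa, exponent)   # -1.671875 5
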
The smallest normalized positive number that fits into 32 bits is
(1.00000000000000000000000)₂ x 2^-126 = 2^-126 ≈ 1.18 x 10^-38, and the largest
normalized positive number that fits into 32 bits is
(1.11111111111111111111111)₂ x 2^127 = (2^24 - 1) x 2^104 ≈ 3.40 x 10^38. These
numbers are represented as shown below.
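
Both bounds can be verified numerically (illustrative check):

smallest = 2.0 ** -126                   # smallest normalized single-precision value
largest  = (2 - 2 ** -23) * 2.0 ** 127   # equal to (2**24 - 1) x 2**104
print(smallest)                          # ≈ 1.18e-38
print(largest)                           # ≈ 3.40e+38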

The precision of a floating-point format is the number of positions
reserved for binary digits plus one (for the hidden bit). In the examples
considered here the precision is 23 + 1 = 24.
The gap between 1 and the next normalized floating-point number is
known as the machine epsilon. For the above example the gap is
(1 + 2^-23) - 1 = 2^-23, but this is not the same as the smallest positive
floating-point number, because of the non-uniform spacing of floating-point
numbers, unlike in the fixed-point scenario.
Note that numbers with non-terminating binary expansions cannot be
represented exactly in floating point; e.g., 1/3 = (0.010101 ...)₂ cannot be
stored as an exact floating-point number because its binary representation
is non-terminating.
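
A small check of the single-precision machine epsilon and of the inexactness of 1/3, using NumPy's float32 type (illustrative only):

import numpy as np

print(np.finfo(np.float32).eps)         # 1.1920929e-07, i.e. 2**-23
print(np.float32(1) / np.float32(3))    # 0.33333334 (rounded, not exact)
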
1.4.1 IEEE Floating point Number Representation:
IEEE (Institute of Electrical and Electronics Engineers) has standardized
the floating-point representation as shown in the following diagram.
So, the actual number is (-1)^s x (1 + m) x 2^(e - Bias), where s is the sign
bit, m is the mantissa, e is the exponent value, and Bias is the bias number.
The sign bit is 0 for a positive number and 1 for a negative number.
Exponents are stored in biased (excess) form: the stored exponent equals
the true exponent plus the bias.
According to the IEEE 754 standard, a floating-point number is represented
in the following ways:

• Half Precision (16 bits): 1 sign bit, 5-bit exponent, and 10-bit
mantissa
• Single Precision (32 bits): 1 sign bit, 8-bit exponent, and 23-bit
mantissa
• Double Precision (64 bits): 1 sign bit, 11-bit exponent, and 52-bit
mantissa
• Quadruple Precision (128 bits): 1 sign bit, 15-bit exponent, and 112-bit
mantissa
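
As an illustration of the single-precision layout, the following sketch unpacks the sign, exponent, and mantissa fields of -53.5 using Python's standard struct module (the variable names are ours, not part of any standard):

import struct

bits = int.from_bytes(struct.pack(">f", -53.5), "big")   # 32-bit pattern of -53.5
sign     = bits >> 31              # 1 bit
exponent = (bits >> 23) & 0xFF     # 8 bits, stored as true exponent + 127
mantissa = bits & 0x7FFFFF         # 23 bits, hidden leading 1 not stored

print(sign, exponent - 127, bin(mantissa))   # 1 5 0b10101100000000000000000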

1.4.2 Special Value Representation:


There are some special values, depending on the values of the exponent
and mantissa, in the IEEE 754 standard.

• Exponent bits all 0 and mantissa bits all 0 represent 0. If the sign
bit is 0, the value is +0; otherwise it is -0.
• Exponent bits all 1 and mantissa bits all 0 represent infinity. If the
sign bit is 0, the value is +∞; otherwise it is -∞.
• Exponent bits all 0 and mantissa bits non-zero represent a
denormalized number.
• Exponent bits all 1 and mantissa bits non-zero represent NaN (Not a
Number), which is used to signal an error or undefined result.
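
These four cases can be observed directly by printing the 32-bit patterns of a few special values (illustrative sketch; the helper f32_bits is ours, not a standard function):

import struct

def f32_bits(x):
    # Return the 32-bit single-precision pattern of x as a binary string.
    return format(int.from_bytes(struct.pack(">f", x), "big"), "032b")

print(f32_bits(0.0))            # exponent all 0, mantissa all 0, sign 0  -> +0
print(f32_bits(-0.0))           # exponent all 0, mantissa all 0, sign 1  -> -0
print(f32_bits(float("inf")))   # exponent all 1, mantissa all 0          -> +infinity
print(f32_bits(float("nan")))   # exponent all 1, mantissa non-zero       -> NaN
print(f32_bits(1e-40))          # exponent all 0, mantissa non-zero       -> denormalized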
