
1. List the replacement algorithms and explain FIFO algorithm

In an operating system that uses paging for memory management, a page
replacement algorithm is needed to decide which page should be replaced
when a new page comes in. Once the cache has been filled, when a new block
is brought into the cache, one of the existing blocks must be replaced.
Least frequently used (LFU):
Replace that block in the set that has experienced the fewest references.
LFU could be implemented by associating a counter with each line.
Least Recently Used (LRU):
Replace that block in the set that has been in the cache longest with no
reference to it.
First-in-first-out (FIFO):
Replace that block in the set that has been in the cache longest. FIFO is
easily implemented as a round-robin or circular buffer technique. This is the
simplest page replacement algorithm. In this algorithm, the operating system
keeps track of all pages in memory in a queue, with the oldest page at the
front of the queue. When a page needs to be replaced, the page at the front
of the queue is selected for removal.
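The queue mechanics described above can be sketched as a short simulation (an illustrative Python sketch, not part of the original answer; the reference string is a made-up example):

```python
from collections import deque

def fifo_replace(frames, reference_string):
    """Count page faults under FIFO replacement."""
    queue = deque()   # oldest page sits at the left (front of the queue)
    resident = set()  # pages currently in memory
    faults = 0
    for page in reference_string:
        if page not in resident:
            faults += 1
            if len(queue) == frames:      # memory full: evict the oldest page
                resident.remove(queue.popleft())
            queue.append(page)            # newest page joins the back
            resident.add(page)
    return faults

print(fifo_replace(3, [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]))  # 9 faults
```

With 4 frames the same reference string causes 10 faults, illustrating Belady's anomaly, to which FIFO is susceptible.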

2. Draw and elaborate cache/main memory structure

Cache memory is a special, very high-speed memory. It is used to speed up
and synchronize with a high-speed CPU. Cache memory is costlier than
main memory or disk memory but more economical than CPU registers. Cache
memory is an extremely fast memory type that acts as a buffer between
RAM and the CPU. It holds frequently requested data and instructions so
that they are immediately available to the CPU when needed.
Cache memory is used to reduce the average time to access data from the
Main memory. The cache is a smaller and faster memory which stores
copies of the data from frequently used main memory locations. There are
various different independent caches in a CPU, which store instructions and
data.
Cache Performance:
When the processor needs to read or write a location in main memory, it first
checks for a corresponding entry in the cache.
• If the processor finds that the memory location is in the cache, a cache hit
has occurred and the data is read from the cache.
• If the processor does not find the memory location in the cache, a cache
miss has occurred. For a cache miss, the cache allocates a new entry and
copies in data from main memory; then the request is fulfilled from the
contents of the cache.

Hit ratio = hits / (hits + misses) = number of hits / total accesses
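The ratio above translates directly into code (a trivial helper; the counts are made-up example numbers):

```python
def hit_ratio(hits, misses):
    """Hit ratio = hits / (hits + misses) = hits / total accesses."""
    return hits / (hits + misses)

# e.g. 950 hits and 50 misses over 1000 accesses:
print(hit_ratio(950, 50))  # 0.95
```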

3. List the mapping functions and discuss Direct mapping function

Cache Mapping:
There are three different types of mapping used for the purpose of cache
memory which are as follows: Direct mapping, Associative mapping, and
Set-Associative mapping. These are explained below.

Direct Mapping –
The simplest technique, known as direct mapping, maps each block of main
memory into only one possible cache line.
In direct mapping, each memory block is assigned to a specific line in the
cache. If a line is already occupied by a memory block when a new block
needs to be loaded, the old block is discarded. An address is split into two
parts: an index field and a tag field. The cache stores the tag field, and the
index identifies the cache line. Direct mapping's performance is directly
proportional to the hit ratio.

i = j modulo m
where
i = cache line number
j = main memory block number
m = number of lines in the cache
For purposes of cache access, each main memory address can be
viewed as consisting of three fields. The least significant w bits
identify a unique word or byte within a block of main memory. In
most contemporary machines, the address is at the byte level. The
remaining s bits specify one of the 2^s blocks of main memory. The
cache logic interprets these s bits as a tag of s - r bits (the most
significant portion) and a line field of r bits. This latter field identifies
one of the m = 2^r lines of the cache.
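The three-field address split can be sketched as follows (illustrative Python; the 24-bit example address and field widths are assumptions, not from the source):

```python
def split_address(addr, w, r):
    """Split a main-memory address for a direct-mapped cache:
    w word bits, r line bits; the tag is whatever remains of the
    block number after the line field is removed."""
    word = addr & ((1 << w) - 1)   # least significant w bits
    block = addr >> w              # s-bit block number j
    line = block % (1 << r)        # i = j mod m, with m = 2^r lines
    tag = block >> r               # remaining s - r bits
    return tag, line, word

# 24-bit address, 4-byte blocks (w = 2), 2^14 cache lines (r = 14):
print(split_address(0x16339C, w=2, r=14))  # (22, 3303, 0)
```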

Associative Mapping –
In this type of mapping, the associative memory is used to store content and
addresses of the memory word. Any block can go into any line of the cache.
This means that the word id bits are used to identify which word in the block
is needed, but the tag becomes all of the remaining bits.

Set-associative Mapping –
This form of mapping is an enhanced form of direct mapping where the
drawbacks of direct mapping are removed. Set associative addresses the
problem of possible thrashing in the direct mapping method. It does this by
saying that instead of having exactly one line that a block can map to in the
cache, we will group a few lines together, creating a set. Then a block in
memory can map to any one of the lines of a specific set. Set-associative
mapping allows each word present in the cache to have two or more words
in the main memory for the same index address. Set-associative
cache mapping combines the best of direct and associative cache mapping
techniques.
4. Draw and elaborate cache read operation

The processor generates the read address (RA) of a word to be read. If the
word is contained in the cache, it is delivered to the processor. Otherwise, the
block containing that word is loaded into the cache, and the word is delivered
to the processor.
In this organization, the cache connects to the processor via data, control, and
address lines. The data and address lines also attach to data and address
buffers, which attach to a system bus from which main memory is reached.
When a cache hit occurs, the data and address buffers are disabled and
communication is only between processor and cache, with no system bus
traffic. When a cache miss occurs, the desired address is loaded onto the
system bus and the data are returned through the data buffer to both the
cache and the processor. In this latter case, for a cache miss, the desired word
is first read into the cache and then transferred from cache to processor. 

5. Represent floating point number 10.75 in single and double precision IEEE standard
format

Decimal value for IEEE 754 = (-1)^S × (1 + .M) × 2^(E - bias)

S = 0 (since it is a positive number)

Exponent:
exponent = ⌊log(10.75) / log(2)⌋ = 3
E = exponent + bias = 3 + 127 = 130
E = 1000 0010 (binary)

1 + .M = 10.75 / 2^3 = 1.34375
.M = 0.34375
Convert 0.34375 to binary (Truncate infinite fraction to 24 bit):
Multiply by 2 for Fraction Part:
0.34375 ×2= 0.6875 ...0
0.6875 ×2= 1.375 ...1
0.375 ×2= 0.75 ...0
0.75 ×2= 1.5 ...1
0.5 ×2= 1 ...1

Fraction binary: .01011 (binary)

Fraction binary for the single-precision mantissa
= .0101 1000 0000 0000 0000 000 (padded to 23 bits)

10.75 as an IEEE single-precision floating-point number:

Sign Exponent Mantissa
0 10000010 0101 1000 0000 0000 0000 000

Hexadecimal digit sequence:

0100 0001 0010 1100 0000 0000 0000 0000
4 1 2 C 0 0 0 0
10.75 to IEEE single-precision floating point number in hexadecimal digit sequence
is 412C0000H.
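The derivation above can be checked with Python's struct module, which also yields the double-precision pattern that the question asks for (a verification sketch, not part of the original working):

```python
import struct

def float_to_hex32(x):
    """IEEE 754 single-precision (binary32) bit pattern as hex."""
    return struct.pack('>f', x).hex().upper()

def float_to_hex64(x):
    """IEEE 754 double-precision (binary64) bit pattern as hex."""
    return struct.pack('>d', x).hex().upper()

print(float_to_hex32(10.75))  # 412C0000
print(float_to_hex64(10.75))  # 4025800000000000
```

In double precision the biased exponent is 3 + 1023 = 1026 = 100 0000 0010, followed by the same fraction bits .01011, giving 4025800000000000H.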

6. Solve the following binary multiplication using Booth's Algorithm


Multiplicand M=-4
Multiplier Q=7

A Q Q-1 M Log
0000 0111 0 1100 Populate Data
0100 0111 0 1100 A = A - M
0010 0011 1 1100 Shift
0001 0001 1 1100 Shift
0000 1000 1 1100 Shift
1100 1000 1 1100 A = A + M
1110 0100 0 1100 Shift

Ans: -28
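The table's steps generalize to the following sketch (an illustrative 4-bit Python implementation; the mask-based register handling is my own, not from the source):

```python
def booth_multiply(m, q, bits=4):
    """Booth's algorithm on 'bits'-wide two's-complement operands."""
    mask = (1 << bits) - 1
    sign = 1 << (bits - 1)
    A, Q, q_1, M = 0, q & mask, 0, m & mask
    for _ in range(bits):
        pair = (Q & 1, q_1)
        if pair == (1, 0):            # Q0,Q-1 = 10: A = A - M
            A = (A - M) & mask
        elif pair == (0, 1):          # Q0,Q-1 = 01: A = A + M
            A = (A + M) & mask
        # arithmetic right shift of the combined A, Q, Q-1 register
        q_1 = Q & 1
        Q = (Q >> 1) | ((A & 1) << (bits - 1))
        A = (A >> 1) | (A & sign)     # replicate the sign bit
    result = (A << bits) | Q          # product occupies 2*bits
    if result & (1 << (2 * bits - 1)):
        result -= 1 << (2 * bits)     # reinterpret as signed
    return result

print(booth_multiply(-4, 7))  # -28
```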

7. Elaborate single and double precision format of IEEE 754 standard

SINGLE PRECISION
• 32 bits are used to represent a floating-point number.
• It uses 8 bits for the exponent.
• 23 bits are used for the mantissa.
• Bias number is 127.
• Range of numbers: 2^(-126) to 2^(+127).
• Used where precision matters less; favours wide representation.
• Used in simple programs like games.
• This is called binary32.

DOUBLE PRECISION
• 64 bits are used to represent a floating-point number.
• It uses 11 bits for the exponent.
• 52 bits are used for the mantissa.
• Bias number is 1023.
• Range of numbers: 2^(-1022) to 2^(+1023).
• Used where precision matters more; minimizes approximation error.
• Used in complex programs like scientific calculators.
• This is called binary64.

8. Solve the following binary division using the Restoring algorithm


Dividend M=7
Divisor Q=3

Solution:
Dividend = 7
Divisor = 3
First the registers are initialized with corresponding values (Q = Dividend, M = Divisor, A = 0, n =
number of bits in dividend)

n M A Q Operation
3 0011 0000 111 initialize
3 0011 0001 11_ shift left AQ
0011 1110 11_ A=A-M
0011 0001 110 Q[0]=0 And restore A
2 0011 0011 10_ shift left AQ
0011 0000 10_ A=A-M
0011 0000 101 Q[0]=1
1 0011 0001 01_ shift left AQ
0011 1110 01_ A=A-M
0011 0001 010 Q[0]=0 And restore A

Register Q contains the quotient 2 and register A contains the remainder 1.
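The same steps can be expressed as a small sketch (illustrative Python; plain integers stand in for fixed-width registers):

```python
def restoring_divide(dividend, divisor, n=None):
    """Restoring division of non-negative integers -> (quotient, remainder)."""
    if n is None:
        n = max(dividend.bit_length(), 1)  # number of bits in the dividend
    A, Q, M = 0, dividend, divisor
    for _ in range(n):
        # shift A,Q left as one unit: Q's MSB moves into A's LSB
        A = (A << 1) | ((Q >> (n - 1)) & 1)
        Q = (Q << 1) & ((1 << n) - 1)
        A -= M             # trial subtraction
        if A < 0:
            A += M         # negative: restore A, Q[0] stays 0
        else:
            Q |= 1         # non-negative: set Q[0] = 1
    return Q, A            # Q holds the quotient, A the remainder

print(restoring_divide(7, 3))  # (2, 1)
```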

9. Draw and Elaborate the Program Status Word (PSW).

Many processor designs include a register or set of registers, often known as
the program status word (PSW), that contain status information.
The PSW typically contains condition codes plus other status information.
Common fields or flags include the following:
Sign: Contains the sign bit of the result of the last arithmetic operation.
Zero: Set when the result is 0.
Carry: Set if an operation resulted in a carry (addition) into or borrow
(subtraction) out of a high-order bit. Used for multiword arithmetic operations.
Equal: Set if a logical compare result is equality.
Overflow: Used to indicate arithmetic overflow.
Interrupt Enable/Disable: Used to enable or disable interrupts.
Supervisor: Indicates whether the processor is executing in supervisor or user
mode.

10.Describe data flow fetch cycle with diagram

Data Flow
The exact sequence of events during an instruction cycle depends on the
design of the processor.
Let us assume a processor that employs a memory address register (MAR),
a memory buffer register (MBR), a program counter (PC), and an instruction
register (IR).
During the fetch cycle, an instruction is read from memory.
The PC contains the address of the next instruction to be fetched.
This address is moved to the MAR and placed on the address bus.
The control unit requests a memory read, and the result is placed on the data
bus and copied into the MBR and then moved to the IR.
Meanwhile, the PC is incremented by 1 in preparation for the next fetch.

11. Design data flow indirect cycle and data flow interrupt cycle

Once the fetch cycle is over, the control unit examines the contents of the IR to
determine if it contains an operand specifier using indirect addressing. If so, an
indirect cycle is performed. The right-most N bits of the MBR, which contain
the address reference, are transferred to the MAR. Then the control unit
requests a memory read, to get the desired address of the operand into the
MBR. The fetch and indirect cycles are simple and predictable. The execute
cycle takes many forms; the form depends on which of the various machine
instructions is in the IR. This cycle may involve transferring data among
registers, reading from or writing to memory or I/O, and/or the invocation of the ALU.

Like the fetch and indirect cycles, the interrupt cycle is simple and predictable.
The current contents of the PC must be saved so that the processor can
resume normal activity after the interrupt. Thus, the contents of the PC are
transferred to the MBR to be written into memory. The special memory
location reserved for this purpose is loaded into the MAR from the control
unit. It might, for example, be a stack pointer. The PC is loaded with the
address of the interrupt routine. As a result, the next instruction cycle will
begin by fetching the appropriate instruction.
12. Write in detail Data Hazards.

Data hazards: A data hazard occurs when there is a conflict in the access of an
operand location.
In general terms, we can state the hazard in this form: Two instructions in a
program are to be executed in sequence and both access a particular memory
or register operand.
If the two instructions are executed in strict sequence, no problem occurs.
However, if the instructions are executed in a pipeline, then it is possible for
the operand value to be updated in such a way as to produce a different result
than would occur with strict sequential execution.
In other words, the program produces an incorrect result because of the use of
pipelining.
As an example, consider the following x86 machine instruction sequence:
ADD EAX, EBX // EAX = EAX + EBX
SUB ECX, EAX // ECX = ECX – EAX
The first instruction adds the contents of the 32-bit registers EAX and EBX and
stores the result in EAX. The second instruction subtracts the contents of EAX
from ECX and stores the result in ECX. If the pipeline lets the SUB instruction
read EAX before the ADD instruction has written its result back, the
subtraction operates on a stale value and the program produces an incorrect
result.
13. Discuss cache read operation in detail.
Same as question 4
14.Draw and elaborate Memory Hierarchy diagram

The memory in a computer can be divided into five hierarchies based on the
speed as well as use. The processor can move from one level to another based
on its requirements. The five hierarchies in the memory are registers, cache,
main memory, magnetic discs, and magnetic tapes. The first three hierarchies
are volatile memories, which means they automatically lose their stored data
when power is removed. The last two hierarchies are non-volatile, which
means they store the data permanently.
The memory hierarchy design in a computer system mainly includes different
storage devices. Most of the computers were inbuilt with extra storage to run
more powerfully beyond the main memory capacity. The following memory
hierarchy diagram is a hierarchical pyramid for computer memory. The
designing of the memory hierarchy is divided into two types such as primary
(Internal) memory and secondary (External) memory.
Characteristics of Memory Hierarchy:
Capacity: The capacity of a level in the memory hierarchy is the total amount
of data it can store. Whenever we move from top to bottom in the memory
hierarchy, the capacity increases.
Access Time: The access time in the memory hierarchy is the interval between
a request to read or write and the moment the data is available. Whenever we
move from top to bottom in the memory hierarchy, the access time increases.
Cost per bit: When we move from bottom to top in the memory hierarchy, the
cost per bit increases, which means internal memory is expensive compared
with external memory.
15. List the replacement algorithms and explain LRU algorithm

In an operating system that uses paging for memory management, a page
replacement algorithm is needed to decide which page should be replaced
when a new page comes in. Once the cache has been filled, when a new block
is brought into the cache, one of the existing blocks must be replaced.
First-in-first-out (FIFO):
Replace that block in the set that has been in the cache longest. FIFO is
easily implemented as a round-robin or circular buffer technique.
Least frequently used (LFU):
Replace that block in the set that has experienced the fewest references.
LFU could be implemented by associating a counter with each line.
Least Recently Used (LRU):
Replace that block in the set that has been in the cache longest with no
reference to it. Each line includes a USE bit. When a line is referenced, its
USE bit is set to 1 and the USE bit of the other line in that set is set to 0.
When a block is to be read into the set, the line whose USE bit is 0 is used.
Because we are assuming that more recently used memory locations are
more likely to be referenced, LRU should give the best hit ratio. The cache
mechanism maintains a separate list of indexes to all the lines in the cache.
When a line is referenced, it moves to the front of the list. For replacement,
the line at the back of the list is used. Because of its simplicity of
implementation, LRU is the most popular replacement algorithm.
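The list-of-indexes mechanism described above maps naturally onto an ordered dictionary (an illustrative Python sketch, using the same style of made-up reference string as the FIFO case):

```python
from collections import OrderedDict

def lru_replace(frames, reference_string):
    """Count page faults under LRU replacement."""
    cache = OrderedDict()  # least recently used entry sits at the front
    faults = 0
    for page in reference_string:
        if page in cache:
            cache.move_to_end(page)        # referenced: move to the back
        else:
            faults += 1
            if len(cache) == frames:
                cache.popitem(last=False)  # evict the least recently used
            cache[page] = True
    return faults

print(lru_replace(3, [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]))  # 10 faults
```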
16.Draw and explain I/O Module structure
In addition to the processor and a set of memory modules, the third key
element of a computer system is a set of I/O modules. Each module
interfaces to the system bus or central switch and controls one or more
peripheral devices. An I/O module is not simply a set of mechanical
connectors that wire a device into the system bus; rather, it contains logic
for performing a communication function between the peripheral and the bus.

The interface to the I/O module is in the form of control, data, and status
signals. Control signals determine the function that the device will perform,
such as send data to the I/O module (INPUT or READ), accept data from the I/O
module (OUTPUT or WRITE), report status, or perform some control function
particular to the device (e.g. position a disk head). Data are in the form of a set
of bits to be sent to or received from the I/O module. Status signals indicate
the state of the device. Examples are READY/NOT-READY to show whether the
device is ready for data transfer. Control logic associated with the device
controls the device’s operation in response to direction from the I/O module.
The transducer converts data from electrical to other forms of energy during
output and from other forms to electrical during input. Typically, a buffer is
associated with the transducer to temporarily hold data being transferred
between the I/O module and the external environment. A buffer size of 8 to 16
bits is common for serial devices, whereas block-oriented devices such as disk
drive controllers may have much larger buffers.
17.Elaborate floating point number representation in IEEE standard

IEEE developed the IEEE 754 floating-point standard. This standard defines a
set of formats and operation modes. All computers conforming to this standard
would always calculate the same result for the same computation. This
standard does not specify arithmetic procedures and hardware to be used to
perform computations. For example, a CPU can meet the standard whether it
uses shift-add hardware or the Wallace tree to multiply two significands. The
IEEE 754 standard specifies two precisions for floating-point numbers. Single
precision numbers have 32 bits − 1 for the sign, 8 for the exponent, and 23 for
the significand. The significand also includes an implied 1 to the left of its radix
point.
Double precision numbers use 64 bits − 1 for the sign, 11 for the exponent, and
52 for the significand. As in single precision, the significand has an implied
leading 1 for most values. The exponent has a bias of 1023 and a range value
from -1022 to +1023. The smallest and largest exponent values, -1023 and
+1024 are reserved for special numbers.
18.Solve the following multiplication using Booth's Algorithm
Multiplicand M= -5
Multiplier Q=4

A Q Q-1 M Log
0000 0100 0 1011 Populate Data
0000 0010 0 1011 Shift
0000 0001 0 1011 Shift
0101 0001 0 1011 A = A - M
0010 1000 1 1011 Shift
1101 1000 1 1011 A = A + M
1110 1100 0 1011 Shift

Answer = -20
19.Represent floating point number 85.125 in single and double precision IEEE
standard format.
85.125
85 = 1010101
0.125 = 001
85.125 = 1010101.001
=1.010101001 x 2^6
sign = 0
1. Single precision:
biased exponent 127+6=133
133 = 10000101
Normalised mantissa = 010101001
we will add 0's to complete the 23 bits

The IEEE 754 Single precision is:


= 0 10000101 01010100100000000000000
This can be written in hexadecimal form 42AA4000

2. Double precision:
biased exponent 1023+6=1029
1029 = 10000000101
Normalised mantissa = 010101001
we will add 0's to complete the 52 bits

The IEEE 754 Double precision is:


= 0 10000000101
0101010010000000000000000000000000000000000000000000
This can be written in hexadecimal form 4055480000000000

20.Solve the following binary division using the Restoring algorithm


Dividend M=10
Divisor Q=3

Dividend = 10
Divisor = 3
First the registers are initialized with corresponding values (Q = Dividend, M = Divisor, A = 0, n =
number of bits in dividend)

n M A Q Operation
4 00011 00000 1010 initialize
4 00011 00001 010_ shift left AQ
  00011 11110 010_ A=A-M
  00011 00001 0100 Q[0]=0 And restore A
3 00011 00010 100_ shift left AQ
  00011 11111 100_ A=A-M
  00011 00010 1000 Q[0]=0 And restore A
2 00011 00101 000_ shift left AQ
  00011 00010 000_ A=A-M
  00011 00010 0001 Q[0]=1
1 00011 00100 001_ shift left AQ
  00011 00001 001_ A=A-M
  00011 00001 0011 Q[0]=1

Register Q contains the quotient 3 and register A contains the remainder 1.

21.Elaborate the Program Status Word (PSW).

Same as question number 9


22. Elaborate different types of User Visible Registers.

A user-visible register is one that may be referenced by means of the machine
language that the processor executes and that is generally available to all
programs, including application programs as well as system programs. The
following types of registers are typically available: data,
address, and condition codes. Data registers can be assigned to a variety of
functions by the programmer. In some cases, they are general purpose in
nature and can be used with any machine instruction that performs operations
on data. Often, however, there are restrictions. For example, there may be
dedicated registers for floating-point operations. Address registers contain
main memory addresses of data and instructions, or they contain a portion
of the address that is used in the calculation of the complete address. These
registers may themselves be somewhat general purpose, or they may be
devoted to a particular addressing mode.
Examples include:
• Index register: Indexed addressing is a common mode of addressing that
involves adding an index to a base value to get the effective address.
• Segment pointer: With segmented addressing, memory is divided into
variable-length blocks of words called segments. A memory reference consists
of a reference to a particular segment and an offset within the segment; this
mode of addressing is important in memory management.
• Stack pointer: If there is user-visible stack addressing, then typically the stack
is in main memory and there is a dedicated register that points to the top of
the stack. This allows the use of instructions that contain no address field, such
as push and pop.

23.Design data flow indirect cycle and data flow interrupt cycle

Same as question number 11.


24.Describe in detail Resource Hazards.

A resource hazard occurs when two (or more) instructions that are already in
the pipeline need the same resource. The result is that the instructions must
be executed serially rather than in parallel for a portion of the pipeline. A
resource hazard is sometimes referred to as a structural hazard.
