
COMPUTER ORGANISATION

IMPORTANT ANSWERS FOR


SEM EXAMS
All Units Notes

B TAGORE HARI RAVINDRA;17075A1203


What is MAR register?
ANS: In a computer, the Memory Address Register (MAR) is the CPU register that stores either the memory
address from which data will be fetched to the CPU or the address to which data will be sent and stored. In
other words, MAR holds the memory location of the data that needs to be accessed.

What is MDR computing?


ANS: The Memory Data Register (MDR) or Memory Buffer Register (MBR) is the register of a computer's
control unit that contains the data to be stored in the computer storage (e.g. RAM), or the data after a fetch
from the computer storage.

What is Program Counter?


ANS: The Program Counter (PC) is the processor register that stores the address of the next instruction to be
executed.

What is Instruction Register(IR)?


ANS: The Instruction Register (IR) is the register that stores the instruction currently being executed, i.e. the
operation to be performed on the operands.

What is Address Register(AR)?


ANS: The Address Register (AR) is the register used to hold the address of the memory location being
accessed by the instruction under execution.

What is Data Register(DR)?


ANS: The Data Register (DR) is the register used to store the operand (data) on which the operation is to be
performed.

What is Instruction Code?


ANS: An instruction code is a group of bits that tells the computer to perform a specific operation. A set of
such instructions specifies the operations, the operands, and the sequence in which processing has to occur. In
other words, an instruction code is the combination of an opcode and its operands.

A computer instruction is often divided into two parts


–An opcode (Operation Code) that specifies the operation for that instruction
–An address that specifies the registers and/or locations in memory to use for that operation
•In the Basic Computer, since the memory contains 4096 (= 2^12) words, we need 12 bits to specify which
memory address this instruction will use
•In the Basic Computer, bit 15 of the instruction specifies the addressing mode (0: direct addressing, 1: indirect
addressing)
•Since the memory words, and hence the instructions, are 16 bits long, that leaves 3 bits for the instruction’s opcode

 15  14      12  11                     0
| I  |  Opcode  |        Address        |    Instruction Format  (bit 15 = addressing mode I)
What are Instruction Formats?
ANS:
 The Basic Computer has 3 instruction code formats
 Each format is 16 bits long
 The OPCODE part consists of 3 bits, and the meaning of the remaining 13 bits depends on the OPCODE
 A memory reference instruction uses 12 bits to specify the address and 1 bit to specify the addressing mode I
 I is equal to 0 for direct addressing and 1 for indirect addressing
 The register reference instructions are recognized by the opcode 111 with a 0 in the leftmost bit (bit 15) of the instruction
 A register reference instruction specifies an operation on the AC register
 The other 12 bits are used to specify the operation to be executed
 An input-output instruction does not need a reference to memory and is recognized by the operation code 111 with a 1 in the leftmost bit (bit 15) of the instruction
 The other 12 bits are used to specify the type of input-output operation performed
 If the 3 OPCODE bits in positions 12 through 14 are not equal to 111, the instruction is a memory reference instruction and bit position 15 is taken as the addressing mode I
 If the 3-bit OPCODE is equal to 111, control then inspects the bit in position 15
 If that bit is 0, the instruction is a register reference instruction
 If that bit is 1, the instruction is an input-output instruction

 15  14      12  11                     0
| I  |  Opcode  |        Address        |    Memory Reference Instruction  (Opcode = 000 through 110)

 15           12  11                    0
|  0  1  1  1   |  Register operation  |    Register Reference Instruction  (Opcode = 111, I = 0)

 15           12  11                    0
|  1  1  1  1   |     I/O operation    |    Input-Output Instruction  (Opcode = 111, I = 1)
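
As a quick illustration of these formats, here is a minimal Python sketch (not part of the original notes; the function name and the example word are invented) that splits a 16-bit Basic Computer instruction into its I bit, opcode and address fields and classifies it using the rules listed above:

def decode(instr):
    # Decode a 16-bit Basic Computer instruction word (a sketch following the formats above)
    i_bit   = (instr >> 15) & 0x1      # bit 15: addressing mode / format selector
    opcode  = (instr >> 12) & 0x7      # bits 14-12
    address = instr & 0xFFF            # bits 11-0
    if opcode != 0b111:
        kind = "memory reference (indirect)" if i_bit else "memory reference (direct)"
    elif i_bit == 0:
        kind = "register reference"
    else:
        kind = "input-output"
    return kind, opcode, address

# 0x1234 -> I = 0, opcode = 001, address = 0x234: a direct memory reference instruction
print(decode(0x1234))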
Explain 4-Bit Bus structure with Diagram?
ANS:

4-Bit Bus Structure

S1   S0   Register selected
 0    0    A
 0    1    B
 1    0    C
 1    1    D

Function Table

 When the two selection lines are 0,0 the 0th input of every multiplexer is selected, so the bit on each
multiplexer’s 0th input is transferred to its output; for multiplexer 0, the four inputs come from bit 0 of the
four registers (A0, B0, C0, D0)
 With the selection at 0,0 every multiplexer k therefore passes bit Ak rather than the corresponding bit of B, C
or D, so the content of register A is transferred onto the 4-bit bus
 If S1, S0 = 0,1 the first input of every multiplexer is selected and the content of register B is transferred
to the bus; the remaining selection values work the same way for registers C and D
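
The selection logic just described can be written out in a few lines of Python (illustrative only; the function and register names are invented for the example):

def bus_select(s1, s0, A, B, C, D):
    # 4-bit common bus built from four 4-to-1 multiplexers.
    # Each register is a list of 4 bits; multiplexer k picks bit k of the selected register.
    registers = [A, B, C, D]                 # selection 00 -> A, 01 -> B, 10 -> C, 11 -> D
    selected = registers[(s1 << 1) | s0]
    return [selected[k] for k in range(4)]   # one multiplexer output per bus line

A, B, C, D = [1, 0, 1, 0], [1, 1, 0, 0], [0, 0, 1, 1], [1, 1, 1, 1]
print(bus_select(0, 1, A, B, C, D))          # selection (S1, S0) = (0, 1) places register B on the bus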
Explain INTEL’s Evolution of X86 Architecture?
ANS:

The x86 incorporates the sophisticated design principles once found only on mainframes and supercomputers
and serves as an excellent example of CISC design. An alternative approach to processor design is the reduced
instruction set computer (RISC). The ARM architecture is used in a wide variety of embedded systems and is
one of the most powerful and best-designed RISC-based systems on the market.

8080:
The world’s first general-purpose microprocessor. This was an 8-bit machine, with an 8-bit data path to
memory. The 8080 was used in the first personal computer, the Altair.

8086:
A far more powerful, 16-bit machine. In addition to a wider data path and larger registers, the 8086 sported an
instruction cache, or queue, that pre-fetches a few instructions before they are executed. A variant of this
processor, the 8088, was used in IBM’s first personal computer, securing the success of Intel. The 8086 is the
first appearance of the x86 architecture.

80286:
This extension of the 8086 enabled addressing a 16-MByte memory instead of just 1 M-Byte.

80386:
Intel’s first 32-bit machine. With a 32-bit architecture, the 80386 rivaled the complexity and power of
minicomputers and mainframes introduced just a few years earlier.
This was the first Intel processor to support multitasking, meaning it could run multiple programs at the
same time.

80486:
The 80486 introduced the use of much more sophisticated and powerful cache technology and sophisticated
instruction pipelining. The 80486 also offered a built-in math coprocessor, offloading complex math operations
from
the main CPU.

Pentium:
With the Pentium, Intel introduced the use of superscalar techniques, which allow multiple instructions to
execute in parallel.

Pentium Pro:
The Pentium Pro continued the move into superscalar organization begun with the Pentium, with aggressive use
of register renaming, branch prediction, data flow analysis, and speculative execution.

Pentium II:
The Pentium II incorporated Intel MMX technology, which is designed specifically to process video, audio, and
graphics data efficiently

Pentium III:
The Pentium III incorporates additional floating-point instructions to support 3D graphics software.
Pentium 4:
The Pentium 4 includes additional floating-point and other enhancements for multimedia.

Core:
This is the first Intel x86 microprocessor with a dual core, referring to the implementation of two processors on
a single chip.

Core 2:
The Core 2 extends the architecture to 64 bits. The Core 2 Quad provides four processors on a single chip

List and Explain Various Logic Micro Operations?


ANS: Various Logic Micro Operations are
1. Selective Set
2. Selective Complement
3. Selective Clear
4. Mask
5. Insert
6. Clear

SELECTIVE SET: The selective set micro operation sets to 1 the bits in register A where there are
corresponding 1’s in register B. It does not affect the bit positions that have 0’s in B.
It is equivalent to the logical “OR” operation, i.e. (A + B)
E.g.
A 1010
B 1100

Then A becomes 1110

SELECTIVE COMPLEMENT: The selective complement micro operation complements the bits in register A
where there are corresponding 1’s in register B. It does not affect the bit positions that have 0’s in B.
It is equivalent to the logical “XOR” operation, i.e. (A ⊕ B)
E.g.
A 1010
B 1100

Then A becomes 0110

SELECTIVE CLEAR: The selective clear micro operation clears to 0 the bits in register A where there are
corresponding 1’s in register B. It does not affect the bit positions that have 0’s in B.
It is equivalent to the logical operation (A.B’)
E.g.
A 1010
B 1100

Then A becomes 0010


MASK: The Mask Logical Micro Operation is similar to selective clear Except that the bits of A are cleared to
0 where there are corresponding 0’s in B
It is Similar to Logical AND Operation (A.B)
E.g.
A 1010
B 1100
Then A becomes 1000

CLEAR: The clear micro operation compares the words in A and B and produces an all-zero result if the
values in the two registers are equal. This is achieved by the XOR operation.
It is equivalent to the logical “XOR” operation, i.e. (A ⊕ B)
E.g.
A 1010
B 1010

Then A becomes 0000

INSERT: This operation inserts a new value into a group of bits. This is done by first masking the target bits
to 0 and then doing an OR operation with the required value

E.g.
Question: A = 0110 1010, insert 1111 into the rightmost 4 bits
Then A       0110 1010
Mask with    1111 0000   (AND)
A becomes    0110 0000
OR with      0000 1111
Result A     0110 1111
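
The operations above map directly onto bitwise operators. The following short Python sketch (illustrative only, with invented function names, assuming 4-bit registers) reproduces the examples given:

def selective_set(a, b):        return a | b                  # set bits of A where B has 1's
def selective_complement(a, b): return a ^ b                  # complement bits of A where B has 1's
def selective_clear(a, b):      return a & ~b & 0xF           # clear bits of A where B has 1's
def mask(a, b):                 return a & b                  # clear bits of A where B has 0's
def insert(a, value, field):    return (a & ~field) | value   # mask out the field, then OR in the new value

a, b = 0b1010, 0b1100
print(f"{selective_set(a, b):04b}")           # 1110
print(f"{selective_complement(a, b):04b}")    # 0110
print(f"{selective_clear(a, b):04b}")         # 0010
print(f"{mask(a, b):04b}")                    # 1000
print(f"{insert(0b01101010, 0b00001111, 0b00001111):08b}")   # 01101111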
Explain Arithmetic Micro Operations?

ANS: The Arithmetic Micro operations are

1. Addition
2. Subtraction
3. Increment
4. Decrement
And additional Arithmetic Micro operations are
1. Add with carry
2. Subtract with borrow
3. Load / transfer
4-Bit Arithmetic Circuit
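
The 4-bit arithmetic circuit usually drawn at this point combines a 4-bit adder with a multiplexer on its second input; the sketch below (an assumption about that standard textbook circuit, not a reproduction of the missing figure; names invented) shows how the select lines S1, S0 and the input carry Cin give all of the micro operations listed:

def arithmetic_circuit(a, b, s1, s0, cin):
    # One 4-bit arithmetic circuit: D = A + Y + Cin, where Y is B, B', 0 or 1111 depending on S1 S0
    y = {(0, 0): b,              # add (Cin=0) / add with carry (Cin=1)
         (0, 1): (~b) & 0xF,     # subtract with borrow (Cin=0) / subtract (Cin=1)
         (1, 0): 0x0,            # transfer A (Cin=0) / increment A (Cin=1)
         (1, 1): 0xF}[(s1, s0)]  # decrement A (Cin=0) / transfer A (Cin=1)
    return (a + y + cin) & 0xF   # keep the result to 4 bits (carry-out discarded)

a, b = 0b0110, 0b0011
print(f"{arithmetic_circuit(a, b, 0, 0, 0):04b}")   # A + B = 1001
print(f"{arithmetic_circuit(a, b, 0, 1, 1):04b}")   # A - B = 0011
print(f"{arithmetic_circuit(a, b, 1, 0, 1):04b}")   # A + 1 = 0111
print(f"{arithmetic_circuit(a, b, 1, 1, 0):04b}")   # A - 1 = 0101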
Explain Logic Micro Operations?
ANS:
►Specify binary operations on the strings of bits in registers
►Logic micro operations are bit-wise operations, i.e., they work on the individual bits of data
►useful for bit manipulations on binary data
►useful for making logical decisions based on the bit value
►There are, in principle, 16 different logic functions that can be defined over two binary input variables
►most systems only implement four of these
1. AND (∧)
2. OR (∨)
3. XOR (⊕)
4. Complement/NOT
Truth tables for 16 functions of 2 variables and the corresponding 16 logic Micro-Operations
Explain Arithmetic Logic Shift Unit (One Stage)?
ANS:
List Various Basic Computer Registers with a Diagram?
ANS:
The Basic Computer uses the following registers (the accompanying diagram shows them connected to the common bus):
AR (12 bits) – Address Register, holds the address for memory access
PC (12 bits) – Program Counter, holds the address of the next instruction
DR (16 bits) – Data Register, holds a memory operand
AC (16 bits) – Accumulator, the processor register on which operations are performed
IR (16 bits) – Instruction Register, holds the instruction code
TR (16 bits) – Temporary Register, holds temporary data
INPR (8 bits) – Input Register, holds an input character
OUTR (8 bits) – Output Register, holds an output character
List and Explain Various Addressing Modes?
ANS:
ADDRESSING MODES: The Different ways in which the location of an operand is specified in an instruction
is called as an ADDRESSING MODE
There are different kinds of Addressing Modes
1.Implied Addressing mode:
►In this mode the operands are specified in the definition of the instruction itself
►All Register reference instructions that use accumulator are implied mode instructions
►Zero Address instructions in a stack organized computer are implied mode instructions
►E.g. CMA (complement the accumulator)
It is an implied mode instruction because the operand in the accumulator register is implied in the
definition of the instruction
2.Immediate Addressing Mode:
►Operand is given explicitly in the instruction
►no memory reference to fetch data
►Fast instruction
E.g. ADD 5
Here ADD is the opcode and 5 is the operand, which is supplied with the instruction itself
3.Register Addressing Mode:
►Operand is Held in the Register named in the Address field
►Effective Address (EA) = R
►Fetches the instruction faster
►No Memory Access
►Very fast Execution
Register Addressing Diagram

Instruction:  | Opcode | Register address R |   →   register R holds the operand

4.Register Indirect Addressing Mode:


►In this mode the instruction specifies a register in the CPU whose contents give the address of the operand in
memory
►Effective Address (EA) = [R]
►The operand is in the memory cell pointed to by the contents of register R

Instruction:  | Opcode | Register address R |   →   register R holds a pointer to the operand in memory
5.Auto Increment Addressing Mode:
►This is similar to the register indirect mode except that the register is incremented after its value is used to
access memory

6.Auto Decrement Addressing Mode:


►This is similar to the register indirect mode except that the register is Decremented after its value is used to
access memory
7.Direct Addressing Mode:
►In this mode the EA is equal to the address part of the instruction
►The operand resides in memory and its address is given directly by the address field of the instruction

Direct Addressing Diagram

Instruction:  | Opcode | Address A |   →   memory location A holds the operand

8. Indirect Addressing Mode:

►In this mode the address field of the instruction gives the address where the EA is stored in memory

Indirect Addressing Diagram

Instruction:  | Opcode | Address A |   →   memory location A holds a pointer to the operand, which is read from
that second location
9. Relative Addressing Mode:
►In this mode the content of the PC is added to the address part of the instruction to obtain the EA
►I.e. EA (Effective Address) = Address part of the instruction + content of the Program Counter

10. Indexed Addressing Mode:


►In this mode the content of the Index Register is added to the address part of the instruction to obtain the EA
►I.e. EA (Effective Address) = Address part of the instruction + content of the Index Register

11. Base Register Addressing Mode:


►In this mode the content of the Base Register is added to the address part of the instruction to obtain the EA
►I.e. EA (Effective Address) = Address part of the instruction + content of the Base Register
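
The effective-address rules above can be collected into one small Python sketch (illustrative only; the memory dictionary, register arguments and function name are invented for the example):

def effective_address(mode, address_field, regs=None, pc=0, xr=0, base=0, memory=None):
    # Return the effective address (EA) for the addressing modes described above
    if mode == "direct":
        return address_field                    # EA = address part of the instruction
    if mode == "indirect":
        return memory[address_field]            # the address field points to where the EA is stored
    if mode == "register_indirect":
        return regs[address_field]              # EA = [R]
    if mode == "relative":
        return pc + address_field               # EA = address part + program counter
    if mode == "indexed":
        return xr + address_field               # EA = address part + index register
    if mode == "base":
        return base + address_field             # EA = address part + base register
    raise ValueError("implied/immediate/register modes need no effective address")

memory = {20: 75}                               # memory location 20 holds 75
print(effective_address("direct", 20))                     # 20
print(effective_address("indirect", 20, memory=memory))    # 75
print(effective_address("relative", 20, pc=100))           # 120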
What are Various Computer Instructions?
ANS:
As discussed under instruction codes above, the Basic Computer has three kinds of instructions: memory
reference instructions, register reference instructions, and input-output instructions. The individual
instructions of each kind are listed later in these notes.
Characteristics of RISC and CISC?
ANS:
Features of RISC Processors:
1. RISC processors use a small and limited number of instructions: RISC processors only support a small
number of primitive and essential instructions. This puts emphasis on software and compiler design due to the
relatively simple instruction set.

2. RISC machines mostly use a hardwired control unit: Most RISC processors are based on the hardwired
control unit design approach. In a hardwired control unit, fixed logic circuits are used to interpret instructions
and generate control signals from them. It is significantly faster than its microprogrammed counterpart but is
rather inflexible.

3. RISC processors consume less power and have high performance: RISC processors are typically heavily
pipelined; this ensures that the hardware resources of the processor are utilized to the maximum, giving higher
throughput while also consuming less power.

4. Each instruction is very simple and consistent: Most instructions in a RISC instruction set are very simple
and execute in one clock cycle.

5. RISC processors use simple addressing modes: RISC processors do not have many addressing modes, and
the addressing modes they do have are very simple. Most of the addressing modes are for register operands
and do not reference memory.

6. RISC instructions are of uniform, fixed length: The decision of RISC processor designers to provide simple,
register-based addressing modes leads to uniform-length instructions. By contrast, instruction length grows
when an operand may be in memory, because the memory address must be specified as part of the instruction
encoding, which takes many more bits and complicates instruction decoding and scheduling.

7. Large number of registers: The RISC design philosophy generally incorporates a larger number of registers
to reduce the amount of interaction with memory.

Features of CISC Processors:

1. CISC chips have complex instructions: A CISC processor would come prepared with a specific instruction
(call it "MULT"). When executed, this instruction loads the two values into separate registers, multiplies the
operands in the execution unit, and then stores the product in the appropriate register. Thus, the entire task of
multiplying two numbers (2, 3) can be completed with one instruction: MULT 2, 3
MULT is what is known as a "complex instruction." It operates directly on the computer's memory banks and
does not require the programmer to explicitly call any loading or storing functions. It closely resembles a
command in a higher level language.

2. CISC processors have a variety of instructions: There is a variety of instructions, many of which are
complex; this makes for smaller assembly code and thus lower RAM consumption.

3. CISC machines generally make use of complex addressing modes: CISC processors have a variety of
addressing modes in which operands can be addressed in memory as well as located in the different registers of
the CPU. Many instructions reference memory, in contrast to the RISC architecture.
4. CISC processors have variable length instructions: The decision of CISC processor designers to provide a
variety of addressing modes leads to variable-length instructions. For example, instruction length increases if an
operand is in memory as opposed to in a register.

5. Easier compiler design: Compilers have very little to do when generating code for a CISC architecture. The
complex instruction set and smaller assembly code mean little work for the compiler and thus ease compiler
design.

6. CISC machines use a microprogrammed control unit: CISC uses a microprogrammed control unit. Such a
unit consists of microprograms, which are series of microinstructions that control the CPU at a very
fundamental level of hardware circuitry. These are stored in a control memory such as ROM, from which the
CPU reads them and generates the control signals.

7. CISC processors have a limited number of registers: CISC processors normally have only a single set of
registers. Since the addressing modes make provision for memory operands, a limited amount of “costly”
register storage is sufficient.

Draw and Explain Instruction Cycle?


ANS:
In Basic Computer, a machine instruction is executed in the following cycle:
1 Fetch an instruction from memory
2 Decode the instruction
3 Read the effective address from memory if the instruction has an indirect address
4 Execute the instruction
5 After an instruction is executed, the cycle starts again at step 1, for the next instruction
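
The same cycle can be written as a loop. The sketch below (illustrative only; the memory dictionary and function name are invented, and the execute step is left out) follows the five steps for the Basic Computer instruction formats described earlier:

def run(memory, pc=0):
    # Skeleton of the instruction cycle above (a sketch, not the full Basic Computer)
    while True:
        instr = memory[pc]                                     # 1. fetch an instruction from memory
        pc += 1
        i_bit  = (instr >> 15) & 1                             # 2. decode the instruction
        opcode = (instr >> 12) & 0x7
        addr   = instr & 0xFFF
        if opcode != 0b111 and i_bit:                          # 3. indirect address: read the EA from memory
            addr = memory[addr]
        if opcode == 0b111 and i_bit == 0 and addr == 0x001:   # HLT (register reference) ends the run
            return
        # 4. execute the instruction using opcode and addr (omitted in this sketch)
        # 5. the loop then starts again at step 1 for the next instruction

run({0: 0x7001})   # a one-instruction program consisting only of HLT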
Draw and Explain Common Bus Architecture?
ANS:

• Three control lines, S2, S1, and S0, control which register the bus selects as its input
• Either one of the registers will have its load signal activated, or the memory will have its read signal
activated

List Register Reference Instructions?


ANS:
CLA ------ Clear AC
CLE ------ Clear E
CMA ------ Complement AC
CME ------ Complement E
CIR ------ Circulate right AC and E
CIL ------ Circulate left AC and E
INC ------ Increment AC
SPA ------ Skip next instr. if AC is positive
SNA ------ Skip next instr. if AC is negative
SZA ------ Skip next instr. if AC is zero
SZE ------ Skip next instr. if E is zero
HLT ------ Halt computer

List Memory Reference Instructions?


ANS:

AND ------ AND memory word to AC
ADD ------ Add memory word to AC
LDA ------ Load AC from memory
STA ------ Store content of AC into memory
BUN ------ Branch unconditionally
BSA ------ Branch and save return address
ISZ ------ Increment and skip if zero
What are Instruction Formats, Explain?
ANS:

| Mode | Opcode field | Address field |   ←-- Instruction Format

OP-code field - specifies the operation to be performed


Address field - designates memory address(es) or a processor register(s)
Mode field - determines how the address field is to be interpreted (to get EA or the operand)

►The number of address fields in the instruction format depends on the internal organization of CPU
Basically the Instruction Formats are
1. 3 Address Instruction
2. 2 Address Instruction
3. 1 Address Instruction
4. 0 Address Instruction
Three Address Instruction:
Program to evaluate X = (A + B) * (C + D)
ADD R1, A, B
ADD R2, C, D
MUL X, R1, R2
Results in short programs, but the instructions become long (many bits)

Two-Address Instructions:
Program to evaluate X = (A + B) * (C + D)
MOV R1, A
ADD R1, B
MOV R2, C
ADD R2, D
MUL R1, R2
MOV X, R1
One-Address Instructions:
Program to evaluate X = (A + B) * (C + D)
LOAD A
ADD B
STORE T
LOAD C
ADD D
MUL T
STORE X
Uses an implied AC register for all data manipulation
Zero Address Instruction:
Program to evaluate X = (A + B) * (C + D)
PUSH A
PUSH B
ADD
PUSH C
PUSH D
ADD
MUL
POP X
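
To make the zero-address case concrete, here is a small Python sketch of a stack machine evaluating X = (A + B) * (C + D) (illustrative only; the operand values and helper names are invented):

def run_stack_program(program, memory):
    # Execute zero-address (stack) instructions of the kind listed above
    stack = []
    for op, *arg in program:
        if op == "PUSH":
            stack.append(memory[arg[0]])          # push an operand from memory
        elif op == "POP":
            memory[arg[0]] = stack.pop()          # store the top of the stack back to memory
        elif op == "ADD":
            stack.append(stack.pop() + stack.pop())
        elif op == "MUL":
            stack.append(stack.pop() * stack.pop())
    return memory

memory = {"A": 2, "B": 3, "C": 4, "D": 5}
program = [("PUSH", "A"), ("PUSH", "B"), ("ADD",),
           ("PUSH", "C"), ("PUSH", "D"), ("ADD",),
           ("MUL",), ("POP", "X")]
print(run_stack_program(program, memory)["X"])    # (2 + 3) * (4 + 5) = 45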
Explain Memory Hierarchy in computer?
ANS:
 The memory unit is an essential component of a digital computer since it is needed for storing programs
and data.
 The memory unit that communicates directly with the CPU is called main memory.
 Devices that provide backup storage are called auxiliary memory.
 Only the programs and data currently needed by the processor reside in main memory.
 All other information is stored in auxiliary memory and transferred to main memory when needed.
 The memory hierarchy system consists of all storage devices, from auxiliary memory to main memory to
cache memory.
 As one goes down the hierarchy:
a. Cost per bit decreases.
b. Capacity increases.
c. Access time increases.
d. Frequency of access by the processor decreases.
What is MAIN MEMORY, Explain?
ANS:
 It is the memory used to store programs and data during computer operation.
 The principal technology is based on semiconductor integrated circuits.
 It consists of RAM and ROM chips.
 RAM chips are available in two forms, static and dynamic.
 ROM also uses the random access method.
 ROM is used for storing programs that are permanent and tables of constants that do not change.
 ROM stores a program called the bootstrap loader, whose function is to start the computer software when the
power is turned on.
 When the power is turned on, the hardware of the computer sets the program counter to the first address of
the bootstrap loader.
Differentiate SRAM & DRAM
SRAM                                                   DRAM

Uses flip-flops for storing information                Uses capacitors for storing information

Needs more space for the same capacity                 More cells per unit area due to smaller cell size

Expensive and bigger in size                           Cheap and smaller in size

Faster; a digital device                               Slower; an analog device

No refresh circuit is needed                           Requires a refresh circuit

Used in cache memory                                   Used in main memory


Explain Characteristics of memory?
ANS:
LOCATION: Internal (processor registers, main memory, cache), External (optical drives, magnetic tapes)
CAPACITY: number of words or number of bytes (MB, GB, TB)
UNIT OF TRANSFER: Word, Block
ACCESS METHOD: Sequential, Direct, Random, Associative
PERFORMANCE: Access time, cycle time, access rate
PHYSICAL TYPE: Semiconductor, magnetic, optical, magneto-optical
PHYSICAL CHARACTERISTICS: Volatile, non-volatile
ORGANISATION: Memory modules

Explain Cache Memory?


ANS:
 If the active portions of the program and data are placed in a fast, small memory, the average memory
access time can be reduced
 This reduces the total execution time of the program
 Such a fast, small memory is referred to as cache memory
 The cache is the fastest component in the memory hierarchy and approaches the speed of the CPU
components
 When the CPU needs to access memory, the cache is examined first
 If the word is found in the cache, it is read from the cache memory
 If the word addressed by the CPU is not found in the cache, main memory is accessed to read the word
 A block of words containing the one just accessed is then transferred from main memory to cache memory
 If the cache is full, an existing block is replaced, according to the replacement algorithm in use, to make
room for the new block

What is HIT Ratio?

 When the CPU refers to memory and finds the word in the cache, it is said to be a hit; if the word is not
found, it is a miss
 The performance of cache memory is frequently measured in terms of a quantity called the hit ratio
 Hit Ratio = Hits / (Hits + Misses)
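For example (a worked illustration, not from the original notes): if 950 out of 1000 memory references are found in the cache, the hit ratio is 950 / (950 + 50) = 0.95, so 95% of the references are served at cache speed.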
What is Associative Mapping? Set associative Mapping? Direct
Mapping?

Associative Mapping:
 In the associative-mapping example, the 15-bit address is expressed as a 5-digit octal number and the 12-bit
data word as a 4-digit octal number; the associative memory stores both the address and its data word
 The 15-bit CPU address is placed in the argument register and searched for among the stored addresses
 If the address is found, the corresponding 12-bit data word is read and sent to the CPU
 If it is not found, main memory is accessed for the word, and the address–data pair is then stored in the
cache; any location in the associative cache may be used to store a word from main memory

Direct Mapping:
 Here the n-bit memory address is divided into 2 fields:
1. k bits for the index
2. n-k bits for the tag
 For a single index value there can exist only a single tag value in the cache at a time

Set Associative Mapping:

 The disadvantage of direct mapping is that two words with the same index in their addresses but with
different tag values cannot reside in the cache memory at the same time
 Set-associative mapping is an improvement over direct mapping in that each word of cache can store two
or more words of memory under the same index address
 Associative memories are expensive compared to RAM because of the added logic associated with each cell
 In the example, the CPU address of 15 bits is divided into 2 fields:
a. Index
b. Tag
 The 9 least significant bits constitute the index field and the remaining 6 bits the tag field

 The figure shows that main memory needs an address that includes both the tag and the index bits
 The number of bits in the index field is equal to the number of address bits required to access the cache
memory
 The direct-mapping cache organization uses the n-bit address to access main memory and the k-bit index to
access the cache memory
 The internal organization of the cache memory is shown in the accompanying figure
 Each word in the cache consists of the data word and its associated tag; when a new word is first brought
into the cache, the tag bits are stored alongside the data bits
 When the CPU generates a memory request, the index field is used as the address to access the cache
 The tag field of the CPU address is compared with the tag field of the cache word
 If the two tags match, there is a hit and the desired word is in cache memory
 If the two tags do not match, there is a miss and the CPU refers to main memory
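
The direct-mapping lookup just described (9-bit index, 6-bit tag out of a 15-bit address) can be sketched in Python as follows (illustrative only; the class, its fields and the sample address are invented):

class DirectMappedCache:
    # Direct-mapped cache lookup as described above: 15-bit address = 6-bit tag + 9-bit index
    def __init__(self, index_bits=9):
        self.index_bits = index_bits
        self.lines = {}                                      # index -> (tag, data word)

    def read(self, address, main_memory):
        index = address & ((1 << self.index_bits) - 1)       # 9 least significant bits
        tag   = address >> self.index_bits                   # remaining 6 bits
        if index in self.lines and self.lines[index][0] == tag:
            return self.lines[index][1], "hit"               # tags match: the word is in the cache
        data = main_memory[address]                          # miss: read the word from main memory ...
        self.lines[index] = (tag, data)                      # ... and store it in the cache with its tag
        return data, "miss"

main_memory = {0o02777: 0o6710}                              # octal address and data, chosen arbitrarily
cache = DirectMappedCache()
print(cache.read(0o02777, main_memory))                      # miss the first time ...
print(cache.read(0o02777, main_memory))                      # ... then a hit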

Explain Virtual Memory?

 In a memory hierarchy system, programs and data are first stored in auxiliary memory.
 Portions of a program or data are brought into main memory as they are needed by the CPU.
 Virtual memory is a concept used in some large computer systems that permit the user to construct
programs as though a large memory space were available, equal to the totality of auxiliary memory.
 Each address that is referenced by the CPU goes through an address mapping from the so-called virtual
address to a physical address in main memory.
 Virtual memory is used to give programmers the illusion that they have a very large memory at their
disposal, even though the computer actually has a relatively small main memory.
 A virtual memory system provides a mechanism for translating program-generated addresses into correct
main memory locations.
 This is done dynamically, while programs are being executed in the CPU.
 The translation or mapping is handled automatically by the hardware by means of a mapping table.
 An address used by a programmer will be called a virtual address, and the set of such addresses the address
space.
 An address in main memory is called a location or physical address. The set of such locations is called
the memory space.
 Thus the address space is the set of addresses generated by programs as they reference instructions and
data.
 the memory space consists of the actual main memory locations directly addressable for processing.
 In most computers the address and memory spaces are identical.
 The address space is allowed to be larger than the memory space in computers with virtual memory

Explain the Page Replacement Algorithm?


 A virtual memory system is a combination of hardware and software techniques.
 The memory management software system handles all the software operations for the efficient
utilization of memory space. It must decide
1. which page in main memory ought to be removed to make room for a new page
2. when a new page is to be transferred from auxiliary memory to main memory
3. where the page is to be placed in main memory.
 The hardware mapping mechanism and the memory management software together constitute the
architecture of a virtual memory.
 When a program starts execution, one or more pages are transferred into main memory and the page
table is set to indicate their position.
 The program is executed from main memory until it attempts to reference a page that is still in auxiliary
memory.
 This condition is called page fault. When page fault occurs, the execution of the present program is
suspended until the required page is brought into main memory.
 Since loading a page from auxiliary memory to main memory is basically an IO operation, the operating
system assigns this task to the IO processor.
 In the meantime, control is transferred to the next program in memory that is waiting to be processed in
the CPU.
 Later, when the memory block has been assigned and the transfer completed, the original program can
resume its operation.
 When a page fault occurs in a virtual memory system, it signifies that the page referenced by the CPU is
not in main memory.
 A new page is then transferred from auxiliary memory to main memory.
 If main memory is full, it would be necessary to remove a page from a memory block to make room for
the new page.
 The policy for choosing pages to remove is determined from the replacement algorithm that is used.
 The goal of a replacement policy is to try to remove the page least likely to be referenced in the
immediate future.
 Two of the most common replacement algorithms used
a) first-in(FIFO)
b) Least Recently Used(LRU)

Explain FIFO and LRU Algorithms?

FIFO:
 The FIFO algorithm selects for replacement the page that has been in memory the longest time.
 Each time a page is loaded into memory, its identification number is pushed into a FIFO stack.
 FIFO will be full whenever memory has no more empty blocks.
 When a new page must be loaded, the page least recently brought in is removed.
 The page to be removed is easily determined because its identification number is at the top of the FIFO
stack.
 The FIFO replacement policy has the advantage of being easy to implement.
 It has the disadvantage that under certain circumstances pages are removed and loaded from memory too
frequently.

LRU:
 The LRU policy is more difficult to implement but has been more attractive on the assumption that the
least recently used page is a better candidate for removal than the least recently loaded page as in FIFO.
 The LRU algorithm can be implemented by associating a counter with every page that is in main
memory.
 When a page is referenced, its associated counter is set to zero.
 At fixed intervals of time, the counters associated with all pages presently in memory are incremented
by 1.
 The least recently used page is the page with the highest count.
 The counters are often called aging registers, as their count indicates their age, that is, how long ago
their associated pages have been referenced.
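
Both policies are easy to simulate. The Python sketch below (illustrative only; the reference string and function names are invented) counts page faults for FIFO and LRU on the same sequence of page references with 3 memory frames:

from collections import OrderedDict, deque

def fifo_faults(references, frames):
    # FIFO: replace the page that has been in memory the longest
    memory, queue, faults = set(), deque(), 0
    for page in references:
        if page not in memory:
            faults += 1
            if len(memory) == frames:
                memory.remove(queue.popleft())   # evict the page loaded earliest
            memory.add(page)
            queue.append(page)
    return faults

def lru_faults(references, frames):
    # LRU: replace the page that has not been referenced for the longest time
    memory, faults = OrderedDict(), 0
    for page in references:
        if page in memory:
            memory.move_to_end(page)             # a reference makes the page most recently used
        else:
            faults += 1
            if len(memory) == frames:
                memory.popitem(last=False)       # evict the least recently used page
            memory[page] = True
    return faults

refs = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2]
print(fifo_faults(refs, 3), lru_faults(refs, 3))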

Explain Briefly about RAID?


RAID stands for Redundant Array of Inexpensive Disks (or Redundant Array of Independent Disks).
 RAID works by placing data on multiple disks and allowing input/output (I/O) operations to overlap in a
balanced way, improving performance.
 Storing data redundantly increases fault tolerance, even though simply using more disks lowers the mean
time between failures (MTBF) of the drive set.
 RAID arrays appear to the operating system (OS) as a single logical hard disk.
 RAID employs the techniques of disk mirroring or disk striping.
 Mirroring copies identical data onto more than one drive.
 Striping partitions each drive's storage space into units ranging from a sector
(512 bytes) up to several megabytes.
 The stripes of all the disks are interleaved and addressed in order.
RAID 0:
 This configuration has striping, but no redundancy of data.
 It offers the best performance, but no fault tolerance.
RAID 1:
 Also known as disk mirroring
 This configuration consists of at least two drives that duplicate the storage of data.
 There is no striping.
 Read performance is improved since either disk can be read at the same time.
 Write performance is the same as for single disk storage.
RAID 2:
 This configuration uses striping across disks, with some disks storing error
checking and correcting (Parity) information.
 It has no advantage over RAID 3 and is no longer used.

RAID 3:
 This technique uses striping and dedicates one drive to storing parity
information.
 The embedded Parity information is used to detect errors.
 Data recovery is done by calculating the XOR of the information
recorded on the other drives.
 Since an I/O operation addresses all the drives at the same time, RAID 3
cannot overlap I/O.
 For this reason, RAID 3 is best for single-user systems with long record
applications.

RAID 4:
 This level uses large stripes, which means you can read records
from any single drive.
 This allows you to use overlapped I/O for read operations.
 Since all write operations have to update the parity drive, no
I/O overlapping is possible.
 RAID 4 offers no advantage over RAID 5.

RAID 5:
 This level is based on block-level striping with parity.
 The parity information is striped across each drive, allowing the array to function even if one drive were
to fail.
 The array's architecture allows read and write operations to span multiple drives.
 This results in performance that is usually better than that
of a single drive, but not as high as that of a RAID 0 array.
 RAID 5 requires at least three disks, but it is often
recommended to use at least five disks for performance
reasons.
 RAID 5 arrays are generally considered to be a poor choice
for use on write-intensive systems because of the
performance impact associated with writing parity
information.
 When a disk does fail, it can take a long time to rebuild a
RAID 5 array.
 Performance is usually degraded during the rebuild time, and the array is vulnerable to an additional
disk failure until the rebuild is complete.
RAID 6:
 This technique is similar to RAID 5, but includes
a second parity scheme that is distributed across
the drives in the array.
 The use of additional parity allows the array to
continue to function even if two disks fail
simultaneously.
 However, this extra protection comes at a cost.
 RAID 6 arrays have a higher cost per gigabyte
(GB) and often have slower write performance
than RAID 5 arrays.

List the benefits of RAID?


 Performance, resiliency and cost are among the major benefits of RAID.
 By putting multiple hard drives together, RAID can improve on the work of a single hard drive and,
depending on how it is configured, can increase computer speed and reliability after a crash.
 With RAID 0, files are split up and distributed across drives that work together on the same file.
 As such, reads and writes can be performed faster than with a single drive.
 RAID 5 arrays break data into sections and also store parity information across the drives.
 When one drive fails, the parity information can be used to reconstruct what was on the failed drive.
 This function allows RAID to provide increased availability.
 With mirroring, RAID arrays can have two drives containing the same data, ensuring one will continue
to work if the other fails.
What are Peripheral Devices?

Basically peripheral Devices are of 2 kinds


1. Input Devices
2. Output Devices
Input Devices:
 Keyboard
 Optical input devices
   - Card Reader
   - Paper Tape Reader
   - Bar code reader
   - Optical Mark Reader
 Magnetic Input Devices
   - Magnetic Stripe Reader
 Screen Input Devices
   - Touch Screen
   - Mouse
 Analog Input Devices

Output Devices:
 Card Puncher, Paper Tape Puncher
 CRT
 Printer (Impact, Ink Jet, Laser, Dot Matrix)
 Plotter
 Analog
 Voice
Differentiate Isolated I/O to Memory mapped I/O?

Isolated I/O:
 Separate I/O read/write control lines in addition to the memory read/write control lines
 Separate (isolated) memory and I/O address spaces
 Distinct input and output instructions

Memory-mapped I/O:
 A single set of read/write control lines (no distinction between memory and I/O transfer)
 Memory and I/O addresses share the common address space, which reduces the memory address range available
 No specific input or output instructions; the same memory reference instructions can be used for I/O transfers
 Considerable flexibility in handling I/O operations

Differentiate Hard Wired Control and Micro Program Control?


Hardwired Control                                         Microprogrammed Control
Technology is circuit based.                              Technology is software based.
Implemented through flip-flops, gates, decoders etc.      Microinstructions generate the signals that control the execution of instructions.
Fixed instruction format.                                 Variable instruction format (16-64 bits per instruction).
Instructions are register based.                          Instructions are not register based.
ROM is not used.                                          ROM is used.
Used in RISC.                                             Used in CISC.
Faster decoding.                                          Slower decoding.
Difficult to modify.                                      Easily modified.
Chip area is less.                                        Chip area is large.

What is Asynchronous Data Transfer and explain supporting methods?

ASYNCHRONOUS DATA TRANSFER:


 Asynchronous data transfer means that the data transfer is not dependent on any clock pulses
 Asynchronous data transfer between two independent units requires that control signals be transmitted
between the communicating units to indicate the time at which data is being transmitted
 There are 2 kinds of asynchronous data transfer methods
a) Strobe pulse
 A strobe pulse is supplied by one unit to indicate the other unit when the transfer has to occur

b) Handshaking
 A control signal accompanies each data item being transmitted to indicate the presence of data on the bus;
the receiving unit responds with another control signal to acknowledge receipt of the data
Strobe Control:
* Employs a single control line to time each transfer
* The strobe may be activated by either the source or the destination unit

Problems in Strobe control:

 For a source-initiated strobe, the source unit that initiates the transfer has no way of knowing whether the
destination unit has actually received the data
 For a destination-initiated strobe, the destination unit that initiates the transfer has no way of knowing
whether the source has actually placed the data on the bus or not
Handshaking
 The Handshaking overcomes the problems of the Strobe signal
 Allows arbitrary delays from one state to the next
 Permits each unit to respond at its own data transfer rate
 The rate of transfer is determined by the slower unit
Source Initiated Handshake Destination Initiated Handshake
Draw and Explain Universal Asynchronous Receiver-Transmitter UART?

 Transmitter Register
 Accepts a data byte (from the CPU) through the data bus
 Transferred to a shift register for serial transmission
 Receiver
 Receives serial information into another shift register
 Complete data byte is sent to the receiver register
 Status Register Bits
 Used for I/O flags and for recording errors
 Control Register Bits
 Define baud rate, no. of bits in each character, whether to generate and check parity, and no. of stop bits

Draw and Explain DMA?

Direct memory access (DMA) is a method that allows an input/output (I/O) device to send or receive data
directly to or from the main memory, bypassing the CPU, to speed up memory operations. The process is
managed by a chip known as a DMA controller.

To transfer data between memory and I/O devices, the DMA controller takes over control of the system from
the processor, and the transfer of data takes place over the system bus. For this purpose, the DMA controller
must use the bus only when the processor does not need it, or it must force the processor to suspend operation
temporarily. The latter technique is more common and is referred to as cycle stealing, because the DMA
module in effect steals a bus cycle.
Explain Instruction Pipelining

In general, the execution of an instruction goes through the following phases:


[1] Fetch an instruction from memory
[2] Decode the instruction
[3] Calculate the effective address of the operand
[4] Fetch the operands from memory
[5] Execute the operation
[6] Store the result in the proper place
* Some instructions skip some phases
* Effective address calculation can be done in the part of
the decoding phase
* Storage of the operation result into a register is done
automatically in the execution phase
==> 4-Stage Pipeline
[1] FI: Fetch an instruction from memory
[2] DA: Decode the instruction and calculate the effective address of the operand
[3] FO: Fetch the operand
[4] EX: Execute the operation

Flow Chart for Instruction pipeline
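
To see how the four stages overlap in time, the following Python sketch (illustrative only; the instruction labels and table layout are invented) prints the space-time diagram of the 4-stage pipeline for a few instructions:

STAGES = ["FI", "DA", "FO", "EX"]

def space_time_diagram(n_instructions):
    # Print which stage each instruction occupies in every clock cycle of the 4-stage pipeline
    total_cycles = n_instructions + len(STAGES) - 1
    print("cycle:" + "".join(f"{c:>5}" for c in range(1, total_cycles + 1)))
    for i in range(n_instructions):
        row = []
        for cycle in range(1, total_cycles + 1):
            stage = cycle - 1 - i                    # instruction i enters stage FI at cycle i + 1
            row.append(STAGES[stage] if 0 <= stage < len(STAGES) else "")
        print(f"I{i + 1}:  " + "".join(f"{s:>5}" for s in row))

space_time_diagram(4)   # 4 instructions complete in 4 + (4 - 1) = 7 cycles instead of 16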


Explain RISC Pipeline Architecture

RISC
 Machine with a very fast clock cycle that executes at the rate of one instruction per cycle
 Simple Instruction Set
 Fixed Length Instruction Format
 Register-to-Register Operations

Data Manipulation Instructions


I: Instruction Fetch
A: Decode, Read Registers, ALU Operations
E: Write a Register
Load and Store Instructions
I: Instruction Fetch
A: Decode, Evaluate Effective Address
E: Register-to-Memory or Memory-to-Register
Program Control Instructions
I: Instruction Fetch
A: Decode, Evaluate Branch Address
E: Write Register(PC)

Booth Multiplication Algorithm?

Booth’s algorithm is used for the signed multiplication of binary numbers represented in two’s complement
form. At each step it inspects the current multiplier bit together with the previously inspected bit and,
depending on the pair, adds the multiplicand, subtracts it, or does nothing before arithmetically shifting the
partial product to the right.

The flowchart for the algorithm is shown →
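
Since the flowchart itself is not reproduced here, the following Python sketch (the function and variable names are invented) carries out the same steps, assuming n-bit two's complement operands:

def booth_multiply(m, q, n=8):
    # Booth's algorithm: multiply two n-bit two's complement integers m (multiplicand) and q (multiplier)
    mask = (1 << n) - 1
    A, Q, q_1 = 0, q & mask, 0              # accumulator, multiplier register and the extra bit Q(-1)
    M = m & mask
    for _ in range(n):
        pair = (Q & 1, q_1)
        if pair == (1, 0):                  # bit pair 1 0 -> A = A - M
            A = (A - M) & mask
        elif pair == (0, 1):                # bit pair 0 1 -> A = A + M
            A = (A + M) & mask
        # bit pairs 0 0 and 1 1 -> no addition or subtraction, only the shift
        q_1 = Q & 1                         # the bit shifted out of Q becomes the new Q(-1)
        combined = (A << n) | Q
        sign = A >> (n - 1)                 # arithmetic shift right of the pair (A, Q)
        combined = (combined >> 1) | (sign << (2 * n - 1))
        A, Q = combined >> n, combined & mask
    product = (A << n) | Q
    if product >= 1 << (2 * n - 1):         # interpret the 2n-bit result as a signed number
        product -= 1 << (2 * n)
    return product

print(booth_multiply(-7, 3))    # -21
print(booth_multiply(6, -5))    # -30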
