 15  14     12  11          0
+---+--------+---------------+
| I | Opcode |    Address    |   Memory-reference instruction format
+---+--------+---------------+
  ↑
  Addressing mode
What are Instruction Formats?
ANS:
The Basic Computer has three instruction code formats.
Each format is 16 bits long.
The opcode part consists of 3 bits, and the meaning of the remaining 13 bits depends on the opcode.
A memory-reference instruction uses 12 bits to specify the address and 1 bit to specify the addressing mode I.
I is equal to 0 for direct addressing and 1 for indirect addressing.
The register-reference instructions are recognized by the opcode 111 with a 0 in the leftmost bit (bit 15) of the instruction.
A register-reference instruction specifies an operation on the AC register.
The other 12 bits are used to specify the operation to be executed.
An input-output instruction does not need a reference to memory and is recognized by the operation code 111 with a 1 in the leftmost bit (bit 15) of the instruction.
The other 12 bits are used to specify the type of input-output operation performed.
If the 3 opcode bits in positions 12 through 14 are not equal to 111, the instruction is a memory-reference type of instruction and bit position 15 is taken as the addressing-mode bit I.
If the 3-bit opcode is equal to 111, control then inspects the bit in position 15.
If the bit is 0, the instruction is register-reference.
If the bit is 1, the instruction is input-output.
 15     12  11              0
+---------+------------------+
| 0 1 1 1 | Register operation |   Register-reference instruction (opcode = 111, I = 0)
+---------+------------------+

 15     12  11              0
+---------+------------------+
| 1 1 1 1 |  I/O operation    |   Input-output instruction (opcode = 111, I = 1)
+---------+------------------+
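The decoding rules above can be sketched in code. This is an illustrative Python sketch (not part of the original notes); the field names are assumptions, but the bit positions follow the formats described above.

```python
def classify(word):
    """Classify a 16-bit Basic Computer instruction word."""
    opcode = (word >> 12) & 0b111   # bits 14-12
    bit15 = (word >> 15) & 1        # bit 15: I bit / format selector
    low12 = word & 0xFFF            # bits 11-0
    if opcode != 0b111:
        # memory-reference: bit 15 is the addressing-mode bit I
        mode = "indirect" if bit15 else "direct"
        return ("memory-reference",
                {"opcode": opcode, "I": bit15, "address": low12, "mode": mode})
    if bit15 == 0:
        return ("register-reference", {"operation_bits": low12})
    return ("input-output", {"operation_bits": low12})

# opcode 010 with I = 1: a memory-reference instruction, indirect mode
assert classify(0b1_010_000000000101)[0] == "memory-reference"
# opcode 111, bit 15 = 0: register-reference
assert classify(0b0_111_000000000001)[0] == "register-reference"
# opcode 111, bit 15 = 1: input-output
assert classify(0b1_111_000000000001)[0] == "input-output"
```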
Explain 4-Bit Bus structure with Diagram?
ANS:
Function table:

S1  S0 | Register selected
 0   0 | A
 0   1 | B
 1   0 | C
 1   1 | D
When the two selection lines are S1 S0 = 0,0, input 0 of every multiplexer is selected, so the bit on input 0 of each multiplexer is transferred to its output. For multiplexer 0 the four inputs come from bit 0 of the four registers (A0, B0, C0, D0), and multiplexers 1, 2, and 3 likewise take bits 1, 2, and 3 of each register.
With S1 S0 = 0,0 each multiplexer passes its register-A bit, so the content of register A is transferred to the bus.
If S1, S0 values are (0,1), input 1 of every multiplexer is selected and the content of register B is transferred to the bus; the remaining selection values work the same way for registers C and D.
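As a sketch of this idea (illustrative Python, with made-up register contents), each 4×1 multiplexer picks the same input number, so the selected register's four bits appear together on the bus:

```python
def bus_output(s1, s0, A, B, C, D):
    """4-bit bus built from four 4x1 multiplexers, one per bit position.
    Each register is a list of 4 bits; MUX i takes bit i of each register
    on its inputs 0-3, so selecting input k routes register k to the bus."""
    registers = [A, B, C, D]          # inputs 0, 1, 2, 3 of every MUX
    selected = registers[(s1 << 1) | s0]
    return selected                   # the four MUX outputs form the bus

A, B = [1, 0, 1, 0], [0, 1, 1, 0]     # made-up register contents
C, D = [1, 1, 0, 0], [0, 0, 0, 1]
assert bus_output(0, 0, A, B, C, D) == A  # S1 S0 = 00 -> register A on the bus
assert bus_output(0, 1, A, B, C, D) == B  # S1 S0 = 01 -> register B
```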
Explain INTEL’s Evolution of X86 Architecture?
ANS:
The x86 incorporates the sophisticated design principles once found only on mainframes and supercomputers and serves as an excellent example of CISC design. An alternative approach to processor design is the reduced instruction set computer (RISC). The ARM architecture is used in a wide variety of embedded systems and is one of the most powerful and best-designed RISC-based systems on the market.
8080:
The world’s first general-purpose microprocessor. This was an 8-bit machine, with an 8-bit data path to
memory. The 8080 was used in the first personal computer, the Altair.
8086:
A far more powerful, 16-bit machine. In addition to a wider data path and larger registers, the 8086 sported an
instruction cache, or queue, that pre-fetches a few instructions before they are executed. A variant of this
processor, the 8088, was used in IBM’s first personal computer, securing the success of Intel. The 8086 is the
first appearance of the x86 architecture.
80286:
This extension of the 8086 enabled addressing a 16-MByte memory instead of just 1 M-Byte.
80386:
Intel’s first 32-bit machine. With a 32-bit architecture, the 80386 rivaled the complexity and power of minicomputers and mainframes introduced just a few years earlier.
This was the first Intel processor to support multitasking, meaning it could run multiple programs at the
same time.
80486:
The 80486 introduced the use of much more sophisticated and powerful cache technology and sophisticated
instruction pipelining. The 80486 also offered a built-in math coprocessor, offloading complex math operations
from
the main CPU.
Pentium:
With the Pentium, Intel introduced the use of superscalar techniques, which allow multiple instructions to
execute in parallel.
Pentium Pro:
The Pentium Pro continued the move into superscalar organization begun with the Pentium, with aggressive use
of register renaming, branch prediction, data flow analysis, and speculative execution.
Pentium II:
The Pentium II incorporated Intel MMX technology, which is designed specifically to process video, audio, and
graphics data efficiently
Pentium III:
The Pentium III incorporates additional floating-point instructions to support 3D graphics software.
Pentium 4:
The Pentium 4 includes additional floating-point and other enhancements for multimedia.
Core:
This is the first Intel x86 microprocessor with a dual core, referring to the implementation of two processors on
a single chip.
Core 2:
The Core 2 extends the architecture to 64 bits. The Core 2 Quad provides four processors on a single chip.
SELECTIVE SET: The selective-set logic micro-operation sets to 1 the bits in register A where there are corresponding 1’s in register B. It does not affect the bit positions that have 0’s in B.
It is the same as the logical “OR” operation, i.e. A ← A ∨ B.
E.g.
A 1010
B 1100
A 1110 (result)
SELECTIVE COMPLEMENT: The selective-complement micro-operation complements the bits in register A where there are corresponding 1’s in register B. It does not affect the bit positions that have 0’s in B.
It is the same as the logical “XOR” operation, i.e. A ← A ⊕ B.
E.g.
A 1010
B 1100
A 0110 (result)
SELECTIVE CLEAR: The selective-clear micro-operation clears to 0 the bits in register A where there are corresponding 1’s in register B. It does not affect the bit positions that have 0’s in B.
It is the same as the logical operation A ← A ∧ B′.
E.g.
A 1010
B 1100
A 0010 (result)
CLEAR: The clear micro-operation compares the words in A and B and produces an all-zeros result if the values in the two registers are equal. This is achieved with the XOR operation, i.e. A ← A ⊕ B.
E.g.
A 1010
B 1010
A 0000 (result)
INSERT: This operation inserts a new value into a group of bits. It is done by first masking (clearing) the target bits with an AND, then doing an OR operation with the required value.
E.g.
Question: A = 0110 1010; insert 1111 into the rightmost 4 bits.
Then     A   0110 1010
Mask (AND)   1111 0000
         A   0110 0000
OR with      0000 1111
Result   A   0110 1111
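The logic micro-operations above map directly onto bitwise operators. A minimal Python sketch (not from the notes) using the same 4-bit examples:

```python
def selective_set(a, b):        return a | b          # A <- A v B
def selective_complement(a, b): return a ^ b          # A <- A xor B
def selective_clear(a, b):      return a & ~b & 0xF   # A <- A . B'

A, B = 0b1010, 0b1100
assert selective_set(A, B)        == 0b1110
assert selective_complement(A, B) == 0b0110
assert selective_clear(A, B)      == 0b0010

# insert: clear the low 4 bits with an AND mask, then OR in the new value
A8 = 0b0110_1010
masked = A8 & 0b1111_0000
assert masked | 0b0000_1111 == 0b0110_1111
```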
Explain Arithmetic Micro Operations?
1. Addition
2. Subtraction
3. Increment
4. Decrement
Additional arithmetic micro-operations are
1. Add with carry
2. Subtract with borrow
3. Load / transfer
4-Bit Arithmetic Circuit
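A rough Python model of these micro-operations on a 4-bit word (an illustrative sketch; the 4-bit width and the masking are assumptions made to mirror the 4-bit arithmetic circuit):

```python
MASK = 0xF  # keep results inside 4 bits

def add(a, b, cin=0):
    """Add / add with carry: returns (4-bit sum, carry-out)."""
    s = a + b + cin
    return s & MASK, (s >> 4) & 1

def sub(a, b, borrow=0):
    """Subtract / subtract with borrow, via 2's complement."""
    return (a + ((~b) & MASK) + 1 - borrow) & MASK

def increment(a): return (a + 1) & MASK
def decrement(a): return (a - 1) & MASK

s, c = add(0b1010, 0b0110)             # 10 + 6 = 16: sum wraps to 0, carry 1
assert (s, c) == (0, 1)
assert sub(0b0101, 0b0011) == 0b0010   # 5 - 3 = 2
assert increment(0b1111) == 0          # wraps around at 4 bits
```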
Explain Logic Micro Operations?
ANS:
►Specify binary operations on the strings of bits in registers
►Logic micro operations are bit-wise operations, i.e., they work on the individual bits of data
►useful for bit manipulations on binary data
►useful for making logical decisions based on the bit value
►There are, in principle, 16 different logic functions that can be defined over two binary input variables
►most systems only implement four of these
1. AND (∧)
2. OR (∨)
3. XOR (⊕)
4. Complement/NOT
Truth tables for 16 functions of 2 variables and the corresponding 16 logic Micro-Operations
Explain Arithmetic Logic Shift Unit (One Stage)?
ANS:
List Various Basic Computer Registers with a Diagram?
ANS:
List and Explain Various Addressing Modes?
ANS:
ADDRESSING MODES: The different ways in which the location of an operand is specified in an instruction are called ADDRESSING MODES.
There are different kinds of Addressing Modes
1.Implied Addressing mode:
►In this mode the operands are specified in the definition of the instruction itself
►All Register reference instructions that use accumulator are implied mode instructions
►Zero Address instructions in a stack organized computer are implied mode instructions
►E.g. CMP AC
It is an implied mode instruction because the operand in the accumulator register is implied in the
definition of the instruction
2.Immediate Addressing Mode:
►Operand is given explicitly in the instruction
►no memory reference to fetch data
►Fast instruction
E.g. ADD 5
Here ADD is the opcode and 5 is the operand, which is supplied with the instruction itself
3.Register Addressing Mode:
►Operand is Held in the Register named in the Address field
►Effective Address (EA) = R
►Fetches the instruction faster
►No Memory Access
►Very fast Execution
[Diagram: Register addressing — the address field names the register that holds the operand]
4.Register Indirect Addressing Mode:
►The register named in the address field holds the address of the operand, i.e. a pointer to the operand in memory
►EA = content of register R
[Diagram: Register indirect addressing — register holds a pointer to the operand in memory]
5.Auto Increment Addressing Mode:
►This is similar to the register indirect mode except that the register is incremented after its value is used to
access memory
[Diagram: Auto-increment addressing — the register holds a pointer to the operand; the register is incremented after the access]
7.Relative Addressing Mode:
►In this mode the content of the PC is added to the address part of the instruction to obtain the EA
►I.e. EA (Effective Address) = Address part of the instruction + content of the Program Counter
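A few of these effective-address calculations can be sketched in Python (illustrative only; the register names, memory contents, and PC value are made up):

```python
memory = {100: 555, 200: 100}   # made-up main memory
R = {"R1": 200}                 # made-up register file
PC = 50

def ea_register_indirect(reg):
    """Register indirect: EA = content of the named register."""
    return R[reg]

def ea_autoincrement(reg):
    """Auto-increment: use the register's value, then increment it."""
    ea = R[reg]
    R[reg] += 1
    return ea

def ea_relative(addr_field, pc):
    """Relative: EA = address part of the instruction + content of PC."""
    return addr_field + pc

assert ea_register_indirect("R1") == 200
assert ea_autoincrement("R1") == 200 and R["R1"] == 201
assert ea_relative(10, PC) == 60
```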
2. RISC machines mostly use a hardwired control unit: Most RISC processors are based on the hardwired control-unit design approach. A hardwired control unit uses fixed logic circuits to interpret instructions and generate control signals from them. It is significantly faster than its microprogrammed counterpart but rather inflexible.
3. RISC processors consume less power and have high performance: RISC processors are heavily pipelined, which ensures that the hardware resources of the processor are utilized to the maximum, giving higher throughput while consuming less power.
4. Each instruction is very simple and consistent: Most instructions in a RISC instruction set are very simple and execute in one clock cycle.
5. RISC processors use simple addressing modes: RISC processors do not have many addressing modes, and the ones they do have are very simple. Most addressing modes are for register operands and do not reference memory.
6. RISC instructions are of uniform, fixed length: The decision of RISC processor designers to provide simple addressing modes leads to uniform-length instructions. By contrast, instruction length increases if an operand is in memory as opposed to in a register, because the memory address must be specified as part of the instruction encoding, which takes many more bits; this complicates instruction decoding and scheduling.
7. Large number of registers: The RISC design philosophy generally incorporates a larger number of registers to reduce the number of interactions with memory.
1. CISC chips have complex instructions: A CISC processor would come prepared with a specific instruction
(call it "MULT"). When executed, this instruction loads the two values into separate registers, multiplies the
operands in the execution unit, and then stores the product in the appropriate register. Thus, the entire task of
multiplying two numbers (2, 3) can be completed with one instruction: MULT 2, 3
MULT is what is known as a "complex instruction." It operates directly on the computer's memory banks and
does not require the programmer to explicitly call any loading or storing functions. It closely resembles a
command in a higher level language.
2. CISC processors have a variety of instructions: There is a variety of instructions, many of which are complex; this makes for smaller assembly code and thus lower RAM consumption.
3. CISC machines generally make use of complex addressing modes: CISC processors have a variety of addressing modes in which operands can be addressed in memory as well as located in the different registers of the CPU.
Many instructions reference memory, as opposed to the RISC architecture.
4. CISC processors have variable-length instructions: The decision of CISC processor designers to provide a variety of addressing modes leads to variable-length instructions. For example, instruction length increases if an operand is in memory as opposed to in a register.
5. Easier compiler design: Compilers have very little to do when generating code for a CISC architecture. The complex instruction set and smaller assembly code mean less work for the compiler and thus ease compiler design.
6. CISC machines use a microprogrammed control unit: These systems contain microprograms, which are series of microinstructions that control the CPU at a very fundamental level of the hardware circuitry. The microprograms are stored in a control memory such as ROM, from which the CPU fetches them and generates control signals.
7. CISC processors have a limited number of registers: CISC processors normally have only a single set of registers. Since the addressing modes make provision for memory operands, a limited amount of “costly” register storage is sufficient.
AND ------ AND memory word to AC
ADD ------ Add memory word to AC
LDA ------ Load AC from memory
STA ------ Store content of AC into memory
BUN ------ Branch unconditionally
BSA ------ Branch and save return address
ISZ ------ Increment and skip if zero
What are Instruction Formats, Explain?
ANS:
►The number of address fields in the instruction format depends on the internal organization of CPU
Basically the Instruction Formats are
1. Three-address instructions
2. Two-address instructions
3. One-address instructions
4. Zero-address instructions
Three Address Instruction:
Program to evaluate X = (A + B) * (C + D)
ADD R1, A, B
ADD R2, C, D
MUL X, R1, R2
Results in short programs, but instructions become long (many bits)
Two-Address Instructions:
Program to evaluate X = (A + B) * (C + D)
MOV R1, A
ADD R1, B
MOV R2, C
ADD R2, D
MUL R1, R2
MOV X, R1
One-Address Instructions:
Program to evaluate X = (A + B) * (C + D)
LOAD A
ADD B
STORE T
LOAD C
ADD D
MUL T
STORE X
Uses an implied AC register for all data manipulation
Zero Address Instruction:
Program to evaluate X = (A + B) * (C + D)
PUSH A
PUSH B
ADD
PUSH C
PUSH D
ADD
MUL
POP X
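The zero-address program above can be traced with a small stack-machine interpreter (an illustrative Python sketch; the opcode names mirror the program, and the operand values are made up):

```python
def run(program, env):
    """Execute a zero-address (stack) program against a variable environment."""
    stack = []
    for op, *arg in program:
        if op == "PUSH":
            stack.append(env[arg[0]])       # push a named operand
        elif op == "ADD":
            stack.append(stack.pop() + stack.pop())
        elif op == "MUL":
            stack.append(stack.pop() * stack.pop())
        elif op == "POP":
            env[arg[0]] = stack.pop()       # store the top of stack
    return env

program = [("PUSH", "A"), ("PUSH", "B"), ("ADD",),
           ("PUSH", "C"), ("PUSH", "D"), ("ADD",),
           ("MUL",), ("POP", "X")]
env = run(program, {"A": 1, "B": 2, "C": 3, "D": 4})
assert env["X"] == (1 + 2) * (3 + 4)        # X = 21
```

ADD and MUL take no address field at all: both operands come from the top of the stack, which is why the instruction format needs zero addresses.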
Explain Memory Hierarchy in computer?
ANS:
The memory unit is an essential component of a digital computer, since it is needed for storing programs and data.
The memory unit that communicates directly with the CPU is called main memory.
Devices that provide backup storage are called auxiliary memory.
Only programs and data currently needed by the processor reside in the main memory.
All other information is stored in auxiliary memory and transferred to main memory when needed.
The memory hierarchy system consists of all storage devices, from auxiliary memory to main memory to cache memory.
As one goes down the hierarchy:
a. Cost per bit decreases.
b. Capacity increases.
c. Access time increases.
d. Frequency of access by the processor decreases.
What is MAIN MEMORY, Explain?
ANS:
It is the memory used to store programs and data during the computer operation.
The principal technology is based on semiconductor integrated circuits.
It consists of RAM and ROM chips.
RAM chips are available in two forms, static and dynamic.
ROM also uses the random access method.
It is used for storing programs that are permanent and tables of constants that do not change.
ROM stores a program called the bootstrap loader, whose function is to start the computer software when the power is turned on.
When the power is turned on, the hardware of the computer sets the program counter to the first address
of the bootstrap loader.
Differentiate SRAM & DRAM
SRAM: larger cell size, so it needs more space for the same capacity.
DRAM: more cells per unit area due to its smaller cell size.
When the CPU refers to memory and finds the word in the cache, it is said to be a HIT; if not, it is a MISS.
The performance of cache memory is frequently measured in terms of a quantity called the HIT ratio:
Hit Ratio = Hit/(Hit+Miss)
What is Associative Mapping? Set associative Mapping? Direct
Mapping?
Associative Mapping:
Here the 15-bit address is loaded as a 5-digit octal number, and the 12-bit data word as a 4-digit octal number, into the argument register.
If the 15-bit CPU address is found in the argument register, the corresponding data word is loaded into the cache memory and the CPU reads the data from the cache memory.
If it is not found, then any location in cache memory is permitted to store the word from main memory.
Direct Mapping:
Here the n-bit memory address is divided into two fields:
1. k bits for the index
2. n−k bits for the tag
For a single index value there can exist only a single tag value.
The figure shows that main memory needs the full address, which includes both the tag and the index bits.
The number of bits in the index field is equal to the number of address bits required to access the cache memory.
The direct-mapping cache organization uses the n-bit address to access the main memory and the k-bit index to access the cache memory.
The internal organization of the cache memory is shown →
Each word in the cache consists of a data word and its associated tag; when a new word is first brought into the cache, the tag bits are stored alongside the data bits.
When the CPU generates a memory request, the index field is used as the address to access the cache.
The tag field of the CPU address is compared with the tag field of the cache memory.
If the two tags match, there is a HIT and the desired word is in cache memory.
If the two tags do not match, the CPU refers to main memory.
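The index/tag lookup described above can be sketched in Python (illustrative; the 4-bit index, the addresses, and the data values are made-up assumptions):

```python
K = 4                      # k index bits -> 2**4 = 16 cache lines
cache = {}                 # index -> (tag, data)

def access(address, main_memory):
    """Direct-mapped lookup: low k bits are the index, the rest is the tag."""
    index = address & ((1 << K) - 1)
    tag = address >> K
    if index in cache and cache[index][0] == tag:
        return "HIT", cache[index][1]
    data = main_memory[address]        # miss: fetch from main memory
    cache[index] = (tag, data)         # store data with its tag
    return "MISS", data

mem = {0x12: "a", 0x32: "b"}           # both addresses map to index 0x2
assert access(0x12, mem) == ("MISS", "a")
assert access(0x12, mem) == ("HIT", "a")
assert access(0x32, mem) == ("MISS", "b")  # same index, different tag: evicts
```

The last access shows the cost of direct mapping: two addresses that share an index can keep evicting each other even when the rest of the cache is empty.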
In a memory hierarchy system, programs and data are first stored in auxiliary memory.
Portions of a program or data are brought into main memory as they are needed by the CPU.
Virtual memory is a concept used in some large computer systems that permits the user to construct programs as though a large memory space were available, equal to the totality of auxiliary memory.
Each address that is referenced by the CPU goes through an address mapping from the so-called virtual address to a physical address in main memory.
Virtual memory is used to give programmers the illusion that they have a very large memory at their disposal, even though the computer actually has a relatively small main memory.
A virtual memory system provides a mechanism for translating program-generated addresses into correct main memory locations.
This is done dynamically, while programs are being executed in the CPU.
The translation or mapping is handled automatically by the hardware by means of a mapping table.
An address used by a programmer will be called a
virtual address, and the set of such addresses the address space.
An address in main memory is called a location or physical address. The set of such locations is called
the memory space.
Thus the address space is the set of addresses generated by programs as they reference instructions and
data.
The memory space consists of the actual main memory locations directly addressable for processing.
In most computers the address and memory spaces are identical.
The address space is allowed to be larger than the memory space in computers with virtual memory
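The virtual-to-physical mapping can be sketched with a page table (illustrative Python; the page size and the table contents are assumptions for the example):

```python
PAGE_SIZE = 1024                     # assume 1 KB pages
page_table = {0: 5, 1: 2, 2: 7}      # virtual page -> physical frame (made up)

def translate(virtual_address):
    """Map a program-generated (virtual) address to a main-memory location."""
    page = virtual_address // PAGE_SIZE     # which virtual page
    offset = virtual_address % PAGE_SIZE    # position within the page
    frame = page_table[page]                # KeyError here would be a page fault
    return frame * PAGE_SIZE + offset       # physical address

assert translate(0) == 5 * 1024                      # page 0 -> frame 5
assert translate(1 * 1024 + 100) == 2 * 1024 + 100   # offset is preserved
```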
FIFO:
The FIFO algorithm selects for replacement the page that has been in memory the longest time.
Each time a page is loaded into memory, its identification number is pushed into a FIFO stack.
FIFO will be full whenever memory has no more empty blocks.
When a new page must be loaded, the page least recently brought in is removed.
The page to be removed is easily determined because its identification number is at the top of the FIFO
stack.
The FIFO replacement policy has the advantage of being easy to implement.
It has the disadvantage that under certain circumstances pages are removed and loaded from memory too
frequently.
LRU:
The LRU policy is more difficult to implement but has been more attractive on the assumption that the
least recently used page is a better candidate for removal than the least recently loaded page as in FIFO.
The LRU algorithm can be implemented by associating a counter with every page that is in main
memory.
When a page is referenced, its associated counter is set to zero.
At fixed intervals of time, the counters associated with all pages presently in memory are incremented
by 1.
The least recently used page is the page with the highest count.
The counters are often called aging registers, as their count indicates their age, that is, how long ago
their associated pages have been referenced.
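Both replacement policies can be simulated in a few lines (an illustrative Python sketch; the reference string and frame count are made up):

```python
from collections import OrderedDict, deque

def fifo_faults(refs, frames):
    """FIFO: evict the page that has been in memory the longest."""
    q, resident, faults = deque(), set(), 0
    for p in refs:
        if p not in resident:
            faults += 1
            if len(resident) == frames:
                resident.discard(q.popleft())   # oldest-loaded page leaves
            resident.add(p)
            q.append(p)
    return faults

def lru_faults(refs, frames):
    """LRU: evict the page that has gone unreferenced the longest."""
    d, faults = OrderedDict(), 0
    for p in refs:
        if p in d:
            d.move_to_end(p)                    # mark as most recently used
        else:
            faults += 1
            if len(d) == frames:
                d.popitem(last=False)           # least recently used leaves
            d[p] = True
    return faults

refs = [1, 2, 3, 1, 4, 2]
assert fifo_faults(refs, 3) == 4   # FIFO evicts page 1 even though it was just used
assert lru_faults(refs, 3) == 5   # LRU keeps page 1 and evicts page 2 instead
```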
RAID 3:
This technique uses striping and dedicates one drive to storing parity
information.
The embedded Parity information is used to detect errors.
Data recovery is done by calculating the XOR of the information
recorded on the other drives.
Since an I/O operation addresses all the drives at the same time, RAID 3
cannot overlap I/O.
For this reason, RAID 3 is best for single-user systems with long record
applications.
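The XOR-based recovery works because the parity drive stores the XOR of all data blocks, so any single lost block equals the XOR of the survivors. An illustrative Python sketch with toy 4-bit blocks:

```python
from functools import reduce

data_drives = [0b1010, 0b0111, 0b1100]          # toy "blocks", one per drive
parity = reduce(lambda x, y: x ^ y, data_drives)  # dedicated parity drive

# drive 1 fails: rebuild its block by XOR-ing the other drives with parity
rebuilt = data_drives[0] ^ data_drives[2] ^ parity
assert rebuilt == data_drives[1]
```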
RAID 4:
This level uses large stripes, which means you can read records
from any single drive.
This allows you to use overlapped I/O for read operations.
Since all write operations have to update the parity drive, no
I/O overlapping is possible.
RAID 4 offers no advantage over RAID 5.
RAID 5:
This level is based on block-level striping with parity.
The parity information is striped across each drive, allowing the array to function even if one drive were
to fail.
The array's architecture allows read and write operations to span multiple drives.
This results in performance that is usually better than that
of a single drive, but not as high as that of a RAID 0 array.
RAID 5 requires at least three disks, but it is often
recommended to use at least five disks for performance
reasons.
RAID 5 arrays are generally considered to be a poor choice
for use on write-intensive systems because of the
performance impact associated with writing parity
information.
When a disk does fail, it can take a long time to rebuild a
RAID 5 array.
Performance is usually degraded during the rebuild time, and the array is vulnerable to an additional
disk failure until the rebuild is complete.
RAID 6:
This technique is similar to RAID 5, but includes
a second parity scheme that is distributed across
the drives in the array.
The use of additional parity allows the array to
continue to function even if two disks fail
simultaneously.
However, this extra protection comes at a cost.
RAID 6 arrays have a higher cost per gigabyte
(GB) and often have slower write performance
than RAID 5 arrays.
b) Handshaking
A control signal accompanies each data item being transmitted, to indicate the presence of data on the bus. The receiving unit responds with another control signal to acknowledge receipt of the data.
Strobe Control:
* Employs a single control line to time each transfer
* The strobe may be activated by either the source or the destination unit
Transmitter Register
* Accepts a data byte (from the CPU) through the data bus
* The byte is transferred to a shift register for serial transmission
Receiver
* Receives serial information into another shift register
* The complete data byte is sent to the receiver register
Status Register Bits
* Used for I/O flags and for recording errors
Control Register Bits
* Define the baud rate, the number of bits in each character, whether to generate and check parity, and the number of stop bits
RISC
Machine with a very fast clock cycle that executes at the rate of one instruction per cycle
Simple Instruction Set
Fixed Length Instruction Format
Register-to-Register Operations