
MPI: Microprocessors and Interfacing

Academic Year: 2022 - 23

Dr. Praveen Kumar Alapati


praveenkumar.alapati@mechyd.ac.in
praveenkumar.alapati@mahindrauniversity.edu.in
Department of Computer Science and Engineering
Ecole Centrale School of Engineering

Dr. Praveen (Mahindra University) MPI CS3106 1 / 61


MPI: Microprocessors and Interfacing

Module 1
I Introduction to Microprocessors
I Instruction Set Architecture
I Micro-architecture
I Scalar Processor
I Super-scalar Processor
I Simultaneous Multi-threading Processor
I Flynn’s classification

Dr. Praveen (Mahindra University) MPI CS3106 2 / 61


MPI: Microprocessors and Interfacing
Module 2
I Introduction to 8086 Processor
I 8086 PIN Diagram
I 8086 Internal Architecture
I 8086 Instruction Set
I 8086 Addressing Modes
I 8086 Assembly Language Programs
Module 3
I Introduction to Interfacing
I Semiconductor Memories and Interfacing
I Read Only Memory Interfacing Techniques
I Static Random Access Memory Interfacing Techniques
I Dynamic RAM Interfacing

Dr. Praveen (Mahindra University) MPI CS3106 3 / 61


MPI: Microprocessors and Interfacing

Module 4
I Interrupts: Internal and External Interrupts
I 8086 Interrupt Types
I Accessing I/O Devices: I/O mapped I/O and Memory Mapped I/O
I Interrupted I/O
Module 5
I Graphics Processing Units
I Case Study on Intel X-Series Processors
I Case Study on AMD Zen-Series Processors
I Case Study on Arm Processors.

Dr. Praveen (Mahindra University) MPI CS3106 4 / 61


MPI: Microprocessors and Interfacing

Reference Books:
1 Microprocessors and Interfacing, 3rd edition (3e), Douglas V. Hall and SSSP Rao, McGraw Hill.
2 Computer Organization, Fifth Edition, Carl Hamacher, Zvonko Vranesic, Safwat Zaky.
3 https://en.wikichip.org/wiki/WikiChip

Dr. Praveen (Mahindra University) MPI CS3106 5 / 61


MPI: Microprocessors and Interfacing

Lab 1: August 24, 2022
Lab 2: August 31, 2022
Lab 3: September 10, 2022
Lab 4: September 20, 2022
Buffer Lab
Lab 5: October 10, 2022
Lab 6: October 20, 2022
Lab 7: October 30, 2022
Buffer Lab
Lab 8: November 20, 2022
Lab 9: November 30, 2022
Lab 10: December 10, 2022

Table 1: MPI Lab# and Due Date

MPI Lab: 30 Marks
Attendance: 10 Marks
Minor 1 and 2: 30 Marks
End Exam: 30 Marks

Table 2: Marks Distribution

Submission Guidelines:
I Max. team size is 4.
I Mail-ID: cs3106.mpi@gmail.com
I Subject: TEAM NUM LAB NUM
I Attachment name and type: (Sub.).zip
I Late submission (<= 3 days): 50%.
I Write a readme file to help understand your solutions.

Dr. Praveen (Mahindra University) MPI CS3106 6 / 61


A Few Desktop Processors

Intel-i7-7920-4C-8T Intel-i9-9900-8C-16T

AMD-Ryzen-3900-12C-24T Threadripper-3990x-64C-128T
Dr. Praveen (Mahindra University) MPI CS3106 7 / 61
Single-core with Hyper Threading

Dr. Praveen (Mahindra University) MPI CS3106 8 / 61


Execution Time
Let t1, t2, t3, ..., tn be the time periods to execute 'n' instructions, respectively;
then the total time T = Σ (i = 1 to n) ti,
where ti = number of clock cycles × clock cycle time.
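As a quick illustration (numbers are made up): an instruction that needs 4 clock cycles on a 2 GHz processor (clock cycle time = 0.5 ns) takes ti = 4 × 0.5 ns = 2 ns, and ten such instructions take T = 20 ns.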
Execution time of an instruction/operation depends on:
I Type of operation
I Type of operands
I Location of operands
I Type of circuit

Goal: Reduce the time/space/power of an execution.


Power ∝ C·V²·f,
where C is the capacitance, V is the voltage, and f is the frequency.
We know that V ∝ f.
∴ Power ∝ f³.
DVFS: Dynamic Voltage and Frequency Scaling.
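For example (illustrative figures only): lowering the clock from 4 GHz to 3 GHz with DVFS scales dynamic power by roughly (3/4)³ ≈ 0.42, i.e., a 25% frequency reduction can cut dynamic power by more than half under the V ∝ f assumption above.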

Dr. Praveen (Mahindra University) MPI CS3106 9 / 61


CPU Trends
SPEC: Standard Performance Evaluation Corporation

Dr. Praveen (Mahindra University) MPI CS3106 10 / 61


Components of a Computer

Dr. Praveen (Mahindra University) MPI CS3106 11 / 61


Microprocessors and Interfacing
I A microprocessor is an integrated circuit that performs a set of operations (arithmetic, logical, data transfer, and control transfer) and generates the control signals needed to carry them out.
I Interfacing is a method of exchanging data between a microprocessor and a device (co-processor, peripheral device, memory device, etc.).
I Designers of the Intel 4004 processor: Federico Faggin, Ted Hoff, and Stanley Mazor (Intel), and Masatoshi Shima (Busicom).

Figure 1: Intel 4004, 4-bit Processor
Figure 2: Intel 4004 Motherboard
Dr. Praveen (Mahindra University) MPI CS3106 12 / 61
Microprocessors and Interfacing

Figure 3: Intel 8086, 16-bit Processor
Figure 4: ThreadRipper, 64-bit Processor
Figure 5: Peripheral Devices
Figure 6: Different Ports
Dr. Praveen (Mahindra University) MPI CS3106 13 / 61
Microprocessors

I Could you tell the names of a few microprocessors?


I Intel i3/i5/i7/i9 and Xeon processors
I AMD (Advanced Micro Devices) Athlon, Zen, and Thread-Ripper
I PowerPC (Performance Optimization With Enhanced RISC
Performance Computing)

I Name of the first microprocessor: Intel 4004
I 4-bit (Year 1971)
I 2300 Transistors
I 740 kHz
I 10 µm (avg. length between source and drain)
Transistor types: BJT, FET, FinFET, GAAFET
BJT: Bipolar Junction Transistor (Refer).
GAAFET: Gate-All-Around (GAA) Field-Effect Transistor (FET).

Dr. Praveen (Mahindra University) MPI CS3106 14 / 61


Brief History of Processors

Name Cores(Threads) Year Frequency


4004 1C 1971 740 kHz
8086 1C 1978 5-10 MHz
80486 1C 1992 20-33 MHz
Intel Pentium 1C 1993 60-300 MHz
Pentium 4 1C 2001 1.3 GHz
Dual Core 2C 2006 2.0 GHz
i3 (6100) 2C(4T) 2015 3.7 GHz
i7 (8565U) 4C(8T) 2018 1.8/4.6 GHz (Base/Boost)
i9 (9900K) 8C(16T) 2018 5.0(1 Core), 3.6/4.7 GHz
AMD Ryzen9(3900) 12C(24T) 2019 3.8/4.8 GHz
Threadripper(3990X) 64C(128T) 2020 2.9/4.3 GHz
ARM

Dr. Praveen (Mahindra University) MPI CS3106 15 / 61


Microprocessors
I How do you classify the microprocessors?
I 4/8/16/32/64-bit microprocessors
I CISC (Complex Instruction Set Computer) / RISC (Reduced
Instruction Set Computer) processors (Table: 12)
I Scalar processor/ Super-scalar processor (ref. to slide:47)
I APUs (Accelerated Processing Units) vs. TPUs (Tensor Processing
Units)
I Flynn’s classification (image on slide 60):
I SISD: Single Instruction Single Data
I SIMD: Single Instruction Multiple Data
I MISD: Multiple Instruction Single Data
I MIMD: Multiple Instruction Multiple Data
I Jack Kilby built the first IC (1958, Texas Instruments) and
received the Nobel Prize in Physics (2000). Robert Noyce
independently developed a monolithic IC that substantially improved on Kilby's design.
I Hennessy and Patterson received the ACM Turing Award (2017)
for their work on RISC processors.
Dr. Praveen (Mahindra University) MPI CS3106 16 / 61
How a Memory Request will be Addressed

Dr. Praveen (Mahindra University) MPI CS3106 17 / 61


Memory System or Memory Hierarchy

Objective: memory should be fast, large, and inexpensive.

Dr. Praveen (Mahindra University) MPI CS3106 18 / 61


Processor Registers
Memory Address Register (MAR), Memory Data Register (MDR),
Program Counter (PC), Instruction Register (IR), and General Purpose
Registers (GPRs).

Dr. Praveen (Mahindra University) MPI CS3106 19 / 61


x86-64-bit Processor Registers

Stack frame: slide 61
Dr. Praveen (Mahindra University) MPI CS3106 20 / 61
x86-64-bit Processor Registers

Dr. Praveen (Mahindra University) MPI CS3106 21 / 61


Flags Register or CCR

Dr. Praveen (Mahindra University) MPI CS3106 22 / 61


Cache Memory
Cache memory is a small, fast (volatile) memory placed close to the processor that stores
frequently used instructions and data for high-speed access.

Dr. Praveen (Mahindra University) MPI CS3106 23 / 61


Cache Memory

I Locality of reference: if a processor accesses some data now, the
same data or neighbouring data is likely to be accessed in the near future
(see the loop-ordering sketch after this list).
I Temporal locality: accesses to the same memory location that
occur close together in time.
I Spatial locality: accesses to memory locations that are close
together in space.
I Cache miss is a state where the data requested for processing is not
found in the cache memory.
Types of Cache Misses:
1 Compulsory or Cold Misses: The first reference to a block of
memory, starting with an empty cache.
2 Capacity Misses: The cache is not big enough to hold every block you
want to use.
3 Conflict Misses: Two blocks are mapped to the same location and
there is not enough room to hold both.
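A minimal C sketch of locality (illustrative; the array size is arbitrary): both loops compute the same sum, but the row-by-row loop walks consecutive addresses and therefore reuses cache blocks far better than the column-by-column loop.

#include <stdio.h>
#define N 1024
static int m[N][N];                      /* C stores this array row-major */

int main(void) {
    long sum = 0;
    for (int i = 0; i < N; i++)          /* row-by-row: consecutive addresses,  */
        for (int j = 0; j < N; j++)      /* good spatial locality, few misses   */
            sum += m[i][j];
    for (int j = 0; j < N; j++)          /* column-by-column: stride of N ints, */
        for (int i = 0; i < N; i++)      /* poor spatial locality, many misses  */
            sum += m[i][j];
    printf("sum = %ld\n", sum);
    return 0;
}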

Dr. Praveen (Mahindra University) MPI CS3106 24 / 61


Mappings of Cache Memory

Dr. Praveen (Mahindra University) MPI CS3106 25 / 61


Cache-Memory hierarchy of an AMD Bulldozer Server

Dr. Praveen (Mahindra University) MPI CS3106 26 / 61




Cache Memory (A Few Important Points)

I The valid bit indicates whether the cache block holds valid data.

I The dirty bit (modify bit) indicates whether the contents of the cache
line/block differ from the corresponding contents of main memory.

I Inclusive cache: L1 ⊂ L2 ⊂ L3

I Exclusive cache: L1 ∩ L2 ∩ L3 = ∅

I Non-inclusive cache: (L1 ∩ L2 = ∅) and ((L1 ∪ L2) ∩ L3 = L1 ∪ L2)

Dr. Praveen (Mahindra University) MPI CS3106 27 / 61


A Typical Read/Write Operation between a Processor and a Memory

Dr. Praveen (Mahindra University) MPI CS3106 28 / 61


Read Operation

1 The processor initiates a read by loading the required address into MAR.

2 It sets the R/W line to 1.

3 The memory responds by placing the data from the addressed location
onto the data lines and issues the MFC (Memory Function Complete) signal.

4 Upon receiving the MFC signal, the processor loads the data on the data
lines into the MDR register.
Memory Access Time (MAT) is the time between the Read request and the
MFC signal.

Dr. Praveen (Mahindra University) MPI CS3106 29 / 61


Write Operation

1 The processor initiates a write by loading the required address
into MAR and the data into MDR.

2 It sets the R/W line to 0.

3 It places the contents of MDR onto the data bus and waits for the MFC
(Memory Function Complete) signal.

4 Upon receiving the MFC signal, the next memory operation can be
initiated.
Memory Cycle Time (MCT) is the minimum time delay required
between the initiation of two successive memory operations.
MAT < MCT. (A small C sketch of this read/write handshake follows.)
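A minimal C sketch of the read/write sequence above (purely illustrative; the variables MAR, MDR, and RW model the processor registers and the R/W line, and the hypothetical memory_cycle() stands in for the memory plus its MFC handshake):

#include <stdint.h>
#include <stdio.h>

#define MEM_WORDS 16
static uint32_t memory[MEM_WORDS];

static uint32_t MAR, MDR;   /* Memory Address Register, Memory Data Register */
static int RW;              /* R/W line: 1 = read, 0 = write */

/* Models one memory cycle; returning from the call models the MFC signal. */
static void memory_cycle(void) {
    if (RW) MDR = memory[MAR % MEM_WORDS];   /* read: data placed on the data lines  */
    else    memory[MAR % MEM_WORDS] = MDR;   /* write: data latched from MDR         */
}

int main(void) {
    MAR = 3; MDR = 42; RW = 0; memory_cycle();   /* write 42 to address 3   */
    MAR = 3; RW = 1;  memory_cycle();            /* read address 3 into MDR */
    printf("MDR = %u\n", (unsigned)MDR);         /* prints 42               */
    return 0;
}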

Dr. Praveen (Mahindra University) MPI CS3106 30 / 61


Memory Access

Let h1, h2, h3 be the hit ratios of the L1, L2, and L3 caches, and let t_mm, t_cm1,
t_cm2, and t_cm3 be the access times of main memory, L1, L2, and L3,
respectively. Then the Average Memory Access Time (AMAT) =
h1·t_cm1 + (1−h1)·h2·t_cm2 + (1−h1)·(1−h2)·h3·t_cm3 + (1−h1)·(1−h2)·(1−h3)·t_mm
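A small C sketch that evaluates the AMAT expression above with made-up hit ratios and access times (h1 = 0.9, h2 = 0.8, h3 = 0.7, t_cm1 = 1 ns, t_cm2 = 4 ns, t_cm3 = 20 ns, t_mm = 100 ns gives about 2.1 ns):

#include <stdio.h>

/* AMAT exactly as on the slide: hit in L1, else hit in L2, else L3, else main memory. */
static double amat(double h1, double h2, double h3,
                   double t1, double t2, double t3, double tmm) {
    return h1 * t1
         + (1 - h1) * h2 * t2
         + (1 - h1) * (1 - h2) * h3 * t3
         + (1 - h1) * (1 - h2) * (1 - h3) * tmm;
}

int main(void) {
    printf("AMAT = %.2f ns\n", amat(0.9, 0.8, 0.7, 1.0, 4.0, 20.0, 100.0));
    return 0;
}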

Dr. Praveen (Mahindra University) MPI CS3106 31 / 61


Instruction Cycle (RISC Processor)

(Stages of the instruction cycle and the units they use)
Instruction Fetch: sends the address of the instruction to the ATU/cache and receives the instruction word.
Instruction Decode: the control unit decodes the instruction word and generates the addresses of the source operands.
Operand Fetch: reads the source operands from the register file.
Execute: the ALU (and, for loads/stores, the ATU/cache) operates on the opcode and operands.
Writeback: writes the result to the register file at the address of the destination operand.

Dr. Praveen (Mahindra University) MPI CS3106 32 / 61


Instruction Cycle in CISC Processors

(Stages of the instruction cycle and the units they use)
Instruction Fetch: sends the address of the instruction to the ATU/cache and receives the instruction word.
Instruction Decode: the control unit decodes the instruction word and generates the addresses of the source operands.
Operand Fetch: reads the source operands from the register file or, for memory operands, through the ATU/cache.
Execute: the ALU operates on the opcode and operands.
Writeback: writes the result to the register file or, for memory destinations, through the ATU/cache.

Dr. Praveen (Mahindra University) MPI CS3106 33 / 61


Types of SDRAM

Observations across different generations (SDRAM to DDR4):


I Increased data rate
I Increased capacity
I Decreased power consumption

Dr. Praveen (Mahindra University) MPI CS3106 34 / 61


Virtual Memory

A technique for moving data between main memory and secondary storage.

Dr. Praveen (Mahindra University) MPI CS3106 35 / 61




Types of Data

I Numerical Data
I Integers
I Reals
I Character Data
I char
I varchar
I Signal Data
I Audio
I Video
I Speech
I Image

Dr. Praveen (Mahindra University) MPI CS3106 36 / 61


ISA: Instruction Set Architecture

Interface between the high-level language and the machine language


I Instruction Set
I Add, Sub, Mul, Div, ...
I AND, OR, ...
I Addressing Modes
I Register
I Direct
I Immediate ...
I Instruction Representation
I 3-Operand Instruction
I 2-Operand Instruction
I 1-Operand Instruction
I 0-Operand Instruction
I Instruction Word

Dr. Praveen (Mahindra University) MPI CS3106 37 / 61


Instructions

I Data types
I Integers: Unsigned, Signed, Byte, Short, Long
I Real numbers: Single-precision (float), Double-precision (double)
I Operations
I Addition, Subtraction, Multiplication, Division
I Data Transfer
I Register Transfer: Move
I Memory transfer: Load, Store
I I/O transfer: In, Out
I Control Transfer: Unconditional and Conditional
I Logical instructions: AND, OR, XOR, SHIFT
I Arithmetic instructions: ADD, SUB, MUL, DIV
I Procedure Call
I Return

Dr. Praveen (Mahindra University) MPI CS3106 38 / 61


Instruction Representations

I 3-operand instructions: ADD op1, op2, op3;

I 2-operand instructions: ADD op1, op2;

I 1-operand instructions: INC op1;

I 0-operand instructions: PUSH, POP;
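For example (generic, illustrative mnemonics): the statement a = b + c could be a single 3-operand instruction ADD Ra, Rb, Rc; a 2-operand sequence MOV Ra, Rb; ADD Ra, Rc; a 1-operand (accumulator) sequence LOAD b; ADD c; STORE a; or, on a stack machine, PUSH b; PUSH c; ADD; POP a, where the ADD itself names no explicit operands.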

Dr. Praveen (Mahindra University) MPI CS3106 39 / 61


Addressing Modes

Specification of operands in instructions.


Different addressing modes (a C analogy is sketched after this list):
I Register direct: Value of operand in a register
I Register indirect: Address of operand in a register
I Immediate: Value of operand
I Memory direct: Address of operand
I Indexed: Base register, Index register
I Relative: Base register, Displacement
I Indexed relative: Base register, Index register, Displacement

Dr. Praveen (Mahindra University) MPI CS3106 40 / 61


Instruction Word
Instruction word should have the complete information required to fetch
and execute the instruction.
Fields of an instruction word
I Opcode of the operation to be carried out: Varying length (CISC)
and Fixed length (RISC).
I Size of the operands: Integer operands ( Byte, Word, Long-word,
Quad-word) and Real operands (float and double).
I Addressing mode (AM) of each operand and specification of each
operand involves specifying one or more of the following:
I General purpose register
I Value of an immediate operand
I Address of operand
I Base register, Index register and Displacement
I Effects of instruction-word design on:
I Instruction length
I Number of instructions for a program
I Complexity of instruction decoding (Control unit)
Dr. Praveen (Mahindra University) MPI CS3106 41 / 61
Micro-architecture or µarch
Micro-architecture tells how an ISA is realized for a processor.
Control Sequence of Instruction:
ADD R3, R1 ; Perform R1 = R1 + R3

1 PCout, MARin, READ, Select 4, ADD, Yin
2 Yout, PCin, WMFC
3 MDRout, IRin
4 R1out, Xin
5 R3out, Select X, ADD, Yin
6 Yout, R1in, END
Figure 7: Single Bus Organization
µOPs of ADD R3, R1
The registers, the ALU, and the interconnecting bus are collectively referred to as
the Datapath.
Control Unit (CU) tells the datapath, memory, and I/O devices what to do.
Dr. Praveen (Mahindra University) MPI CS3106 42 / 61
Micro-architecture or µarch

Micro-architecture tells how an ISA is realized for a processor.


Control Sequence of Instruction:
ADD (R3), R1 ; Perform R1 = R1 + [R3]

1 PCout, MARin, READ, Select 4, ADD, Yin
2 Yout, PCin, WMFC
3 MDRout, IRin
4 R3out, MARin, READ
5 R1out, Xin, WMFC
6 MDRout, Select X, ADD, Yin
7 Yout, R1in, END

Figure 8: Single Bus Organization
µOPs of ADD (R3), R1

Dr. Praveen (Mahindra University) MPI CS3106 43 / 61


Multiple-Bus Organization: µarch
Micro-architecture tells how an ISA is realized for a processor.

Control Sequence of Instruction:


ADD R3, R1 ; Perform R1 = R1 + R3

1 PCout, R = B, MARin, READ, IncPC
2 WMFC
3 MDRoutB, R = B, IRin
4 R3outA, R1outB, Select A, ADD, R1in, END

µOPs of ADD R3, R1

Figure 9: Three Bus Organization


Dr. Praveen (Mahindra University) MPI CS3106 44 / 61
Figure 10: Block Diagram: AMD Bulldozer Server (8-core, 32 nm, Opteron, 2011).

Dr. Praveen (Mahindra University) MPI CS3106 45 / 61


µarch of AMD Bulldozer Server Module

Dr. Praveen (Mahindra University) MPI CS3106 45 / 61


Non-pipelined Execution of Instructions

Instruction 1: IF ID OF EX WB
Instruction 2: IF ID OF EX WB
Instruction 3: IF ID OF EX WB
...
Instruction N-1: IF ID OF EX WB
Instruction N: IF ID OF EX WB
(Each instruction completes all five stages before the next one begins; there is no overlap, so N instructions take 5N cycles.)

Dr. Praveen (Mahindra University) MPI CS3106 46 / 61


Pipelined Execution of Instructions

Pipelining is an implementation technique where multiple instructions are overlapped in execution.

Instruction 1 2 3 4 5 6 7 8 9 10
I1 IF ID OF EX WB
I2 IF ID OF EX WB
I3 IF ID OF EX WB
I4 IF ID OF EX WB
I5 IF ID OF EX WB
I6 IF ID OF EX WB

I Non-pipelined execution takes: 5N cycles

I Pipelined execution takes: 5 + (N − 1) cycles
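For example, with N = 100 instructions, non-pipelined execution needs 5 × 100 = 500 cycles, while the ideal pipeline needs 5 + 99 = 104 cycles, a speedup of about 4.8; as N grows, the speedup approaches the pipeline depth of 5.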
Back to slide 16
Dr. Praveen (Mahindra University) MPI CS3106 47 / 61
Pipelining: Sample Assembly Code

For the following assembly code, give the pipelined execution on a 5-stage
pipeline (the stages are IF, ID, OF, EX, and WB).

ADD R1, R2, R3


SUB R4, R5, R6
ADD R7, R8, R9
MUL R10, R11, R1
MUL R12, R3, R14
MUL R2, R5, R6
MUL R4, R1, R11
Table 3: Sample Assembly Code

Assume that each stage takes 1 clock cycle.

Dr. Praveen (Mahindra University) MPI CS3106 48 / 61


Pipelined Execution of the sample code (Ref. Table 3)

I.No. Instruction. 1 2 3 4 5 6 7 8 9 10 11
I1 ADD R1, R2, R3 IF ID OF EX WB
I2 SUB R4, R5, R6 IF ID OF EX WB
I3 ADD R7, R8, R9 IF ID OF EX WB
I4 MUL R10, R11, R1 IF ID OF EX WB
I5 MUL R12, R3, R14 IF ID OF EX WB
I6 MUL R2, R5, R6 IF ID OF EX WB
I7 MUL R4, R1, R11 IF ID OF EX WB

Table 4: Pipelined Execution

Assume that each stage takes 1 clock cycle.

Dr. Praveen (Mahindra University) MPI CS3106 49 / 61


Pipelining Hazards

A hazard prevents the next instruction from executing during its designated
clock cycle.
I Data hazards arise when an instruction depends on the result of a
previous instruction in a pipelined execution.

I Control hazards arise when an instruction changes the contents of the
PC.

I Structural hazards arise when multiple instructions request the same
hardware resource and the resource cannot serve all of the requests in the
same time period.

Dr. Praveen (Mahindra University) MPI CS3106 50 / 61


Data Hazard

I.No. Instruction 1 2 3 4 5 6 7 8 9 10 11 12 13 14
I1 ADD R1,R2,R3 IF ID OF EX WB
I5 MUL R12,R3,R14 IF ID OF OF OF EX WB
I2 SUB R4,R5,R6 IF ID - - OF EX WB
I3 ADD R7,R8,R9 IF - - ID OF EX WB
I4 MUL R10,R11,R1 IF ID OF EX WB
I6 MUL R2,R5,R6 IF ID OF EX WB
I7 MUL R4,R1,R11 IF ID OF OF EX WB

Table 5: Pipelined Execution

Dr. Praveen (Mahindra University) MPI CS3106 51 / 61


Control Hazard

I.No. Instruction. 1 2 3 4 5 6 7 8 9 10 11
I1 ADD R1, R2, R3 IF ID OF EX WB
I2 JMP I7 IF ID
I3 ADD R7, R8, R9 IF
I4 MUL R10, R11, R1
I5 MUL R12, R3, R14
I6 MUL R2, R5, R6
I7 MUL R4, R1, R11 IF ID OF EX WB

Table 6: Pipelined Execution

I.No. Instruction. 1 2 3 4 5 6 7 8 9 10 11
I1 SUB R4, R4, R4 IF ID OF EX WB
I2 JZ I7 IF ID OF EX
I3 ADD R1, R2, R3 IF ID OF
I4 MUL R10, R11, R1 IF ID
I5 MUL R12, R3, R14 IF
I6 MUL R2, R5, R6
I7 MUL R4, R1, R11 IF ID OF EX WB

Table 7: Pipelined Execution

Dr. Praveen (Mahindra University) MPI CS3106 52 / 61


Structural Hazard
I.No. Instruction. 1 2 3 4 5 6 7 8 9 10 11
I1 ADD R1, R2, R3 IF ID OF EX WB
I2 SUB R4, R5, R6 IF ID OF EX WB
I3 ADD R7, R8, R9 IF ID OF EX WB
I4 MUL R10, R11, R1 IF IF IF IF ID OF EX WB
I5 MUL R12, R3, R14 IF ID OF EX
I6 MUL R2, R5, R6 IF ID OF
I7 MUL R4, R1, R11 IF IF

Table 8: Pipelined Execution

Dr. Praveen (Mahindra University) MPI CS3106 53 / 61


Super Scalar Processor
(Scalar refers to a single instruction.)

I.No. Instruction. 1 2 3 4 5 6 7 8 9 10 11
I1 ADD R1, R2, R3 IF ID OF EX WB
I2 SUB R4, R5, R6 IF ID OF EX WB
I3 ADD R7, R8, R9 IF ID OF EX WB
I4 MUL R10, R11, R12 IF ID OF EX WB
I5 MUL R3, R14, R15 IF ID OF OF EX WB
I6 MUL R2, R5, R2 IF ID OF EX WB

Table 9: Pipelined Execution

I A super-scalar processor with 'K' scalar pipelines (SSP_K) can achieve a peak
performance of K instructions per cycle (K IPC).
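For example, an ideal 2-way super-scalar pipeline (K = 2) issues two instructions per cycle, so the seven instructions of Table 3 would need roughly ceil(7/2) + 4 = 8 cycles instead of 11, hazards permitting.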
Dr. Praveen (Mahindra University) MPI CS3106 54 / 61
Simultaneous Multi-threaded Architecture

I Super-scalar processors exploit instruction-level parallelism.

I SMAs/SMTs exploit thread-level (task-level) parallelism.

Dr. Praveen (Mahindra University) MPI CS3106 55 / 61


Do We Need to Wait for a Free Lunch?

I Running processors at higher clock frequencies
I Improvements in cache and main memories
I Using compiler optimizations
I Execution optimizations:
I Hardware circuit design
I Scalar pipeline
I Super-scalar pipeline
I Pre-fetching

Dr. Praveen (Mahindra University) MPI CS3106 56 / 61


Out-of-Order Execution

I.No. Instruction. 1 2 3 4 5 6 7 8 9 10 11
I1 ADD (R1), R2, R3 IF ID OF MEM MEM MEM MEM MEM EX WB
I2 SUB R4,R5,R6 IF ID OF EX WB

Table 10: Pipelined In-Order Execution

I.No. Instruction. 1 2 3 4 5 6 7 8 9 10 11
I1 ADD (R1), R2, R3 IF ID OF MEM MEM MEM MEM MEM EX WB
I2 SUB R4,R5,R6 IF ID OF EX WB

Table 11: Pipelined Out-of-Order Execution

Dr. Praveen (Mahindra University) MPI CS3106 57 / 61


Instruction Set Architectures: CISC vs RISC

CISC: Complex Instruction Set Computing


RISC: Reduced Instruction Set Computing

CISC: Any instruction can use memory operands | RISC: Only load and store access memory (all other instructions use register operands)
CISC: Many addressing modes | RISC: Few addressing modes
CISC: Complex, variable-length instruction formats | RISC: Simple, fixed-length instruction formats
CISC: Micro-programmed control unit | RISC: Hardwired control unit
CISC: Difficult to implement pipelined execution | RISC: Suitable for pipelining

Table 12: CISC-RISC Comparison
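For example (illustrative): adding a memory word to a register is one CISC instruction, such as ADD AX, [BX] on the 8086, whereas a load/store RISC machine expresses it as a LOAD into a register followed by a register-register ADD (and a STORE if the result must go back to memory).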

Back to slide 16

Dr. Praveen (Mahindra University) MPI CS3106 58 / 61


Sample C Code
// Example1: Addition (eg1.c)
#define A 10
#define B 20
int main(){
    int a = A;
    int b = B;
    a = a + b;
    return a;
}

Table 13: Addition of Two Integers

// Example2: Addition (eg2.c)
#include <stdio.h>
#define A 10
#define B 20
int main(){
    int a = A;
    int b = B;
    a = a + b;
    printf("Result is %d", a);
    return a;
}

Table 14: Addition of Two Integers

gcc -E eg1.c > eg1.i     # preprocess (expand #include and #define)
gcc -S eg1.i > eg1.s     # compile to assembly
as eg1.s -o eg1.o        # assemble into an object file
gcc -o eg1.exe eg1.o     # link into an executable
./eg1.exe                # run it
Dr. Praveen (Mahindra University) MPI CS3106 59 / 61
Back to slide 16

Dr. Praveen (Mahindra University) MPI CS3106 60 / 61


Stack Frame
int a(){b(); c(); return 0;}
int b(){ return 0; } int c(){ return 0; }
int main(){a(); return 0;}

Back to slide 20
Dr. Praveen (Mahindra University) MPI CS3106 61 / 61
