
Computer Organization

Computer Abstractions and Technology


Dr. A. Ananth
ananth@iiitkottayam.ac.in
Introduction
• Evolution of computers – makes the impossible possible and practical

• Computers led the third revolution of civilization (the information revolution) and are used in all areas

• Computers in automobiles
Limited features with microprocessors/ micro controllers
Upgradation in all possible ways
Front and rear parking sensors
Camera and display system with a touch
Blind spot warning
Cruise control, Gesture and voice controls
Driver drowsiness
Autonomous driving
Introduction
• Cell phones
Bigger sized phones to small sizes
Dial pad to touch screen
Data rate improvements
Communication everywhere
Integration of all facilities

• Human genome project


Cost of computing/ analysing human genome – millions of dollars
Made simpler with advanced computing
Introduction
• World wide web
replaced libraries, newspapers
Everything can be found here

• Search engines
Learning starts here
Information is plentiful; finding the right piece is the bigger task
Everyone relies on this

Computers are omnipresent


Improvement in computer technology helps human beings make life easier
Today's science fiction → tomorrow's killer application
Classes of Computing Applications
• Computers → hardware and software

• Applications range from smart home appliances to cell phones and supercomputers

• Each application requires different hardware and software specifications

• Three different classes of applications


Personal computers (PCs)
Servers
Embedded Computers
Classes of Computing Applications
Personal Computers
Designed to be used by an individual
Provides good performance to single user with low cost
Laptops, conventional computer

Servers
Modern form of olden day larger computers
Carry out large work loads (engineering application and scientific
computing)
Huge memory, greater computing facility
Can be accessed by many people via network
Classes of Computing Applications
Servers
Ranges in a wide variety based on capability and cost
Low-end servers
Little more than a desktop computer, with no screen or keyboard
Used for file storage, web serving and small enterprise applications
Cost around a thousand dollars
High-end servers → supercomputers
Have tens of thousands of processors with peak computing capacity
Large memory (terabytes: 2^40 bytes binary/ 10^12 bytes decimal)
Used for high-end scientific and engineering computation
Weather forecasting, oil exploration, large-scale problems
Cost around a hundred million dollars
Classes of Computing Applications
Embedded Computers
Largest class of computer used
Wide range of applications and performance
Application specific → designed to run one application or one set of related applications
In practice, users rarely realize they are using embedded computers
Microprocessors used in car, music player
Computer in TV
Set of processors used in Cargo ship, Aeroplane
Features
Lower tolerance for failure (cargo ship and aeroplane)
Limitations on costs and power (music player)
Limited functionality
Full Range of Decimal and Binary Values

[Table: full range of decimal (SI) and binary (IEC) size prefixes]

Gib – gibibits, Gb – gigabits


Eight Great Ideas in Computer Architecture
Moore's Law → Gordon Moore – one of the founders of Intel (1965)
• Predicted that the number of transistors per chip doubles every 18–24 months
• This underlies the rapid growth and change in computers

Use of abstraction to simplify design


• Abstraction simplifies the programmer's and architect's jobs, making them more productive and reducing design time

• Abstractions represent the design at different levels

• Lower-level details are hidden to offer a simpler model
Eight Great Ideas in Computer Architecture
Make the common case fast
• Enhance the performance of the common case rather than optimising the rare case
• Identify the common case by experimentation and measurement

Performance via Parallelism


• To improve the performance of computing the operations are performed in
parallel

Performance via Pipelining


• A type of parallelism to improve the performance is pipelining
Eight Great Ideas in Computer Architecture
Performance via prediction
• Performance can be improved by predicting outcomes and starting work early, rather than waiting and reacting to the consequences

Hierarchy of Memory
• Memory speed influences the performance
• Programmers want memory to be fast, cheap and large
• The larger the memory → the bigger the problems that can be solved

Dependability via Redundancy


• Redundant components take over when a component fails
Below your Program

• Applications like database systems/ word processors

• Require millions of lines of code and software libraries to execute their complex functions

• A computer can execute only simple low-level instructions

• Executing a complex function using simple instructions requires several layers of software to interpret and translate high-level code
Below your Program
Software is layered as
Application software
System software
System software → sits between applications and hardware
Operating systems → Linux, Windows, iOS
Compilers, assemblers
Operating system → interface between hardware and applications (user programs)
Handles input and output operations
Organizes the memory
Shares the computer's resources among all applications running on it
Below your Program
Compilers
Programs that convert high level language to machine level code
Converts program written in C, C++, Java, to hardware instructions
The task is complex: user code is high level while machine code is simple
Translates a high-level language program to assembly language

How is a high-level language converted to the language of the hardware?

Hardware is an electronic component
Electronic components can understand only electrical signals
The simplest signals to understand are ON/ 1 and OFF/ 0
1 and 0 are binary digits/ bits
Below your Program
Instructions
Set of commands that computer understand and obey
Collection of binary numbers
Machine language

Assemblers
Convert assembly language to machine language/ binary
Convert symbolic notation to binary
Symbolic language → assembly language
Below your Program
Benefits of High Level Language
• Programmer can think in natural terms (English, algebraic expressions)
Easy for anyone to understand

• Languages designed for their intended use
Fortran → scientific computation
Cobol → business data processing

• Improved programmer productivity
Less time and fewer lines of code to create a program (conciseness)

• Programs are independent of the particular computer
Hardware
Basic Components
Input devices (Mouse, Keyboard)
Output devices (Monitor, Printer)
Processor → datapath and control (CPU)
Storage device (Memory)
Hardware
I/O Device
Feeds the input to the computer
To display the result of the computation

Processor
Control → component that controls the datapath, memory, and I/O devices
Datapath performs the arithmetic operation

Memory
Place where data/ programs are stored
Hardware
Memory
• Built from dynamic random access memory (DRAM) chips

• Chip → integrated circuit

• Many DRAMs are used to store data and programs

• Unlike sequential access memories, DRAMs take the same amount of time to access any part of memory

• Cache memory (SRAM) is kept in the processor

• SRAM – static random access memory; smaller but faster to access; acts as a buffer for DRAM
Hardware
Volatile memory
Loses the data when power goes off
Hold data/ program while running
Primary memory or main memory
DRAM/ SRAM
Non-volatile memory
Does not lose data when power goes off
Secondary memory
Magnetic disks dominated; replaced by flash memories in mobile devices
Slower and cheaper than DRAM
Performance
• It is quite difficult to assess the performance of a computer

• Modern software is complex, and hardware offers a wide range of performance improvement techniques, which makes assessment harder

• Performance is a measure used to choose the best computer from several candidates

• Accurate measurement and comparison of performance is therefore very important


Performance
• It is very difficult to measure the performance

• Performance of different planes can be compared in terms of speed, capacity, and distance

• Speed → BAC/Sud Concorde, Boeing 777, Boeing 747, Douglas DC-8-50

• Capacity → Boeing 747, Boeing 777, Douglas DC-8-50, BAC/Sud Concorde
• Distance → Douglas DC-8-50, Boeing 777, Boeing 747, BAC/Sud Concorde
Performance
Response Time/ Execution Time
• Total time required to complete a task from start to end (disk/ memory
access, I/O activities, OS overhead, CPU execution)

Throughput/ Bandwidth
• Measure of performance
• Amount of tasks completed per unit time

• The performance measure of interest differs for different systems

• Response time/ execution time → PCs
• Throughput/ bandwidth → servers/ datacenters
Performance

Case I: Does replacing a processor with a faster version improve the performance?

Case II: Does adding a processor to a computer that supports multiple processors improve the performance?
Performance

Case I: Does replacing a processor with a faster version improve the performance?

Improves both response time and throughput

Case II: Does adding a processor to a computer that supports multiple processors improve the performance?

Improves only the throughput


Performance
• Case II may not improve the response or execution time.

• If the number of requests to process is large, increasing the throughput can reduce the overall execution time, because waiting time is reduced

• Changing either throughput or execution time often alters the other

• The performance of a basic computer can be represented in terms of execution time

• For a good computer, performance should be large and execution time should be small
Performance
• Two computers X and Y

• If performance of computer X is better than computer Y

• Execution time on computer Y is greater than computer X

• Computer X is faster than computer Y


Performance
• In design, the performance of two computers can be related

• Computer X's performance is 'n' times faster than computer Y's

• The performance and execution time of computers X and Y are related by

Performance_X / Performance_Y = Execution time_Y / Execution time_X = n
Performance
• Computer A runs the program in 10 seconds, and computer B runs the same program in 15 seconds. Determine how much faster computer A is than computer B.
Performance
Computer A runs the program in 10 seconds, and computer B runs the same program in 15 seconds. Determine how much faster computer A is than computer B.

• From the execution times, it is known that computer A is faster than computer B

• The performance ratio is

Performance_A / Performance_B = Execution time_B / Execution time_A = 15/10 = 1.5

• Computer A is 1.5 times faster than computer B/ computer B is 1.5 times slower than computer A
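The ratio above can be checked with a small Python sketch (not part of the slides); the times are the example's values.

```python
# Performance is the inverse of execution time; the ratio of two
# machines' performance equals the inverse ratio of their times.
def performance(execution_time_s):
    return 1.0 / execution_time_s

time_a, time_b = 10.0, 15.0   # seconds, from the worked example
n = time_b / time_a           # = performance(A) / performance(B)
print(n)                      # 1.5
```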
Performance
• Computer B is 1.5 times slower than computer A

• In terms of performance

• Large performance → small execution time (performance and execution time are inversely proportional)

• Improving performance means increasing performance, i.e. decreasing execution time
Measuring Performance
• Time is the measure of computer performance

• Program execution time is measured in seconds per program

• CPU Time/ CPU Execution Time → time spent computing the task (does not include waiting time for I/O data)

• User CPU Time → CPU time spent in the program itself

• System CPU Time → CPU time spent by the OS doing tasks related to the program
Measuring Performance
• Response Time/ Elapsed Time or Wall Clock Time  total time to complete a
task

CPU Time

CPU Time = User CPU time + System CPU time

Elapsed Time

Elapsed time = CPU time + Wait time
             = User CPU time + System CPU time + Wait time
Measuring Performance

• It is difficult to differentiate the user CPU time and system CPU time

• Separate performance based on elapsed time and CPU execution time

• System performance based on elapsed time

• CPU performance based on CPU execution time


Measuring Performance

• There are several performance metrics

• Different applications are sensitive to different performance metrics

• Total elapsed time is the performance metric of interest

• To improve performance, the bottleneck for the program and the


performance metric should be identified
Measuring Performance
Clock Cycle

• All computers are designed around a clock

• Clock cycle/ tick/ clock tick/ clock period/ clock/ cycle

• The time for one clock period of the processor clock, which runs at a constant rate

• Clock period → the length of each clock cycle/ time for a complete clock cycle (e.g. 250 ps)

• The inverse of the clock period is the clock rate (e.g. 4 GHz)
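The 250 ps / 4 GHz pair can be verified numerically (a quick sketch, not part of the slide):

```python
# Clock rate is the inverse of the clock period.
clock_period_s = 250e-12          # 250 picoseconds
clock_rate_hz = 1.0 / clock_period_s
print(clock_rate_hz / 1e9)        # ~4.0 (GHz)
```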


Measuring Performance
CPU Performance and its Factors

• CPU performance → CPU execution time

• Formula relating clock cycles and CPU execution time:

CPU execution time = CPU clock cycles × Clock cycle time

• Formula relating clock rate and CPU execution time:

CPU execution time = CPU clock cycles / Clock rate


Measuring Performance
CPU Performance and its Factor

• The performance of a program can be improved by

• Reducing the number of clock cycles required to complete the program

• Reducing the clock cycle period

• There is a trade-off between clock period and number of clock cycles

• Reducing (↓) the clock period may increase (↑) the number of clock cycles required


Measuring Performance
A program runs in 10 seconds on computer A, which has a 2 GHz clock. We want to design a computer B that can execute the same program in 6 seconds. The designer can substantially increase the clock rate of computer B, but this increase affects the rest of the CPU design, causing computer B to require 1.2 times as many clock cycles as computer A. What clock rate should be designed for computer B?
Measuring Performance
A program runs in 10 seconds on computer A, which has a 2 GHz clock. We want to design a computer B that can execute the same program in 6 seconds. The designer can substantially increase the clock rate of computer B, but this increase affects the rest of the CPU design, causing computer B to require 1.2 times as many clock cycles as computer A. What clock rate should be designed for computer B?

• Calculate the number of clock cycles required by computer A

• CPU time of computer A = Number of clock cycles / Clock rate (cycles/second)

• CPU time of computer B = 1.2 × Number of clock cycles of computer A / Clock rate of computer B
Measuring Performance
CPU clock cycles of A = 10 seconds × 2 × 10^9 cycles/second = 20 × 10^9 cycles

Clock rate of B = 1.2 × 20 × 10^9 cycles / 6 seconds = 4 × 10^9 cycles/second = 4 GHz

To run the program in 6 seconds, computer B must have twice the clock rate of computer A.
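The arithmetic can be sketched in Python (all values taken from the problem statement above):

```python
# Find computer B's clock rate given A's time, A's clock rate,
# B's target time, and the 1.2x cycle-count penalty.
time_a = 10.0                 # seconds
rate_a = 2e9                  # 2 GHz
cycles_a = time_a * rate_a    # 20 x 10^9 cycles
time_b = 6.0                  # target seconds
cycles_b = 1.2 * cycles_a     # B needs 1.2 times as many cycles
rate_b = cycles_b / time_b
print(rate_b / 1e9)           # 4.0 (GHz)
```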
Measuring Performance
Instruction Performance

• The performance of a computer also depends on the number of instructions in a program

• Hence execution time can be expressed in terms of the number of instructions and the average time required to execute an instruction

• The average number of clock cycles per instruction is denoted CPI


Measuring Performance
Instruction Performance

• CPI differs for different instructions

• It depends on the kind of operation an instruction performs

• CPI is an average over all the instructions executed in the program

• CPI allows processors that use the same instruction set architecture to be compared
Measuring Performance
Computer A has a clock period of 250 ps and a CPI of 2.0 for some program; computer B has a clock period of 500 ps and a CPI of 1.2 for the same program. Which computer is faster? Assume both computers use the same instruction set architecture.
Measuring Performance
Computer A has a clock period of 250 ps and a CPI of 2.0 for some program; computer B has a clock period of 500 ps and a CPI of 1.2 for the same program. Which computer is faster? Assume both computers use the same instruction set architecture.

CPU time = CPU clock cycles × Clock period

CPU clock cycles of A = I × 2.0 → CPU time of A = I × 2.0 × 250 ps = 500 × I ps

CPU clock cycles of B = I × 1.2 → CPU time of B = I × 1.2 × 500 ps = 600 × I ps

Computer A is 600/500 = 1.2 times faster than computer B
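A Python sketch of this comparison; the instruction count I is a made-up placeholder, since the ratio is independent of it.

```python
# CPU time = instruction count x CPI x clock period. With the same
# instruction count I on both machines, the ratio is independent of I.
I = 1_000_000                  # hypothetical instruction count
time_a = I * 2.0 * 250e-12     # computer A: CPI 2.0, 250 ps clock
time_b = I * 1.2 * 500e-12     # computer B: CPI 1.2, 500 ps clock
print(time_b / time_a)         # ~1.2 -> A is 1.2 times faster
```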
Measuring Performance
• If the program has N instructions, the performance of a computer based on instruction count and clock cycles can be expressed as

CPU time = N × CPI × Clock cycle time

• Performance of a computer based on the clock rate is

CPU time = N × CPI / Clock rate

• Performance depends on three separate key factors: instruction count, CPI, and clock cycle time (or clock rate)

Measuring Performance
A compiler designer is trying to decide between two code sequences for a computer and is supplied with the following data from the hardware designer (reconstructed from the solution below):

Instruction class            A  B  C
CPI of the class             1  2  3
Instructions in sequence 1   2  1  2
Instructions in sequence 2   4  1  1

Which code sequence executes faster? Which code sequence executes more instructions? What is the CPI of each sequence?
Measuring Performance
No. of instructions executed:

Code sequence 1 = 2 + 1 + 2 = 5
Code sequence 2 = 4 + 1 + 1 = 6

Code sequence 2 executes more instructions

Number of clock cycles required:

CPU clock cycle 1 = (2 x 1) + (1 x 2) + (2 x 3) = 10 cycles


CPU clock cycle 2 = (4 x 1) + (1 x 2) + (1 x 3) = 9 cycles
Code sequence 2 executes faster than code sequence 1 (fewer clock cycles)
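The whole example can be checked with a short sketch (instruction mixes as in the solution above):

```python
# Instruction mix from the example: classes A, B, C cost 1, 2, 3
# cycles each (per the hardware designer's data).
cpi_class = {"A": 1, "B": 2, "C": 3}
seq1 = {"A": 2, "B": 1, "C": 2}   # code sequence 1
seq2 = {"A": 4, "B": 1, "C": 1}   # code sequence 2

def cycles(seq):
    return sum(count * cpi_class[cls] for cls, count in seq.items())

def cpi(seq):
    return cycles(seq) / sum(seq.values())

print(cycles(seq1), cycles(seq2))   # 10 9
print(cpi(seq1), cpi(seq2))         # 2.0 1.5
```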
Measuring Performance
Clocks per instruction:

CPI of sequence 1 = 10 cycles / 5 instructions = 2.0
CPI of sequence 2 = 9 cycles / 6 instructions = 1.5
Measuring Performance
Performance measures and its units

How different parameters are used to find the execution time?


Measuring Performance
• CPU execution time can be easily identified by running the program

• CPI and instruction count may be difficult to find

• Knowing the CPU execution time and clock rate, either CPI or instruction
count is sufficient to find the other

• Instruction count can be found using software

• No. of instructions and average CPI can be found using hardware counter

• CPI depends on the instruction types, processor, memory


Measuring Performance
Performance of the program
• Algorithm
• Language
• Compiler
• Architecture
• Actual hardware
Measuring Performance
• Some processors fetch, decode and execute multiple instructions per clock
cycle

• Instead of clocks per instruction (CPI), instructions per clock cycle (IPC) is used

• If processor executes 2 instructions per clock cycle, IPC = 2.0 and CPI = 0.5

• Turbo mode in Core i7 → clock rate increases by 10% until the chip becomes too warm
Fallacies and Pitfalls
• Fallacies → commonly held misconceptions; Pitfalls → easily made mistakes

• A pitfall related to making the common case fast:

• Consider a program that requires 100 seconds to execute, of which 80 seconds are spent computing multiplications. We want to improve the overall performance 5-fold: by what factor must the multiplication time be reduced?

• Execution time after making improvement is given by Amdahl law
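A sketch of the law applied to the example above (80 s of multiplication out of 100 s total):

```python
# Amdahl's Law: time_after = affected_time / improvement + unaffected_time.
def time_after(t_affected, t_unaffected, factor):
    return t_affected / factor + t_unaffected

# A 5x overall speedup would require a total of 20 s, but even an
# enormous speedup of the multiplications leaves the untouched 20 s,
# so the target is unreachable.
print(time_after(80.0, 20.0, 4.0))     # 40.0 s with a 4x improvement
print(time_after(80.0, 20.0, 1e12))    # still above 20 s
```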


Fallacies and Pitfalls
• Amdahl's Law is widely used to study performance improvement given the time consumed by a function and the factor by which it is sped up

Fallacy:

• Designing for energy efficiency and designing for performance are unrelated goals

• Energy is power integrated over time, so finishing faster can also save energy


Fallacies and Pitfalls
Performance measures
• Clock rate, CPI, number of instructions

• Using only one or two of these metrics to compare performance is misleading

• An alternative to time is MIPS (Million Instructions Per Second)

• MIPS is an instruction execution rate (inverse of time) → a larger value suggests better performance
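A minimal sketch of the MIPS rating formula; the instruction count and time here are made-up illustration values.

```python
# MIPS rating = instruction count / (execution time x 10^6).
def mips_rating(instruction_count, execution_time_s):
    return instruction_count / (execution_time_s * 1e6)

# 2 billion instructions in 4 seconds:
print(mips_rating(2_000_000_000, 4.0))   # 500.0
```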
Instruction and Instruction Set
• Set of commands that computer understand and obey

• Words of a computer language

• Vocabulary of commands understood by a computer/ architecture

• Stored program concept → data and instructions are stored in memory as numbers

• Different computers have different instruction set

• MIPS instruction set is used as an example


Operations of the Computer Hardware
Arithmetic Operations

• Most of the computers have the ability to perform arithmetic operation

• Each MIPS arithmetic instruction performs only one operation

• MIPS assembly language notation

add a, b, c  # a = b + c

• Adding four values requires a sequence of add instructions

Operations of the Computer Hardware
• Addition operation in MIPS requires three operands

• Each arithmetic instruction has three operands (2 sources and 1 destination)

• Destination first

• Design of hardware is simple

• Hardware complexity would be high if the number of operands in an instruction were variable

• Design Principle 1: Simplicity favors regularity


Operations of the Computer Hardware
MIPS Assembly Language
Operations of the Computer Hardware
MIPS Assembly Language
Operations of the Computer Hardware
Operands

• Object on which operation is performed

• Registers and memory of MIPS

• MIPS has 32 registers, each 32 bits wide


Operations of the Computer Hardware
Register Operands

Name        Register No.  Function/ Usage
$zero       0             Constant zero value
$at         1             Reserved for assembler
$v0 - $v1   2 – 3         Values for results and expression evaluation
$a0 - $a3   4 – 7         Arguments
$t0 - $t7   8 – 15        Temporaries
$s0 - $s7   16 – 23       Saved
$t8 - $t9   24 – 25       More temporaries
$k0 - $k1   26 – 27       Reserved for OS
$gp         28            Global Pointer
$sp         29            Stack Pointer
$fp         30            Frame pointer
$ra         31            Return Address
Operands of the Computer Hardware
• The operations performed on the data should be from special locations

• Special locations → registers placed directly in hardware for fast access

• Registers are visible to the programmer

• The size of the register is 32 bits

• Group of 32 bits in MIPS is called word

• Number of registers in MIPS architecture is 32


Operands of the Computer Hardware
• Three operands for the arithmetic operation in MIPS should be any of the 32
registers

• Why only 32 registers and not more than that?

• Design principle 2: Smaller is faster

• A larger register file would require more access time

• Designers keep the number of registers small for a fast clock cycle
Operands of the Computer Hardware
Compiling a C assignment using MIPS registers

f = (g + h) - (i + j)

• Variables f, g, h, i, j can be assigned to registers $s0, $s1, $s2, $s3, $s4

• $t0 and $t1 → temporary registers


Operands of the Computer Hardware
• Programs have simple variables and complex data structures like arrays

• Arrays hold far more data than the registers can

• Arrays are stored in memory

• Registers store a limited amount of data

• Memory stores billions of data items


Operands of the Computer Hardware
Memory Operands

• Arithmetic operations are performed only on data present in registers

• Data is transferred from memory to registers before execution, and from registers to memory after execution

• Data transfer instructions → instructions that move/ transfer data between memory and registers

• To access data in memory, an instruction must specify the address from which the data is to be fetched

• Address → the location where data is stored in the memory array


Operands of the Computer Hardware
Memory Operands

• Memory is a large single-dimensional array

• The address acts as the index into the memory array, starting at 0

• Memory is byte-addressed → each address identifies 8 bits (1 byte)

• 2^32 bytes with byte addressing → addresses 0, 1, …, 2^32 − 1

• 2^30 words with byte addressing → word addresses 0, 4, …, 2^32 − 4
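The byte-vs-word addressing relationship can be sketched directly:

```python
# With byte addressing and 4-byte words, word i starts at byte 4*i,
# and the last word of a 2^32-byte memory starts at byte 2^32 - 4.
word_starts = [4 * i for i in range(4)]
print(word_starts)          # [0, 4, 8, 12]
print(2**32 - 4)            # 4294967292, start of the last word
```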


Operands of the Computer Hardware
Memory Operands

• MIPS is big endian

• To store 0x01234567, the most significant byte (0x01) goes at the lowest address

• Moving data from memory to register → load (lw)

• Moving data from register to memory → store (sw)

https://www.geeksforgeeks.org/little-and-big-endian-mystery/
Operands of the Computer Hardware
Memory Operands

• lw followed by register name, constant/ offset and base register

• Address is calculated by the sum of the constant and content of the register

• C assignment statement: g = h + A[8]

• MIPS instructions:
lw $t0, 32($s3)   # index 8 requires byte offset 32
add $s1, $s2, $t0
Operands of the Computer Hardware
Memory Operands

• sw is followed by a register name, a constant/ offset, and a base register

• All word addresses are multiples of 4

• Alignment restriction

• To form the correct address, the offset is added to the contents of the base register
Operands of the Computer Hardware
Memory Operands

• Compile A[12] = h + A[8] using load and store

• add 48($s3), $s2, 32($s3) is not a valid instruction → direct access of memory in an arithmetic operation is not possible

lw $t0, 32($s3)
add $t0, $s2, $t0
sw $t0, 48($s3)

Operands of the Computer Hardware

• Data access from registers is faster than from memory

• Registers are smaller than memory

• Registers have higher throughput than memory

• Arithmetic instructions read two register operands, operate on them, and store the result

• Data transfer instructions read only one operand and perform no operation on it
Operands of the Computer Hardware

• Spilling registers → keeping the most commonly used variables in registers and the least used in memory

• Register optimization is important

• Compilers map variables to registers

• For good performance, sufficient registers are needed and compilers must use them efficiently
Operands of the Computer Hardware
Constant or Immediate Operands
• Constants occur frequently in instructions

A=A+5
B = B – 10
C = A + B - 10

• Operation with one constant or immediate operand


Operands of the Computer Hardware
• Quick add operation with a constant operand (add immediate (addi))

• Immediate-operand instructions run faster because they avoid a memory access

• Make the common case fast (register $zero → hardwired to the value zero)

Cannot be overwritten

• There is no subtract immediate instruction (subi)

Use addi $s1, $s2, -5
Representing Instructions
• Instructions are represented as numbers

• Instructions → opcode and operands are represented as numbers placed side by side

• Registers are mapped to numbers (see the register table)

• $s0 → 16, $s1 → 17, …

• $t0 → 8, $t1 → 9, …
Representing Instructions
MIPS Assembly Language to Machine Instructions

• Addition instruction

• Machine instruction → a sequence of binary numbers used for communication with the computer

• Decimal representation

• Field → segment of an instruction


Representing Instructions
MIPS Assembly Language to Machine Instructions

• The first and last fields → indicate the opcode/ kind of instruction (0 and 32 → add)

• Second field → number of the first source register (17 → $s1)

• Third field → number of the second source register (18 → $s2)

• Fourth field → number of the destination register (8 → $t0)


Representing Instructions
MIPS Assembly Language to Machine Instructions

• Fifth field → not used (set to zero)

• Instruction format → the layout of an instruction's fields in binary form

• All MIPS instructions are represented in 32 bits (one word)


Representing Instructions
MIPS Assembly Language to Machine Instructions

• A sequence of machine instructions/ machine language → machine code

• Hexadecimal numbers replace long binary number strings


Representing Instructions
MIPS Fields

op  basic operation of the instruction (opcode)

rs  first register, rt  second register, rd  destination register

shamt  shift amount (used for shift operation)

funct  function code (specific variant of the operation in op field)


Representing Instructions
• All MIPS instructions have the same length

• Design Principle 3: Good design demands good compromises

• Different kinds of instructions have different instruction formats

Types of instruction format

• R – type (register)
• I – type (immediate and data transfer instructions)
• J – type (jump)
Representing Instructions
I – type instructions

• 16-bit address field → loads can reach only ±2^15 or 32768 bytes from the base register

• Data → 2^13 or 8192 words

• If there were more than 32 registers, 6 bits would be required for each register field


Representing Instructions
Load word instruction

• $s3 → 19 → rs field

• $t0 → 8 → rt field (destination register)

• 32 → address field
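The same bit-packing sketch works for the I-type layout (6/5/5/16 bits), using the lw $t0, 32($s3) fields above:

```python
# Pack I-type fields op|rs|rt|address (6/5/5/16 bits).
def encode_i(op, rs, rt, address):
    return (op << 26) | (rs << 21) | (rt << 16) | (address & 0xFFFF)

# lw $t0, 32($s3): op=35, rs=19 ($s3), rt=8 ($t0), address=32
print(hex(encode_i(35, 19, 8, 32)))   # 0x8e680020
```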
Representing Instructions
• To avoid hardware complexity, instruction formats are kept similar
R and I type have the same first three fields

• The opcode (first field) determines whether the last half of the instruction is read as three separate fields (reg) or as a single 16-bit field (immediate/ address)
Representing Instructions
• Reg fields hold values 0 to 31

• Address → 16-bit address field

• Funct → 32 (add) and 34 (sub)


Representing Instructions
Translation of MIPS Assembly Language to Machine Instruction

Assume the base address of array A is in register $t1 and h → $s2


Representing Instructions
MIPS Assembly Language to Machine Instructions

• Assembly language to decimal form

• Machine instructions
Representing Instructions
MIPS Machine Language

• First 16 bits are same in I and R – type instruction format


Logical Operations
• Operations performed on fields of bits in a word

• Shift → leaves zeros in the emptied bit positions

• Immediate operands for AND, OR operations


Logical Operations
• If $s0 contains 9 as the data, shift left by 4 gives

• sll instruction

• Result of sll instruction


Logical Operations
Shift operation

• sll → opcode 0, $t2 → 10, $s0 → 16

• rs field is set to zero

• Shift left operation is used in multiplication


Logical Operations
• Shift left by i bits → multiplying by 2^i

• 9 shifted left by 4 → 9 × 2^4 = 144

• AND operation → bit-by-bit operation

• AND can force a set of bits to zero wherever the bit pattern has a 0

• AND operation + bit pattern → mask
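Both properties are easy to demonstrate (a sketch; the mask value is an arbitrary illustration):

```python
# Shifting left by i multiplies by 2^i; ANDing with a mask clears
# every bit where the mask holds a 0.
print(9 << 4)               # 144, i.e. 9 * 2**4
mask = 0x0000000F           # keep only the low 4 bits
print(0x12345678 & mask)    # 8
```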


Logical Operations
AND operation

• Content of $t2

• Content of $t1

• Content of $t0 after AND operation


Logical Operations
OR operation

• Content of $t2

• Content of $t1

• Content of $t0 after OR operation


Logical Operations
NOR Operation (NOR → NOT OR)

• Content of $t3 is all zeros

• Content of $t1

• Content of $t0 after the NOR operation

• NOR with the zero register implements the NOT operation


Making Decisions
• Computers differ from calculators in their ability to make decisions

• Based on the conditions satisfied, different sets of instructions can be executed

• Commonly used decision-making commands

if, else if, else

• MIPS has an if command combined with goto


Making Decisions
• beq instruction → branch if equal

• Branch/ go to the statement labelled L1 if the contents of register 1 and register 2 are equal

• bne instruction → branch if not equal

• beq, bne → conditional branches


Making Decisions
• Unconditional Branching

• Jump instruction
References

D. A. Patterson and J. L. Hennessy, Computer Organization and Design: The Hardware/Software Interface, Fourth Edition, Morgan Kaufmann, 2009.
