COMPUTER ORGANIZATION
AND ARCHITECTURE
UNIT-5
Contents
• Parallelism: Need, types, applications and challenges
• Architecture of Parallel Systems-Flynn’s classification
• ARM Processor: The thumb instruction set
• Processor and CPU cores, Instruction Encoding format
• Memory load and store instructions
• Basics of I/O operations.
• Case study: ARM 5 and ARM 7 Architecture
Parallelism
• Executing two or more operations at the same time is
known as parallelism.
• Parallel processing is a method to improve computer system
performance by executing two or more instructions
simultaneously.
• A parallel computer is a set of processors that are able to
work cooperatively to solve a computational problem.
• Two or more ALUs in the CPU can work concurrently to increase
throughput.
• The system may have two or more processors operating
concurrently.
Goals of parallelism
• To increase the computational speed, i.e. to reduce
the amount of time that you need to wait for a
problem to be solved.
• To increase throughput, i.e. the amount of
processing that can be accomplished during a given
interval of time.
• To improve the performance of the computer for a
given clock speed.
• To solve bigger problems that might not fit in the
limited memory of a single CPU.
Applications of Parallelism
• Numeric weather prediction
• Socio economics
• Finite element analysis
• Artificial intelligence and automation
• Genetic engineering
• Weapon research and defence
• Medical Applications
• Remote sensing applications
Types of parallelism
1. Hardware Parallelism
2. Software Parallelism
• Hardware Parallelism:
The main objective of hardware parallelism is to increase processing speed.
Based on the hardware architecture, hardware parallelism can be divided into two types:
processor parallelism and memory parallelism.
• Processor parallelism
Processor parallelism means that the computer architecture has multiple nodes, multiple
CPUs or multiple sockets, multiple cores, and multiple threads.
• Memory parallelism means shared memory, distributed memory, hybrid distributed-
shared memory, multilevel pipelines, etc. Sometimes it is also modelled as a parallel random
access machine (PRAM): “an abstract model for parallel computation which assumes
that all the processors operate synchronously under a single clock and are able to
randomly access a large shared memory. In particular, a processor can execute an
arithmetic, logic, or memory access operation within a single clock cycle.” Overlapping
and pipelining instructions are further ways of achieving parallelism.
Hardware Parallelism
• One way to characterize the parallelism in a processor is by the
number of instruction issues per machine cycle.
• If a processor issues k instructions per machine cycle, then it is
called a k-issue processor.
• In a modern processor, two or more instructions can be issued per
machine cycle.
• A conventional processor takes one or more machine cycles to
issue a single instruction. Such processors are called one-
issue machines, with a single instruction pipeline in the processor.
• A multiprocessor system built with n k-issue processors should be
able to handle a maximum of nk threads of instructions
simultaneously.
Software Parallelism
• It is defined by the control and data dependence of
programs.
• The degree of parallelism is revealed in the program flow
graph.
• Software parallelism is a function of algorithm,
programming style, and compiler optimization.
• The program flow graph displays the patterns of
simultaneously executable operations.
• Parallelism in a program varies during the execution
period.
• Software parallelism limits the sustained performance of the processor.
Software Parallelism - types
• Instruction-level parallelism
• Task-level parallelism
• Data parallelism
• Transaction-level parallelism
Instruction level parallelism
• Instruction-level parallelism (ILP) is a measure of
how many operations in a program can be performed
simultaneously in a computer.
DLP (Data-Level Parallelism) - example
• Assume we want to sum all the
elements of a given array of size n, and the
time for a single addition operation is Ta time
units.
• In the case of sequential execution, the time
taken by the process will be n*Ta time units.
• If we execute this job as a data-parallel job on
4 processors, the time taken reduces to
(n/4)*Ta + merging overhead time units.
DLP in adding the elements of an array (figure)
DLP in matrix multiplication (figure)
Flynn’s Classification
• This taxonomy distinguishes multiprocessor computer
architectures according to two independent dimensions:
the instruction stream and the data stream.
• An instruction stream is a sequence of instructions executed by
the machine.
• A data stream is a sequence of data, including input and partial or
temporary results, used by the instruction stream.
• Each of these dimensions can have only one of two possible
states: Single or Multiple.
• Flynn’s classification depends on the behaviour of the
control unit and the data processing unit
rather than on their operational and structural interconnections.
Flynn’s Classification
• The four categories of Flynn’s classification are SISD, SIMD, MISD and MIMD.
SISD
• Single instruction: only one instruction stream is being acted
on by the CPU during any one clock cycle.
• Single data: only one data stream is being used as input during
any one clock cycle.
• Also called scalar processors: one instruction at a time, and
each instruction has only one set of operands.
• An SISD computer has one control unit, one processor unit and
a single memory unit.
• Deterministic execution.
• Instructions are executed sequentially.
SIMD
• A type of parallel computer.
• Single instruction: all processing units execute the same
instruction issued by the control unit at any given clock cycle.
• Multiple data: each processing unit can operate on a different
data element; the processors are connected to shared memory or
an interconnection network providing multiple data to the
processing units.
• A single instruction is executed by different processing units
on different sets of data.
MISD
• A single data stream is fed into multiple processing units.
• Each processing unit operates on the data independently via an
independent instruction stream.
• The single data stream is forwarded to different processing
units, each connected to its own control unit and executing the
instructions given to it by that control unit.
• The same data flows through a linear array of processors
executing different instruction streams.
MIMD
• Multiple instruction: every processor may be executing a
different instruction stream.
• Multiple data: every processor may be working with a different
data stream.
• Execution can be synchronous or asynchronous, deterministic
or nondeterministic.
• Different processors each process a different task.
ARM Features Contd…
Thumb instruction set (T variant) Contd….
ARM Core dataflow model
Single-core computer
Single-core CPU chip (the single core)
Multi-core architectures
• Replicate multiple processor cores on a single die
(core 1, core 2, core 3, core 4).
The cores run in parallel
• Each core runs its own thread: thread 1 on core 1,
thread 2 on core 2, and so on.
Within each core, threads are time-sliced (just
like on a uniprocessor)
• Several threads share each core, with execution
interleaved among them.
Difference between Memory-mapped I/O and I/O-mapped I/O

   Memory-Mapped I/O                          | I/O-Mapped I/O
1. Each port is treated as a memory location. | Each port is treated as an independent unit.
2. The CPU's memory address space is divided  | Separate address spaces for memory and
   between memory and I/O ports.              | I/O ports.
3. A single instruction can transfer data     | Two instructions are necessary to transfer
   between memory and a port.                 | data between memory and a port.
4. Data transfer is by means of instructions  | Each port is accessed by means of IN or
   like MOVE.                                 | OUT instructions.
Program Controlled I/O
◼ Program controlled I/O is one in which the processor repeatedly checks a status flag to achieve the
required synchronization between processor & I/O device.
◼ The processor polls the device.
◼ It is useful in small low speed systems where hardware cost must be minimized.
◼ It requires that all input/output operations be executed under the direct control of the CPU.
◼ The transfer is between CPU registers(accumulator) and a buffer register connected to the
input/output device.
◼ The i/o device does not have direct access to main memory.
◼ A data transfer from an input/output device to main memory requires the execution of
several instructions by the CPU, including an input instruction to transfer a word from the
input/output device to the CPU and a store instruction to transfer a word from CPU to main
memory.
◼ One or more additional instructions may be needed for address communication and data
word counting.
Typical Program controlled instructions
Name Mnemonic
Branch BR
Jump JMP
Skip SKP
Call CALL
Return RET
Compare CMP
Test (by ANDing) TST
Case study: ARM 5 and ARM 7 Architecture
Data Sizes and Instruction Sets
Register Organization Summary
• User mode: r0–r15 (r13 = sp, r14 = lr, r15 = pc) and cpsr.
• FIQ mode: shares r0–r7, r15 and cpsr with User mode, but has its
own banked r8–r14 and an spsr.
• IRQ, SVC, Undef and Abort modes: share r0–r12, r15 and cpsr with
User mode, but each has its own banked r13 (sp), r14 (lr) and an spsr.
• Thumb state: r0–r7 are the low registers; r8–r15 are the high registers.
cpsr format (bits 31–0, byte fields f, s, x, c):
  N Z C V Q J   U n d e f i n e d   I F T   mode

• Condition code flags
  – N = Negative result from ALU
  – Z = Zero result from ALU
  – C = ALU operation Carried out
  – V = ALU operation oVerflowed
• Sticky Overflow flag - Q flag
  – Architecture 5TE/J only
  – Indicates if saturation has occurred
• Interrupt Disable bits
  – I = 1: Disables the IRQ
  – F = 1: Disables the FIQ
• T Bit
  – Architecture xT only
  – T = 0: Processor in ARM state; T = 1: Processor in Thumb state
• J bit
  – Architecture 5TEJ only
  – J = 1: Processor in Jazelle state
• Mode bits
  – Specify the processor mode
Program Counter (r15)

Conditional execution examples:
    CMP   r0,#0
    MOVEQ r0,#1
    BLEQ  func

    CMP   r0,#0
    MOVEQ r1,#0
    MOVGT r1,#1

    CMP   r0,#4
    CMPNE r0,#10
    MOVEQ r1,#0
Branch instructions
• Branch : B{<cond>} label
• Branch with Link : BL{<cond>} subroutine_label
  31    28 27    25 24 23                                   0
 |  Cond  | 1 0 1  | L |               Offset                |
• The processor core shifts the offset field left by 2 positions, sign-extends
it and adds it to the PC
– ± 32 Mbyte range
– How to perform longer branches?
Data processing Instructions
• Consist of :
  – Arithmetic: ADD ADC SUB SBC RSB RSC
• Syntax:
  <Operation>{<cond>}{S} Rd, Rn, Operand2
• Immediate value (Operand2)
  – An 8-bit number, with a range of 0-255,
  – rotated right through an even number of positions,
  – which allows an increased range of 32-bit constants to be
    loaded directly into registers.
Immediate constants
• Example:
  – ror #0 : 8-bit value in bits 7–0, range 0-0x000000ff, step 0x00000001
Multiply
• Cycle time
  – Basic MUL instruction
    • 2-5 cycles on ARM7TDMI
    • 1-3 cycles on StrongARM/XScale
    • 2 cycles on ARM9E/ARM102xE
  – +1 cycle for ARM9TDMI (over ARM7TDMI)
  – +1 cycle for accumulate (not on 9E, though the result delay is one cycle longer)
  – +1 cycle for “long”
• The above are “general rules” - refer to the TRM for the core you are using
for the exact details
Single register data transfer
LDR STR Word
LDRB STRB Byte
LDRH STRH Halfword
LDRSB Signed byte load
LDRSH Signed halfword load
• Syntax:
– LDR{<cond>}{<size>} Rd, <address>
– STR{<cond>}{<size>} Rd, <address>
e.g. LDREQB
Software Interrupt (SWI)
• Causes an exception trap to the SWI hardware vector
• The SWI handler can examine the SWI number to
decide what operation has been requested.
• By using the SWI mechanism, an operating system
can implement a set of privileged operations which
applications running in user mode can request.
• Syntax:
– SWI{<cond>} <SWI number>
PSR Transfer Instructions
• Syntax:
  – MRS{<cond>} Rd, <psr>            ; read a status register into Rd
  – MSR{<cond>} <psr>[_fields], Rm   ; write Rm to a status register
where
  – <psr> = CPSR or SPSR
  – [_fields] = any combination of ‘fsxc’
• In User Mode, all bits can be read but only the condition flags
(_f) can be written.
ARM Branches and Subroutines
• B <label>
– PC relative. ±32 Mbyte range.
• BL <subroutine>
– Stores return address in LR
– Returning implemented by restoring the PC from LR
– For non-leaf functions, LR will have to be stacked
    caller                func1                        func2
      :                     STMFD sp!, {regs,lr}         :
      BL func1              :                            :
      :                     BL func2                     :
      :                     :                            MOV pc, lr
                            LDMFD sp!, {regs,pc}
Thumb
• Thumb instructions are 16 bits wide (bits 15–0).
• The inline barrel shifter is not used.
• Example 16-bit Thumb instruction: ADD r2,#1
Example ARM-based System
• ARM core, with nIRQ and nFIQ inputs driven by an interrupt controller.
• On-chip: 8-bit ROM, on-chip RAM, decoder, interrupt controller,
timer, and peripheral I/O on the AMBA bus.
• AMBA bus infrastructure: arbiter, ARM TIC (Test Interface Controller),
Remap/Pause, Reset.
• A bridge and external bus interface connect to external ROM and
external RAM.
• AMBA – Advanced Microcontroller Bus Architecture
• ACT – AMBA Compliance Testbench
• ADK – Complete AMBA Design Kit
• PrimeCell – ARM’s AMBA compliant peripherals