
CHAPTER 6

CONCURRENT PROCESSES

© 2018 Cengage. All Rights Reserved. May not be copied, scanned, or duplicated, in whole or in part, except for use as permitted in a license distributed
with a certain product or service or otherwise on a password-protected website for classroom use.
LEARNING OBJECTIVES
After completing this chapter, you should be able to describe:
 The critical difference between processes and processors, and their
connection
 The significance of a critical section (CS) in process synchronization
 Distributed mutual exclusion
 How concurrent programming languages manage tasks

2
LEARNING OBJECTIVES (CONT'D.)
 The need for process cooperation when several processes work
together
 How several processors, executing a single job, cooperate
 The similarities and differences between processes and threads
 The significance of concurrent programming languages and their
applications

3
WHAT IS PARALLEL PROCESSING?
(1 OF 6)
Parallel processing
 Running a single program on multiple CPU cores
 Large problems can often be divided into smaller ones, which can then be
solved at the same time.
 Multiprocessing
 Two or more processors operate in unison
 Two or more CPUs execute instructions simultaneously
 Processor Manager
 Coordinates activity of each processor
 Synchronizes interaction among CPUs

4
WHAT IS PARALLEL PROCESSING?
(2 OF 6)
Parallel processing development
 Enhances throughput
 Increases computing power
Benefits
 Increased reliability
 More than one CPU
 If one processor fails, others take over
 Faster processing
 Two or more instructions are processed in parallel at the same time

5
WHAT IS PARALLEL PROCESSING?
(3 OF 6)
Faster instruction processing methods
 CPU allocated to each program or job

 CPU allocated to each working set or parts of it

 Individual instructions subdivided

6
WHAT IS PARALLEL PROCESSING?
(4 OF 6)

7
WHAT IS PARALLEL PROCESSING?
(5 OF 6)

(Table 6.1)
The six steps of the four-processor fast food lunch stop.
8
WHAT IS PARALLEL PROCESSING?
(6 OF 6)

9
EVOLUTION OF MULTIPROCESSORS
Today hardware costs have been reduced
 Multiprocessor systems are now available on all types of systems

Multiprocessing occurs at three levels


 Job level

 Process level

 Thread level

 Each requires different synchronization frequency

10
LEVELS OF MULTIPROCESSING (2 OF 2)

(Table 6.2)
Typical levels of parallelism and the required synchronization among processors.

11
INTRODUCTION TO MULTI-CORE
PROCESSORS
Multi-core processing
 Several processor cores placed on a single chip
Problem
 Heat
Solution
 Single chip with two processor cores in the same space
 Allows two sets of simultaneous calculations
 80 or more cores on a single chip
 Each of the two cores runs more slowly than a single-core chip

12
PROCESS SYNCHRONIZATION
SOFTWARE
Successful process synchronization
 Locks up the resource in use
 Protects it from other processes until it is released
 Only when the resource is released is a waiting process allowed to use it

Mistakes in synchronization can result in:
 Starvation
 A job is left waiting indefinitely
 Deadlock
 Occurs if a key resource is being used and never released

13
PROCESS SYNCHRONIZATION
SOFTWARE (CONT'D.)
Critical sections
 Part of a program
 The critical region must complete execution before other processes can access its resources
 Other processes must wait before accessing the critical section's resources
 Processes within critical sections cannot be interleaved

14
PROCESS SYNCHRONIZATION
SOFTWARE (CONT'D.)
Synchronization
 Implemented as lock-and-key arrangement:
 Process determines key availability
 Process obtains key
 Puts key in lock
 Makes it unavailable to other processes

15
SEMAPHORES – MUTUAL EXCLUSION
• P and V operations on semaphore s
– Enforce the mutual exclusion concept
• Semaphore called mutex (MUTual EXclusion)
• P(mutex): if mutex > 0 then mutex := mutex – 1
• V(mutex): mutex := mutex + 1 (a code sketch follows below)

• Critical region
– Ensures parallel processes modify shared data only while in critical
region
• Parallel computations
– Mutual exclusion explicitly stated and maintained
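
A minimal sketch of this mutex arrangement, assuming Python's threading module stands in for the OS primitives: Semaphore.acquire() plays the role of P and Semaphore.release() the role of V. The worker function and shared_counter are hypothetical, used only to show shared data being modified inside the critical section.

import threading

mutex = threading.Semaphore(1)   # mutex starts at 1: the resource is free
shared_counter = 0               # shared data modified only inside the CS

def worker(iterations: int) -> None:
    global shared_counter
    for _ in range(iterations):
        mutex.acquire()          # P(mutex): wait until mutex > 0, then decrement
        try:
            shared_counter += 1  # critical section: modify the shared data
        finally:
            mutex.release()      # V(mutex): increment, waking one waiting process

threads = [threading.Thread(target=worker, args=(10_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(shared_counter)            # always 40000, because updates are never interleaved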

16
RING-BASED ALGORITHM
 Processes arranged in a logical ring

 Each process has a communication channel to the


next process in the ring.

 Token is passed from process to process in a single


direction

 If process that receives token does not want to enter


CS, it passes token to the next process. Otherwise, it
retains token until exiting CS.
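
A minimal sketch of this ring, assuming threads stand in for processes and one Queue per process stands in for the channel that delivers the token to it. The names N, channels, wants_cs, and the one-second run are illustrative assumptions, not part of the original algorithm description.

import threading
import queue
import time

N = 4                                              # a ring of four processes (illustrative)
channels = [queue.Queue() for _ in range(N)]       # channels[i]: one-way channel into process i
wants_cs = [threading.Event() for _ in range(N)]   # set while process i wants to enter the CS
done = threading.Event()

def process(i: int) -> None:
    while not done.is_set():
        try:
            token = channels[i].get(timeout=0.1)   # wait for the token to arrive
        except queue.Empty:
            continue
        if wants_cs[i].is_set():                   # retain the token while inside the CS
            print(f"p{i + 1} enters its critical section")
            wants_cs[i].clear()
        channels[(i + 1) % N].put(token)           # then pass the token to the next process

for event in wants_cs:
    event.set()                                    # every process wants to enter the CS once
threads = [threading.Thread(target=process, args=(i,)) for i in range(N)]
for t in threads:
    t.start()
channels[0].put("TOKEN")                           # inject the single token at p1
time.sleep(1)                                      # let the token circulate
done.set()
for t in threads:
    t.join()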
17
DISTRIBUTED MUTUAL EXCLUSION
 Motivation – critical sections

 Model and requirements

 Evaluation criteria

 Central server algorithm

 Ring-based algorithm

18
MOTIVATION-CRITICAL SECTIONS
 Collection of processes share resources

 When accessing shared resources (critical section), must


ensure consistency and prevent interference.

 Need for distributed mutual exclusion

 Solution must be based solely on message-passing: cannot


use shared memory.

19
MODEL AND REQUIREMENTS
A system of N processes pi, I=1,2,…,N
Assumptions:
 Asynchronous system
 Processes do not fail
 Message delivery is reliable
Requirements:
 Safety: at most one process may execute CS at a time
(mutual exclusion)
 Liveliness: requests to enter/exit CS eventually succeed
(no deadlock, no starvation)
 Ordering: if one request to enter CS happened before
another, entry to CS is granted in that order.

20
EVALUATION CRITERIA

 Consumed bandwidth: proportional to the number of messages sent in each entry-CS and exit-CS operation
 Client delay: time spent at entry-CS and exit-CS operations (worst case)
 Synchronization delay: time between one process exiting the CS and another process entering the CS

21
CENTRAL SERVER ALGORITHM
 A server grants permission to enter the CS (via a token)
 To enter the CS: send a request to the server and wait until it replies with the token
 To exit the CS: send the token back to the server
 The server grants the token if no process holds it; otherwise it queues the request. A FCFS queue of requests is maintained by the server (see the sketch below).
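
A minimal sketch of this request/grant/release protocol, assuming threads stand in for processes and Queue objects stand in for the message channels. The message names REQUEST and RELEASE, the server and client functions, and the four clients are illustrative assumptions.

import threading
import queue
from collections import deque

server_inbox = queue.Queue()                     # requests and releases arrive here
grant_boxes = {}                                 # per-process channel for the grant message

def server(num_clients: int) -> None:
    token_free = True
    waiting = deque()                            # FCFS queue of pending requests
    for _ in range(2 * num_clients):             # each client sends one REQUEST and one RELEASE
        kind, pid = server_inbox.get()
        if kind == "REQUEST":
            if token_free:
                token_free = False
                grant_boxes[pid].put("TOKEN")    # grant immediately
            else:
                waiting.append(pid)              # otherwise queue the request (FCFS)
        else:                                    # RELEASE
            if waiting:
                grant_boxes[waiting.popleft()].put("TOKEN")  # hand the token to the next in line
            else:
                token_free = True

def client(pid: int) -> None:
    server_inbox.put(("REQUEST", pid))           # enter-CS: request and wait for the token
    grant_boxes[pid].get()
    print(f"p{pid} is in its critical section")
    server_inbox.put(("RELEASE", pid))           # exit-CS: return the token to the server

client_ids = [1, 2, 3, 4]
for pid in client_ids:
    grant_boxes[pid] = queue.Queue()
threads = [threading.Thread(target=server, args=(len(client_ids),))]
threads += [threading.Thread(target=client, args=(pid,)) for pid in client_ids]
for t in threads:
    t.start()
for t in threads:
    t.join()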

22
SERVER MANAGING A MUTUAL EXCLUSION
TOKEN FOR A SET OF PROCESSES
(Figure: a central server manages the mutual exclusion token for processes p1–p4. The server holds a FCFS queue of waiting requests, here containing p4 and p2; the messages shown are 1. Request token, 2. Release token, and 3. Grant token.)

23
EVALUATION OF ALGORITHM

 Entering the CS takes two messages (a request followed by a grant)
 Exiting the CS takes one release message
 Synchronization delay: the time taken for a round trip, i.e., a release message to the server followed by a grant message to the next process to enter the CS
** A token is a special series of bits that travels around a token-ring network. As the
token circulates, computers attached to the network can capture it. The token acts
like a ticket, enabling its owner to send a message across the network. There is only
one token for each network, so there is no possibility that two computers will
attempt to transmit messages at the same time.

24
A RING OF PROCESSES TRANSFERRING
A MUTUAL EXCLUSION TOKEN
(Figure: processes p1, p2, p3, p4, …, pn arranged in a ring, with the mutual exclusion token passed around the ring in one direction.)

25
EVALUATION OF ALGORITHM
 Bandwidth: continuously consumed, except when a process is in the CS
 Requesting-process delay: between 0 messages (when it has just received the token) and N messages (when it has just passed on the token)
 Exiting the CS requires 1 message
 Synchronization delay between exiting the CS and the next entry to the CS is 1 to N message transmissions
26
CONCURRENT PROGRAMMING

 Concurrent processing system


 One job uses several processors

 Executes sets of instructions in parallel

 Requires programming language and computer system support

27
APPLICATIONS OF CONCURRENT
PROGRAMMING
• Precedence of operations or rules of precedence
• Solving an equation: all arithmetic calculations are performed from the left, in the following order (a short example follows):
– Perform all calculations in parentheses
– Calculate all exponents
– Perform all multiplications and divisions, resolved from the left
– Perform all additions and subtractions, resolved from the left
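
For illustration only (the numbers are made up), these rules evaluate a small expression as follows:

value = 4 + 3 * (2 + 1) ** 2   # parentheses first: (2 + 1) = 3
                               # exponents next:    3 ** 2  = 9
                               # then * and /:      3 * 9   = 27
                               # then + and -:      4 + 27  = 31
print(value)                   # prints 31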

28
APPLICATIONS OF CONCURRENT
PROGRAMMING

Z = 10 – A / B + C (D + E) ** (F – G)

(Figure 6.9)
Using three CPUs and the COBEGIN command, this six-step equation can be resolved in these three steps.

29
APPLICATIONS OF CONCURRENT
PROGRAMMING

Z = 10 – A / B + C (D + E) ** (F – G)

(Table 6.5)
The sequential computation of the expression requires several steps. In this example, there are six steps, but each step, such as the last one, may involve more than one machine operation. The COBEGIN/COEND version below performs the independent steps in parallel (see also the sketch that follows).

COBEGIN
   T1 = A / B
   T2 = (D + E)
   T3 = (F - G)
COEND

COBEGIN
   T4 = 10 - T1
   T5 = T2 ** T3
COEND

Z = T4 + C(T5)
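
A minimal sketch of these two COBEGIN blocks, assuming Python's concurrent.futures stands in for COBEGIN/COEND; the operand values are made up purely for illustration.

from concurrent.futures import ThreadPoolExecutor

A, B, C, D, E, F, G = 8.0, 2.0, 3.0, 1.0, 4.0, 3.0, 1.0   # hypothetical operands

with ThreadPoolExecutor(max_workers=3) as pool:
    # First COBEGIN ... COEND: three independent sub-expressions in parallel
    f1 = pool.submit(lambda: A / B)
    f2 = pool.submit(lambda: D + E)
    f3 = pool.submit(lambda: F - G)
    T1, T2, T3 = f1.result(), f2.result(), f3.result()

    # Second COBEGIN ... COEND: two independent steps in parallel
    f4 = pool.submit(lambda: 10 - T1)
    f5 = pool.submit(lambda: T2 ** T3)
    T4, T5 = f4.result(), f5.result()

# Final step is sequential
Z = T4 + C * T5
print(Z)   # equals 10 - A/B + C*(D+E)**(F-G) computed sequentially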

30
APPLICATIONS OF CONCURRENT
PROGRAMMING

A = 3 * B * C + 4 / (D + E) ** (F – G)

31
APPLICATIONS OF CONCURRENT
PROGRAMMING (CONT'D.)

A = 3 * B * C + 4 / (D + E) ** (F – G)

32
SUMMARY
 Multiprocessing
 Single-processor systems

 Interacting processes obtain control of CPU at different times

 Systems with two or more CPUs

 Control synchronized by processor manager

 Processor communication and cooperation

33
Thanks!

34
