
UNIT-1 CONCURRENT AND PARALLEL PROGRAMMING

Syllabus: Concurrent versus sequential programming. Concurrent programming constructs and race
condition. Synchronization primitives.

______________________________________________________________________________

SEQUENTIAL PROGRAMMING:
Sequential programming is writing an application as a series of steps. A modern compiler might
rearrange the steps for faster execution. Almost anything, with the exception of I/O or
communication, can be considered sequential; even the steps within a single thread can be
considered sequential.

All programs are sequential in that they execute a sequence of instructions in a pre-defined
order, e.g. x = x + 1.

There is a single thread of execution or control.

Sequential Program:

P;

Q;

R;

x = 1; // P

y = x + 1; // Q

x = y + 2; // R
For every possible execution of this program, P must precede Q and Q must precede R.
forall e: P → Q → R

The "→" operator means precedes or happens before. P → Q means that P must begin before Q
begins and, further, P must finish before Q finishes, i.e. there is no overlap in the execution of the
instructions making up P and Q. If each component P and Q is made up of several instructions then:

forall e: p1 → p2 ... → pm → q1 → q2 ... → qn

There is a total ordering of the instructions making up P & Q.

It is clear that the final values of the variables in the example program depend on the order that
statements are executed in. In general, given the same input data, a sequential program will
always execute the same sequence of instructions and it will always produce the same results.
Sequential program execution is deterministic.

The sequential paradigm has the following two characteristics:

● The textual order of statements specifies their order of execution;


● Successive statements must be executed without any overlap (in time) with one
another.

Neither of these properties applies to concurrent programs.

CONCURRENT PROGRAMMING:

A concurrent program is a set of sequential programs that can be executed in parallel.

Concurrency means that multiple processes or threads are making progress at the same time. Even
if only one thread is executed at a time by the CPU, these threads can be switched in and out as
required, so no thread necessarily runs to completion before another is scheduled. In this sense, all
the threads are executing concurrently.

[Figure: four threads running concurrently; only one of them can be scheduled on a processor at a
time.]
Levels of Concurrency

Low-Level Concurrency

In this level of concurrency, there is explicit use of atomic operations. This kind of concurrency
cannot be used for application building, as it is very error-prone and difficult to debug. Python, for
example, does not expose this kind of concurrency directly.

Mid-Level Concurrency

In this level of concurrency, there is no use of explicit atomic operations; explicit locks are used
instead. Python and other programming languages support this kind of concurrency, and it is what
most application programmers use.

High-Level Concurrency

In this level of concurrency, neither explicit atomic operations nor explicit locks are used. Python
has the concurrent.futures module to support this kind of concurrency.
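As a sketch of high-level concurrency, the following uses Python's concurrent.futures module; the task function square is a made-up stand-in for real work such as fetching a URL:

```python
from concurrent.futures import ThreadPoolExecutor, as_completed

def square(n):
    # Stand-in task; real code might fetch a URL or read a file.
    return n * n

with ThreadPoolExecutor(max_workers=4) as pool:
    futures = [pool.submit(square, n) for n in range(5)]
    # No explicit locks or atomic operations appear anywhere:
    # the executor handles the synchronization internally.
    results = sorted(f.result() for f in as_completed(futures))

print(results)  # [0, 1, 4, 9, 16]
```

The application programmer only submits tasks and collects results; all locking is hidden inside the executor.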

Concurrency can be divided into different levels


● Instruction level is the execution of two or more machine instructions simultaneously
● Statement level is the execution of two or more statements simultaneously
● Unit level is the execution of two or more subprogram units simultaneously
● Program level is the execution of two or more programs simultaneously
● Concurrent control methods increase programming flexibility

CATEGORIES OF CONCURRENCY
● There are two distinct categories of concurrent unit control: physical concurrency and
logical concurrency.
● Physical concurrency happens when several program units from the same program
literally execute simultaneously on more than one processor.
● Logical concurrency, on the other hand, happens when the execution of several
programs takes place in an interleaved fashion on a single processor.

A concurrent program is one consisting of two or more processes (threads of execution or
control), for example one process executing x = x + 1 while another executes y = x.

Each process is itself a sequential program.

It is often useful to be able to do several things at once:

• when latency (responsiveness) is an issue, e.g., server design, cancel buttons on dialogs,
etc.;

• when you want to parallelise your program, e.g., when you want to distribute your code
across multiple processors;

• when your program consists of a number of distributed parts, e.g., client–server designs.

Concurrent designs can still be effective even if you only have a single processor:

Consider a client–server system for file downloads (e.g. BitTorrent, FTP)

• without concurrency

– it is impossible to interact with the client (e.g., to cancel the download or start
another one) while the download is in progress

– the server can only handle one download at a time—anyone else who requests a
file has to wait until your download is finished

• with concurrency

– the user can interact with the client while a download is in progress (e.g., to cancel it, or
start another one)

More examples of concurrency

• GUI-based applications: e.g., javax.swing

• Mobile code: e.g., java.applet

• Web services: HTTP daemons, servlet engines, application servers

• Component-based software: Java beans often use threads internally

• I/O processing: concurrent programs can use time which would otherwise be wasted
waiting for slow I/O

• Real-time systems: operating systems, transaction processing systems, industrial
process control, embedded systems, etc.

• Parallel processing: simulation of physical and biological systems, graphics, economic
forecasting, etc.

Advantages of concurrent programs

• Reactive programming–User can interact with applications while tasks are running,
e.g., stopping the transfer of a big file in a web browser.

• Availability of services–Long-running tasks need not delay short-running ones, e.g., a web
server can serve an entry page while at the same time processing a complex query.

• Parallelism–Complex programs can make better use of multiple resources in new multi-core
processor architectures, SMPs, LANs or WANs, e.g., scientific/engineering applications,
simulations, games, etc.

• Controllability–Tasks requiring certain preconditions can suspend and wait until the
preconditions hold, then resume execution transparently.

Disadvantages of concurrent programs

• Safety–«Nothing bad ever happens»–Concurrent tasks should not corrupt the consistent
state of the program.

• Liveness–«Something good eventually happens»–Tasks should not suspend and indefinitely
wait for each other (deadlock).

• Non-determinism–Mastering the exponential number of interleavings due to different schedules.

• Resource consumption–Threads can be expensive: overhead of scheduling, context switching,
and synchronization. Concurrent programs can run slower than their sequential counterparts
even with multiple CPUs!

Concurrent versus sequential programming:

Concurrent programming:

● Concurrent computation, i.e. simultaneous execution of processes or threads at the same time.
● Multiple threads of execution are running simultaneously through your program.
● Multiple PCs (program counters) are active, one for each thread.
● Example: (1 + 9) + (2 + 8) + (3 + 7) + (4 + 6) + 5 + 0 = 45, where the pairs can be added at
the same time.

Sequential programming:

● Involves a consecutive and ordered execution of processes, one after another.
● A single thread of execution weaves its way through your program.
● A single PC (program counter) identifies the current instruction being executed.
● A simple example of this is consecutive additions: 0 + 1 + 2 + 3 + 4 + 5 + 6 + 7 + 8 + 9 = 45.
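The pairwise regrouping in the concurrent case can be sketched in Python; this is a toy illustration in which each pair sum is an independent task handed to a thread pool:

```python
from concurrent.futures import ThreadPoolExecutor

# Each pair is an independent task, so the partial sums could be
# computed at the same time; the final combination is sequential.
pairs = [(1, 9), (2, 8), (3, 7), (4, 6), (5, 0)]

with ThreadPoolExecutor() as pool:
    partial_sums = list(pool.map(sum, pairs))  # [10, 10, 10, 10, 5]

total = sum(partial_sums)
print(total)  # 45
```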

Concurrent Programming Constructs:

1) Interleaving:

Interleaving is a process or methodology to make a system more efficient, fast and reliable by
arranging data in a noncontiguous manner. Interleaving divides memory into small chunks. It is
used as a high-level technique to solve memory issues for motherboards and chips. By increasing
the bandwidth with which data in chunks of memory can be accessed, the overall performance of
the processor and system increases, because the processor can fetch and send more data to and
from memory in the same amount of time.

There are various types of interleaving:

1. Two-way interleaving: two memory blocks are accessed at the same time for reading and
writing operations. The chance for overlapping exists.
2. Four-way interleaving: four memory blocks are accessed at the same time.
3. Error-correction interleaving: errors in communication systems occur in bursts rather
than singly. Interleaving controls these errors with specific algorithms.

Latency is one disadvantage of interleaving: it takes extra time, and it hides the structure of
errors, which can be inefficient.

2) Mutual Exclusion:

A mutual exclusion object (mutex) is a program object that prevents simultaneous access to a
shared resource. This concept is used in concurrent programming together with a critical section, a
piece of code in which processes or threads access a shared resource. Only one thread can own the
mutex at a time. Typically, a mutex with a unique name is created when a program starts. When a
thread needs a resource, it locks the mutex to prevent concurrent access to the resource by other
threads; upon releasing the resource, the thread unlocks the mutex. A mutex comes into the picture
when two threads work on the same data at the same time. It acts as a lock and is the most basic
synchronization tool. When a thread tries to acquire a mutex, it gains the mutex if it is available;
otherwise the thread is put to sleep.
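A minimal sketch of a mutex in Python, using threading.Lock; the counter value and thread count are arbitrary:

```python
import threading

counter = 0
lock = threading.Lock()          # the mutex

def increment(times):
    global counter
    for _ in range(times):
        with lock:               # acquire the mutex; released automatically
            counter += 1         # critical section: one thread at a time

threads = [threading.Thread(target=increment, args=(10_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # always 40000, because the lock serializes the increments
```

A thread that finds the lock taken is put to sleep until the owner releases it, exactly as described above.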
3) Safety and Liveness property:

Any property of all executions of a concurrent program can be formulated in terms of safety and
liveness.

Safety: A safety property asserts that nothing "bad" happens throughout execution.

Liveness: A liveness property asserts that something "good" eventually does happen.

For example, the property that a program always produces the correct answer can be formulated
using one safety property and one liveness property. The safety property is that the program never
terminates with the wrong answer: terminating with the wrong answer is the "bad thing". The
liveness property is that the program eventually does terminate: termination is the "good thing".
4) Semaphores:

Semaphores are integer variables that are used to solve the critical section problem by using two
atomic operations, wait and signal that are used for process synchronization.
The definitions of wait and signal are as follows:

1. Wait

The wait operation decrements the value of its argument S if it is positive. If S is zero, the
operation waits (busy-waits) until S becomes positive before decrementing it.

wait(S)
{
    while (S <= 0)
        ;       // busy-wait until S becomes positive
    S--;
}

2. Signal

The signal operation increments the value of its argument S.

signal(S)
{
    S++;
}

Types of Semaphores
There are two main types of semaphores i.e. counting semaphores and binary semaphores. Details
about these are given as follows:

1. Counting Semaphores

These are integer-valued semaphores with an unrestricted value domain. They are used to
coordinate resource access, where the semaphore count is the number of available resources.
If resources are added, the semaphore count is automatically incremented, and if resources
are removed, the count is decremented.

2. Binary Semaphores

Binary semaphores are like counting semaphores, but their value is restricted to 0 and 1. The
wait operation only succeeds when the semaphore is 1, and the signal operation only succeeds
when the semaphore is 0. Binary semaphores are sometimes easier to implement than counting
semaphores.
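A counting semaphore can be sketched with Python's threading.Semaphore; the count of 3 models three available resources, and the bookkeeping variables active and peak (illustrative names) exist only to observe the limit:

```python
import threading

pool_slots = threading.Semaphore(3)   # three identical resources
active = 0
peak = 0
guard = threading.Lock()

def use_resource():
    global active, peak
    with pool_slots:                  # wait(S): blocks while S == 0, then S -= 1
        with guard:
            active += 1
            peak = max(peak, active)
        # ... use the resource here ...
        with guard:
            active -= 1
    # leaving the with-block performs signal(S): S += 1

threads = [threading.Thread(target=use_resource) for _ in range(10)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(peak <= 3)  # True: never more than 3 threads hold a resource at once
```

A Semaphore(1) behaves like a binary semaphore.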

5) Monitors:

Monitors and semaphores are used for process synchronization and allow processes to access the
shared resources using mutual exclusion. Monitors are abstract data types and contain shared data
variables and procedures. The shared data variables cannot be directly accessed by a process and
procedures are required to allow a single process to access the shared data variables at a time.

monitor monitorName
{
    data variables;

    procedure P1(....) { .... }
    procedure P2(....) { .... }
    ....
    procedure Pn(....) { .... }

    initialization code(....) { .... }
}

Only one process can be active in a monitor at a time. Other processes that need to access the shared
variables in a monitor have to line up in a queue and are only provided access when the previous
process releases the shared variables.
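Python has no monitor construct, but a monitor can be approximated as a class whose methods all acquire the same internal lock, so the shared data can only be reached through those procedures; this Account class is an illustrative sketch, not a language-level monitor:

```python
import threading

class Account:
    """Monitor-style object: the shared balance can only be reached
    through methods that all acquire the same internal lock."""
    def __init__(self):
        self._lock = threading.Lock()
        self._balance = 0            # shared data, never touched directly

    def deposit(self, amount):
        with self._lock:             # only one process active in the monitor
            self._balance += amount

    def balance(self):
        with self._lock:
            return self._balance

acct = Account()
threads = [threading.Thread(target=lambda: [acct.deposit(1) for _ in range(1000)])
           for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(acct.balance())  # 4000: no deposits are lost
```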
6) Channel (programming):

In computing, a channel is a model for inter-process communication and synchronization via
message passing. A message may be sent over a channel, and another process or thread that has a
reference to the channel can receive messages sent over it as a stream. Different implementations
of channels may be buffered or unbuffered, and either synchronous or asynchronous.
Channel implementations
Channels modeled after the CSP model are inherently synchronous: a process waiting to receive an
object from a channel will block until the object is sent. This is also called rendezvous behaviour.
Typical supported operations are presented below using the example of the libthread channel API.

● channel creation, of fixed or variable size, returning a reference or handle

Channel* chancreate(int elemsize, int bufsize)

● sending to a channel

int chansend(Channel *c, void *v)

● receiving from a channel

int chanrecv(Channel *c, void *v)
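Python has no built-in channel type, but queue.Queue can approximate a buffered channel; this sketch mirrors the libthread operations above, although unlike a CSP rendezvous channel, a put() on a non-full queue does not wait for a receiver:

```python
import queue
import threading

channel = queue.Queue(maxsize=2)   # roughly analogous to chancreate(elemsize, 2)

def producer():
    for item in ("a", "b", "c"):
        channel.put(item)          # analogous to chansend; blocks when full
    channel.put(None)              # sentinel: no more data

received = []

def consumer():
    while True:
        item = channel.get()       # analogous to chanrecv; blocks when empty
        if item is None:
            break
        received.append(item)

t1 = threading.Thread(target=producer)
t2 = threading.Thread(target=consumer)
t1.start(); t2.start()
t1.join(); t2.join()

print(received)  # ['a', 'b', 'c']
```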

7) Message Passing:

Process communication is the mechanism provided by the operating system that allows processes to
communicate with each other. This communication could involve a process letting another process
know that some event has occurred or transferring of data from one process to another. One of the
models of process communication is the message passing model.
The message passing model allows multiple processes to read and write data to a message queue
without being connected to each other. Messages are stored on the queue until their recipient
retrieves them. Message queues are quite useful for inter-process communication and are used by
most operating systems.
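A sketch of the message passing model in Python; threads stand in for processes here, and the two sides share no variables, only the messages placed on each other's inbox queues:

```python
import queue
import threading

server_inbox = queue.Queue()
client_inbox = queue.Queue()

def server():
    while True:
        msg = server_inbox.get()            # wait for a request message
        if msg == "quit":
            break
        client_inbox.put(msg.upper())       # reply with a transformed message

def client(words, replies):
    for w in words:
        server_inbox.put(w)                 # send a request
        replies.append(client_inbox.get())  # wait for the reply
    server_inbox.put("quit")

replies = []
s = threading.Thread(target=server)
c = threading.Thread(target=client, args=(["ping", "pong"], replies))
s.start(); c.start()
s.join(); c.join()

print(replies)  # ['PING', 'PONG']
```

Messages sit in each queue until the recipient retrieves them, as described above.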

Two Models for Concurrent Programming

There are two common models for concurrent programming: shared memory and message
passing.

Shared memory. In the shared memory model of concurrency, concurrent modules interact by
reading and writing shared objects in memory.

Other examples of the shared-memory model:


● A and B might be two processors (or processor cores) in the same computer, sharing the
same physical memory.

● A and B might be two programs running on the same computer, sharing a common
file system with files they can read and write.

● A and B might be two threads in the same Java program (we’ll explain what a thread is
below), sharing the same Java objects.

Message passing. In the message-passing model, concurrent modules interact by sending
messages to each other through a communication channel. Modules send off messages, and
incoming messages to each module are queued up for handling. Examples include:

● A and B might be two computers in a network, communicating by network connections.

● A and B might be a web browser and a web server – A opens a connection to B, asks for
a web page, and B sends the web page data back to A.

● A and B might be an instant messaging client and server.


● A and B might be two programs running on the same computer whose input and output
have been connected by a pipe, like ls | grep typed into a command prompt.

RACE CONDITION:

A race condition means that the correctness of the program (the satisfaction of post conditions
and invariants) depends on the relative timing of events in concurrent computations A and B.
When this happens, we say “A is in a race with B.”

Some interleavings of events may be OK, in the sense that they are consistent with what a single,
non-concurrent process would produce, but other interleavings produce wrong answers, violating
postconditions or invariants.

● Race conditions
o When correctness of result (postconditions and invariants) depends on the relative
timing of events
These ideas connect to our three key properties of good software mostly in bad ways.
Concurrency is necessary but it causes serious problems for correctness. We’ll work on
fixing those problems.

● Safe from bugs. Concurrency bugs are some of the hardest bugs to find and fix, and
require careful design to avoid.
● Easy to understand. Predicting how concurrent code might interleave with other
concurrent code is very hard for programmers to do. It’s best to design in such a way
that programmers don’t have to think about that.

● Ready for change. Not particularly relevant here.
(Or)

Race condition
A race condition is an undesirable situation that occurs when a device or system attempts to perform
two or more operations at the same time, but because of the nature of the device or system, the
operations must be done in the proper sequence to be done correctly.
Race conditions are most commonly associated with computer science. In
computer memory or storage, a race condition may occur if commands to read and write a large
amount of data are received at almost the same instant, and the machine attempts to overwrite some
or all of the old data while that old data is still being read. The result may be one or more of the
following: a computer crash, an "illegal operation" notification and shutdown of the program,
errors reading the old data or errors writing the new data. A race condition can also occur if
instructions are processed in the incorrect order.

Suppose for a moment that two processes need to perform a bit flip at a specific memory location.
Under normal circumstances the operation should work like this:

Process 1      Process 2      Memory value

Read value                    0
Flip value                    1
               Read value     1
               Flip value     0

In this example, Process 1 performs a bit flip, changing the memory value from 0 to 1. Process 2
then performs a bit flip and changes the memory value from 1 to 0.

If a race condition occurred causing these two processes to overlap, the sequence could potentially
look more like this:

Process 1      Process 2      Memory value

Read value                    0
               Read value     0
Flip value                    1
               Flip value     1

In this example, the bit has an ending value of 1 when its value should be 0. This occurs because
Process 2 is unaware that Process 1 is performing a simultaneous bit flip.
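The lost-update interleaving above can be reproduced in Python with an unsynchronized counter; the sleep() call is an artificial way of widening the race window so the bad interleaving almost always occurs:

```python
import threading
import time

counter = 0

def unsafe_increment():
    global counter
    value = counter          # read
    time.sleep(0.01)         # another thread can read the same value here
    counter = value + 1      # write back: may overwrite a concurrent update

threads = [threading.Thread(target=unsafe_increment) for _ in range(5)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # expected 5, but typically prints 1: most updates were lost
```

As with the bit-flip table, whether the wrong answer appears depends on the relative timing of the threads, which is exactly what makes race conditions hard to reproduce and debug.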

Security vulnerabilities caused by race conditions

When a program that is designed to handle tasks in a specific sequence is asked to perform two or
more operations simultaneously, an attacker can take advantage of the time gap between when the
service is initiated and when a security control takes effect in order to create a deadlock or thread
block situation. With deadlock, two or more threads must wait for a lock in a circular chain. This
defect can cause the entire software system to halt, because such locks can never be acquired if the
chain is circular. Thread block can also dramatically impact application performance: in this type
of concurrency defect, one thread calls a long-running operation while holding a lock, preventing
the progress of other threads.

Preventing race conditions

In computing environments, race conditions can be prevented by serialization of memory or
storage access. This means that if read and write commands are received close together, the read
command is executed and completed first by default.

In a network, a race condition may occur if two users attempt to access an available channel at the
same instant, and neither computer receives notification the channel is occupied before the system
grants access. Statistically, this kind of coincidence will most likely occur in networks with long lag
times, such as those that use geostationary satellites. To prevent such a race condition from
developing, a priority scheme must be devised. For example, the subscriber whose username begins
with the earlier letter of the alphabet (or the lower numeral) may get priority by default when two
subscribers attempt to access the system within a prescribed increment of time. Hackers can take
advantage of race-condition vulnerabilities to gain unauthorized access to networks.

SYNCHRONIZATION PRIMITIVES

Synchronization primitives are simple software mechanisms provided by a platform (e.g. an
operating system) to its users for the purposes of supporting thread or process synchronization.
They are usually built using lower-level mechanisms (e.g. atomic operations, memory barriers,
spinlocks, context switches, etc.).

Mutexes, events, condition variables and semaphores are all synchronization primitives, as are
shared and exclusive locks. A monitor is generally considered a high-level synchronization tool:
it is an object which guarantees mutual exclusion for its methods using other synchronization
primitives (usually exclusive locks, with condition variables to support waiting and signaling).
In some contexts, when a monitor is used as a building block, it is also considered a
synchronization primitive.

Mutex class
The System.Threading.Mutex class, like Monitor, grants exclusive access to a shared resource.
Use one of the Mutex.WaitOne method overloads to request the ownership of a mutex. Like
Monitor, Mutex has thread affinity and the thread that acquired a mutex must release it by
calling the Mutex.ReleaseMutex method.

Unlike Monitor, the Mutex class can be used for inter-process synchronization. To do that, use a
named mutex, which is visible throughout the operating system. To create a named mutex
instance, use a Mutex constructor that specifies a name. You also can call the
Mutex.OpenExisting method to open an existing named system mutex.

Semaphore and SemaphoreSlim classes


The System.Threading.Semaphore and System.Threading.SemaphoreSlim classes limit the
number of threads that can access a shared resource or a pool of resources concurrently.
Additional threads that request the resource wait until any thread
releases the semaphore. Because the semaphore doesn't have thread affinity, a thread can
acquire the semaphore and another one can release it.

SemaphoreSlim is a lightweight alternative to Semaphore and can be used only for
synchronization within a single process boundary.

On Windows, you can use Semaphore for the inter-process synchronization. To do that, create a
Semaphore instance that represents a named system semaphore by using one of the Semaphore
constructors that specifies a name or the Semaphore.OpenExisting method. SemaphoreSlim
doesn't support named system semaphores.

EventWaitHandle, AutoResetEvent, ManualResetEvent, and ManualResetEventSlim classes

The System.Threading.EventWaitHandle class represents a thread synchronization event.

A synchronization event can be either in an unsignaled or signaled state. When the state of an
event is unsignaled, a thread that calls the event's WaitOne overload is blocked until an event is
signaled. The EventWaitHandle.Set method sets the state of an event to signaled.
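Python's threading.Event is an analogous (though simpler) primitive: wait() blocks while the event is unsignaled, and set() moves it to the signaled state, much like EventWaitHandle.Set. A small sketch (log is an illustrative name):

```python
import threading

ready = threading.Event()
log = []

def waiter():
    log.append("waiting")
    ready.wait()             # blocks until another thread calls set()
    log.append("released")

t = threading.Thread(target=waiter)
t.start()
ready.set()                  # signal the event; the waiter wakes up
t.join()

print(log)  # ['waiting', 'released']
```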

Monitor class
The System.Threading.Monitor class grants mutually exclusive access to a shared resource by
acquiring or releasing a lock on the object that identifies the resource. While a lock is held, the
thread that holds the lock can again acquire and release the lock. Any other thread is blocked
from acquiring the lock and the Monitor.Enter method waits until the lock is released. The Enter
method acquires a released lock. You can also use the Monitor.TryEnter method to specify the
amount of time during which a thread attempts to acquire a lock. Because the Monitor class has
thread affinity, the thread that acquired a lock must release the lock by calling the Monitor.Exit
method.

You can coordinate the interaction of threads that acquire a lock on the same object by using the
Monitor.Wait, Monitor.Pulse, and Monitor.PulseAll methods.

Condition variables are synchronization primitives that enable threads to wait until a particular
condition occurs. Condition variables are user-mode objects that cannot be shared across
processes.

Condition variables enable threads to atomically release a lock and enter the sleeping state.
They can be used with critical sections or slim reader/writer (SRW) locks. Condition variables
support operations that "wake one" or "wake all"
waiting threads. After a thread is woken, it re-acquires the lock it released when the thread
entered the sleeping state.
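This wait/wake protocol can be sketched with Python's threading.Condition, whose wait(), notify() and notify_all() correspond to sleeping, "wake one" and "wake all"; the names data_ready and woken are illustrative:

```python
import threading

cond = threading.Condition()
data_ready = False
woken = []

def waiter(name):
    with cond:
        while not data_ready:      # re-check the condition after waking
            cond.wait()            # atomically releases the lock and sleeps;
                                   # re-acquires the lock before returning
        woken.append(name)

threads = [threading.Thread(target=waiter, args=(i,)) for i in range(3)]
for t in threads:
    t.start()

with cond:
    data_ready = True
    cond.notify_all()              # "wake all": every waiter re-checks

for t in threads:
    t.join()

print(sorted(woken))  # [0, 1, 2]
```

Note the while loop around wait(): a woken thread must re-check its condition, since another thread may have changed the state between the wake-up and the lock re-acquisition.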

Condition variable function      Description

InitializeConditionVariable      Initializes a condition variable.
SleepConditionVariableCS         Sleeps on the specified condition variable and releases the
                                 specified critical section as an atomic operation.
SleepConditionVariableSRW        Sleeps on the specified condition variable and releases the
                                 specified SRW lock as an atomic operation.
WakeAllConditionVariable         Wakes all threads waiting on the specified condition variable.
WakeConditionVariable            Wakes a single thread waiting on the specified condition variable.

A critical section is not itself a synchronization primitive. It is a part of an execution path that
must be protected from concurrent execution in order to maintain some invariants; you need to
use synchronization primitives to protect a critical section.
