➢ If the Producer produces data and puts it into the shared buffer faster than the Consumer can consume it, a ‘buffer overrun’ occurs: the Producer tries to put data into a full buffer. Conversely, if the Consumer consumes data faster than the Producer can produce it, a ‘buffer under-run’ occurs: the Consumer tries to read from an empty buffer
➢ Both of these conditions will lead to inaccurate data and/or data loss
➢ The following code snippet illustrates the producer-consumer problem, where Producer_thread puts data into a shared buffer of size 25 for Consumer_thread to consume. If the Producer completely fills the buffer, it restarts filling the buffer from the bottom, and if the Consumer consumes all the data, it starts consuming again from the bottom of the buffer
#include <windows.h>
#include <stdio.h>

#define N 25            //Defining buffer size as 25
int buffer[N];          //Shared buffer for Producer and Consumer threads

//********************************************************************
//PRODUCER THREAD
DWORD WINAPI Producer_thread(LPVOID param) {
    while (1) {
        for (int i = 0; i < N; ++i) {
            buffer[i] = i + 1;
            printf("Produced : Buffer[%d] = %4d\n", i, buffer[i]);
            Sleep(50);
        }
    }
    return 0;
}
//********************************************************************
//CONSUMER THREAD
DWORD WINAPI Consumer_thread(LPVOID param) {
    int value;
    while (1) {
        for (int i = 0; i < N; ++i) {
            value = buffer[i];
            printf("Consumed : Buffer[%d] = %4d\n", i, value);
            Sleep(25);
        }
    }
    return 0;
}
//********************************************************************
//MAIN THREAD
int main() {
    DWORD Producer_thread_ID;
    CreateThread(NULL, 0, Producer_thread, NULL, 0,
                 &Producer_thread_ID);   //Creating Producer thread
    DWORD Consumer_thread_ID;
    CreateThread(NULL, 0, Consumer_thread, NULL, 0,
                 &Consumer_thread_ID);   //Creating Consumer thread
    Sleep(500);                          //Waiting for a while before exiting
    return 0;
}
Output
The two threads run independently and are scheduled for execution based on the scheduling policies adopted by the OS. Here, the Consumer thread runs faster than the Producer thread because it is scheduled more frequently. This leads to buffer under-run, since the Consumer ends up repeatedly reading old data from the buffer. If the Producer thread were scheduled more frequently, buffer overrun would occur because it would overwrite data in the buffer.
6. Mutual Exclusion through Busy Waiting/Spin Lock
➢ The ‘Busy waiting’ technique uses a lock variable for implementing mutual exclusion
➢ Each process/thread checks this lock variable before entering its critical section. The lock is set to 1 by a process/thread that is already in its critical section; otherwise the lock is set to 0
➢ Under this implementation, the processes/threads are kept constantly busy, forced to wait for the lock to become available before proceeding. Hence this synchronisation mechanism is popularly known as ‘Busy waiting’
➢ It can also be visualised as a lock around which the process/thread spins, checking for its availability. Spin locks are useful in scenarios where processes/threads are likely to be blocked only for a short time waiting on the lock, as they avoid the OS overheads of context saving and process rescheduling
➢ The major challenge in implementing lock variable-based synchronisation is the non-availability of a single atomic instruction that combines the reading, comparing and setting of the lock variable. Often these three operations are achieved with multiple low-level instructions that depend on the underlying processor instruction set and the (cross) compiler in use
➢ Another drawback is that if a process holds the lock for a long time and is pre-empted by the OS, the other threads waiting for the lock may have to spin for a long time before getting it. The ‘Busy waiting’ mechanism keeps the processes/threads constantly active performing no useful work, wasting processor time and increasing power consumption. This can be resolved using interlocked operations, which are free from waiting, avoid the user mode to kernel mode transition delay, and thereby increase overall performance
7. Mutual Exclusion through Sleep & Wakeup
➢ When a process/thread is not allowed to access the critical section, it undergoes ‘Sleep’ and enters the ‘Blocked’ state
➢ It is awakened by the process/thread that currently holds the lock on the critical section, which sends a wakeup message to the sleeping process/thread and then leaves the critical section
➢ The ‘Sleep & Wakeup’ policy for mutual exclusion can be implemented in different kernel-dependent ways. Some important techniques for implementing the ‘Sleep & Wakeup’ policy for mutual exclusion in the Windows XP/CE OS kernels are:
i. Semaphore – a system resource that a process/thread wanting to access a shared resource can acquire. Based on how the sharing limit of the resource is implemented, semaphores are classified into two types, namely ‘Binary Semaphores’ and ‘Counting Semaphores’. The binary semaphore, or Mutex, provides exclusive access to the shared resource by allocating it to a single process/thread at a time. The counting semaphore limits usage of the resource to the maximum value of the count it supports and maintains
ii. Critical Section Objects – a process/thread can create a ‘Critical Section’ area by creating a variable of type CRITICAL_SECTION. The ‘Critical Section’ must be initialised using the InitializeCriticalSection(LPCRITICAL_SECTION lpCriticalSection) API before the threads of a process can use it for getting exclusive access
iii. Events – a notification mechanism where a thread/process waits for an event that is set by another thread/process