
5. Producer-Consumer/Bounded Buffer Problem

➢ It is a data sharing problem in which two threads/processes concurrently access a shared buffer of fixed size without any synchronisation: a ‘Producer’ thread/process that produces data and a ‘Consumer’ thread/process that consumes the data produced by the Producer.

➢ If the Producer produces data and puts it into the shared buffer faster than the Consumer can consume it, it will lead to ‘buffer overrun’, where the Producer tries to put data into a full buffer. Conversely, if the Consumer consumes data faster than the Producer can produce it, it will lead to ‘buffer under-run’, where the Consumer tries to read from an empty buffer.

➢ Both of these conditions will lead to inaccurate data and/or data loss

➢ The following code snippet illustrates the producer-consumer problem, where Producer_thread puts data into a shared buffer of size 25 for Consumer_thread to consume. If the Producer completely fills the buffer, it restarts filling the buffer from the bottom, and if the Consumer consumes all the data, it starts consuming again from the bottom of the buffer.
#include <windows.h>
#include <stdio.h>
#define N 25 //Defining buffer size as 25
int buffer[N]; //Shared buffer for Producer and Consumer threads
//********************************************************************
//PRODUCER THREAD
void Producer_thread(void){
    while(1){
        for (int i = 0; i < N; ++i){
            buffer[i] = i+1;
            printf("Produced : Buffer[%d] = %4d\n", i, buffer[i]);
            Sleep(50);
        }
    }
}
//********************************************************************
//CONSUMER THREAD
void Consumer_thread(void){
    int value;
    while(1){
        for (int i = 0; i < N; ++i){
            value = buffer[i];
            printf("Consumed : Buffer[%d] = %4d\n", i, value);
            Sleep(25);
        }
    }
}
//********************************************************************
//MAIN THREAD
int main(){
    DWORD Producer_thread_ID;
    CreateThread(NULL, 0, (LPTHREAD_START_ROUTINE) Producer_thread,
        NULL, 0, &Producer_thread_ID); //Creating Producer thread
    DWORD Consumer_thread_ID;
    CreateThread(NULL, 0, (LPTHREAD_START_ROUTINE) Consumer_thread,
        NULL, 0, &Consumer_thread_ID); //Creating Consumer thread
    Sleep(500); //Waiting for a while before exiting
    return 0;
}
Output
The two threads run independently and are scheduled for execution based on the scheduling policies adopted by the OS. Here, it is seen that the Consumer thread runs faster than the Producer thread as it is scheduled more frequently. This leads to buffer under-run, because the Consumer ends up repeatedly reading old data from the buffer. If the Producer thread were scheduled more frequently, buffer overrun would occur because it would over-write data in the buffer.
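
➢ As a preview of the synchronisation mechanisms covered in the following sections, the sketch below shows one common way of fixing the above problem: two counting semaphores track the empty and filled slots of the buffer, and a critical section protects the buffer indices. This is a minimal illustrative sketch, not part of the original example; the names hSemEmpty, hSemFull and csBuffer are assumptions introduced here.
#include <windows.h>
#include <stdio.h>
#define N 25

int buffer[N];
int in = 0, out = 0;                 //Next write and read positions
HANDLE hSemEmpty, hSemFull;          //Counting semaphores: free slots / filled slots
CRITICAL_SECTION csBuffer;           //Protects the buffer indices (illustrative)

void Producer_thread(void){
    for (int i = 0; i < 100; ++i){
        WaitForSingleObject(hSemEmpty, INFINITE);  //Wait for a free slot
        EnterCriticalSection(&csBuffer);
        buffer[in] = i + 1;
        in = (in + 1) % N;
        LeaveCriticalSection(&csBuffer);
        ReleaseSemaphore(hSemFull, 1, NULL);       //Signal one filled slot
    }
}

void Consumer_thread(void){
    for (int i = 0; i < 100; ++i){
        WaitForSingleObject(hSemFull, INFINITE);   //Wait for a filled slot
        EnterCriticalSection(&csBuffer);
        int value = buffer[out];
        out = (out + 1) % N;
        LeaveCriticalSection(&csBuffer);
        ReleaseSemaphore(hSemEmpty, 1, NULL);      //Signal one free slot
        printf("Consumed : %4d\n", value);
    }
}

int main(){
    DWORD id;
    hSemEmpty = CreateSemaphore(NULL, N, N, NULL); //All slots initially empty
    hSemFull  = CreateSemaphore(NULL, 0, N, NULL); //No slot initially filled
    InitializeCriticalSection(&csBuffer);
    HANDLE h[2];
    h[0] = CreateThread(NULL, 0, (LPTHREAD_START_ROUTINE) Producer_thread, NULL, 0, &id);
    h[1] = CreateThread(NULL, 0, (LPTHREAD_START_ROUTINE) Consumer_thread, NULL, 0, &id);
    WaitForMultipleObjects(2, h, TRUE, INFINITE);  //Wait for both threads to finish
    CloseHandle(h[0]); CloseHandle(h[1]);
    CloseHandle(hSemEmpty); CloseHandle(hSemFull);
    DeleteCriticalSection(&csBuffer);
    return 0;
}
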
6. Mutual Exclusion through Busy Waiting/Spin Lock

➢ The ‘Busy waiting’ technique uses a lock variable for implementing mutual exclusion

➢ Each process/thread checks this lock variable before entering its critical section. The lock is set to 1 by a process/thread if it is already in its critical section; otherwise the lock is set to 0

➢ Using this implementation, the processes/threads are always kept busy and are forced to wait for the availability of the lock to proceed further. Hence this synchronisation mechanism is popularly known as ‘Busy waiting’

➢ It can also be visualised as a lock around which the process/thread spins, checking for its availability. Spin locks are useful in scenarios where the processes/threads are likely to be blocked only for a short period of time while waiting for the lock, as they avoid the OS overheads of context saving and process rescheduling

➢ The major challenge in implementing lock variable-based synchronisation is the non-availability of a single atomic instruction that combines the reading, comparing and setting of the lock variable. Often these three operations are achieved with multiple low-level instructions that depend on the underlying processor instruction set and the (cross) compiler in use.

➢ Another drawback is that if the lock is held for a long time by a process and that process is pre-empted by the OS, the other threads waiting for the lock may have to spin for a long time before getting it. The ‘Busy waiting’ mechanism keeps the processes/threads constantly active, performing work that is not useful and causing wastage of processor time and high power consumption. This can be resolved using interlocked operations, which are free from such waiting, avoid the user mode to kernel mode transition delay and thereby increase the overall performance (a minimal interlocked sketch follows the code snippet below)

➢ Example code snippet:


//Inside the main thread
bool Lock; //Global declaration of lock variable
Lock = FALSE; //Initialise the lock to indicate that it is available
//Inside the child thread(s)
while(Lock == TRUE); //Checks the lock for availability
Lock = TRUE; //Acquiring the available lock
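
➢ The snippet above is not atomic: another thread may acquire the lock between the while test and the assignment. As noted earlier, interlocked operations close this gap. The sketch below is a minimal illustration using the Win32 InterlockedExchange API, which atomically writes the lock variable and returns its previous value; the lock is re-declared as a LONG because the interlocked APIs operate on LONG values, and the helper names Enter_critical and Leave_critical are assumptions for illustration.
#include <windows.h>

volatile LONG Lock = 0;              //0 = free, 1 = held

void Enter_critical(void){
    //Atomically set Lock to 1 and examine its previous value;
    //spin until the previous value was 0 (i.e. the lock was free)
    while (InterlockedExchange(&Lock, 1) == 1)
        ;                            //Busy wait (spin)
}

void Leave_critical(void){
    InterlockedExchange(&Lock, 0);   //Release the lock atomically
}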
7. Mutual Exclusion through Sleep & Wakeup

➢ When a process/thread is not allowed to access the critical section, it undergoes ‘Sleep’ and enters the ‘Blocked’ state

➢ It is awakened by the process/thread that has currently locked the critical section, which sends a wakeup message to the sleeping process/thread and then leaves the critical section

➢ The ‘Sleep & Wakeup’ policy for mutual exclusion can be implemented in different kernel-dependent ways. Some important techniques for implementing the ‘Sleep & Wakeup’ policy for mutual exclusion in the Windows XP/CE OS kernels are:

i. Semaphore – a system resource that a process/thread wanting to access a shared resource can acquire. Based on the implementation of the sharing limitation of the shared resource, semaphores are classified into two types, namely ‘Binary Semaphores’ and ‘Counting Semaphores’. The binary semaphore or Mutex provides exclusive access to the shared resource by allocating the resource to a single process/thread at a time. The counting semaphore limits the usage of the resource to the maximum count value supported and maintained by it

ii. Critical Section Objects – a process/thread can create a ‘Critical Section’ area by creating a variable of type CRITICAL_SECTION. The ‘Critical Section’ must be initialised using the InitializeCriticalSection(LPCRITICAL_SECTION lpCriticalSection) API before the threads of a process can use it for getting exclusive access (a short usage sketch follows this list).

iii. Events – a notification mechanism where a thread/process waits for an event that is set by another thread/process (a short sketch using events follows the semaphore example below).
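
➢ A minimal usage sketch of the Critical Section Object described in item ii is given below; the shared counter and the Worker_thread function are illustrative assumptions, not part of the original notes.
#include <windows.h>
#include <stdio.h>

CRITICAL_SECTION cs;                 //Critical Section object
int shared_counter = 0;              //Shared data protected by cs (illustrative)

void Worker_thread(void){
    for (int i = 0; i < 1000; ++i){
        EnterCriticalSection(&cs);   //Blocks if another thread owns cs
        shared_counter++;            //Exclusive access to the shared data
        LeaveCriticalSection(&cs);   //Release ownership
    }
}

int main(){
    DWORD id;
    InitializeCriticalSection(&cs);  //Must be initialised before use
    HANDLE h[2];
    h[0] = CreateThread(NULL, 0, (LPTHREAD_START_ROUTINE) Worker_thread, NULL, 0, &id);
    h[1] = CreateThread(NULL, 0, (LPTHREAD_START_ROUTINE) Worker_thread, NULL, 0, &id);
    WaitForMultipleObjects(2, h, TRUE, INFINITE);
    printf("Counter = %d\n", shared_counter);
    CloseHandle(h[0]); CloseHandle(h[1]);
    DeleteCriticalSection(&cs);      //Destroy the object when done
    return 0;
}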

➢ The ‘Sleep & Wakeup’ approach is typically used in embedded systems that operate on battery, as it avoids the wasteful processor activity of busy waiting


➢ Example code for mutual exclusion through ‘Sleep & Wakeup’ using a Semaphore:
#include <stdio.h>
#include <windows.h>
#define MAX_SEMAPHORE_COUNT 1 //Semaphore object for exclusive use
#define thread_count 2 //Number of child threads
//************************************
char Buffer[10] = {1,2,3,4,5,6,7,8,9,10}; //Shared buffer
short int counter = 0;
HANDLE hSemaphore; //Handle to the Semaphore object
//************************************
// Child Thread 1
void Process_A(void){
    for (int i = 0; i < 5; i++){
        if (Buffer[i] > 0){
            //Wait for the signalling of the Semaphore object
            WaitForSingleObject(hSemaphore, INFINITE);
            //Semaphore is acquired
            counter++;
            printf("Process A : Counter = %d\n", counter);
            //Release the Semaphore object
            if (!ReleaseSemaphore(
                    hSemaphore, // handle to semaphore
                    1,          // increase count by one
                    NULL        // not interested in previous count
                )){
                //Semaphore release failed
                //Print error code & return
                printf("Release Semaphore Failed with Error Code: %d\n",
                       GetLastError());
                return;
            }
        }
    }
    return;
}
//************************************
// Child Thread 2
void Process_B(void){
    for (int j = 5; j < 10; j++){
        if (Buffer[j] > 0){
            //Wait for the signalling of the Semaphore object
            WaitForSingleObject(hSemaphore, INFINITE);
            //Semaphore is acquired
            counter++;
            printf("Process B : Counter = %d\n", counter);
            //Release the Semaphore object
            if (!ReleaseSemaphore(
                    hSemaphore, // handle to semaphore
                    1,          // increase count by one
                    NULL        // not interested in previous count
                )){
                //Semaphore release failed
                //Print error code & return
                printf("Release Semaphore Failed Error Code: %d\n",
                       GetLastError());
                return;
            }
        }
    }
    return;
}
//************************************
//Main Thread
int main(){
    //Define HANDLEs for the child threads
    HANDLE child_threads[thread_count];
    DWORD thread_id;
    //Create Semaphore object
    hSemaphore = CreateSemaphore(
        NULL,                // default security attributes
        MAX_SEMAPHORE_COUNT, // initial count
        MAX_SEMAPHORE_COUNT, // maximum count
        "Semaphore");        // Semaphore object with name "Semaphore"
    if (NULL == hSemaphore){
        //Semaphore object creation failed
        printf("Semaphore Object Creation Failed: Error Code: %d",
               GetLastError());
        return 1;
    }
    //Create Child thread 1
    child_threads[0] = CreateThread(NULL, 0,
        (LPTHREAD_START_ROUTINE)Process_A, (LPVOID) 0, 0, &thread_id);
    //Create Child thread 2
    child_threads[1] = CreateThread(NULL, 0,
        (LPTHREAD_START_ROUTINE)Process_B, (LPVOID) 0, 0, &thread_id);
    //Check the success of creation of the child threads
    for (int i = 0; i < thread_count; i++){
        if (NULL == child_threads[i]){
            //Child thread creation failed
            printf("Child thread Creation failed with Error Code: %d",
                   GetLastError());
            return 1;
        }
    }
    //Wait for the termination of the child threads
    WaitForMultipleObjects(thread_count, child_threads, TRUE, INFINITE);
    //Close handles of the child threads
    for (int i = 0; i < thread_count; i++){
        CloseHandle(child_threads[i]);
    }
    //Close the Semaphore object handle
    CloseHandle(hSemaphore);
    return 0;
}
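
➢ For the ‘Events’ mechanism listed in item iii, the following is a minimal illustrative sketch using the standard Win32 event APIs (CreateEvent, SetEvent, WaitForSingleObject); the Waiter_thread function and the Sleep-based timing are assumptions for illustration.
#include <windows.h>
#include <stdio.h>

HANDLE hEvent;                       //Handle to the event object

void Waiter_thread(void){
    printf("Waiter : waiting for the event\n");
    WaitForSingleObject(hEvent, INFINITE);  //Sleep until the event is set
    printf("Waiter : event received, proceeding\n");
}

int main(){
    DWORD id;
    //Auto-reset event, initially non-signalled
    hEvent = CreateEvent(NULL, FALSE, FALSE, NULL);
    HANDLE hThread = CreateThread(NULL, 0,
        (LPTHREAD_START_ROUTINE) Waiter_thread, NULL, 0, &id);
    Sleep(100);                      //Do some work, then wake the waiter
    SetEvent(hEvent);                //Signal the event
    WaitForSingleObject(hThread, INFINITE);
    CloseHandle(hThread);
    CloseHandle(hEvent);
    return 0;
}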
