
Operating Systems Cheat Sheet

by Makaila Akahoshi (makahoshi1)

Introduction

What is an Operating System?
A program that acts as an intermediary between a user and the hardware; it sits between the application programs and the hardware.

What are the three main purposes of an operating system?
1.) to provide an environment for a computer user to execute programs
2.) to allocate and separate the resources of the computer as needed
3.) to serve as a control program: supervise the execution of user programs and manage the operation and control of I/O devices

What does an Operating System do?
Resource Allocator: allocates and manages all of the resources, deciding between competing requests for efficient and fair resource use.
Control Program: controls the execution of programs to prevent errors and improper use of the computer.

Goals of the operating system
Execute programs and make solving problems easier, make the computer system easy to use, and use the computer hardware in an efficient manner.

What happens when you start your computer?
When you start your computer, the bootstrap program is loaded at power-up or reboot. This program is usually stored in ROM or EPROM, generally known as firmware. It loads the operating system kernel and starts execution. The one program running at all times is the kernel.

What are interrupts and how are they used?
An interrupt is an electronic signal. Interrupts serve as a mechanism for process cooperation and are often used to control I/O; a program issues an interrupt to request operating system support. The hardware raises an interrupt and transfers control to the interrupt handler, and when the handler finishes, the interrupt ends.

Operating System Structure
The operating system uses multiprogramming. Multiprogramming organizes jobs so that the CPU always has something to do, which allows no wasted time. In multiprogramming one job is selected and run via job scheduling; when that job is waiting, the OS switches to another job.

How does the operating system run a program? What does it need to do?
1.) reserve machine time
2.) manually load the program into memory
3.) load the starting address and begin execution
4.) monitor and control the execution of the program from the console

What is a process?
A process is a program in execution; it is active, while a program is passive. The program becomes a process when it is running. The process needs resources to complete its task, so it waits. A process includes a program counter, a stack, and a data section.

What is process management?
The operating system is responsible for managing the processes. The OS
1.) creates and deletes the user and system processes
2.) suspends and resumes processes
3.) provides mechanisms for process synchronization
4.) provides mechanisms for process communication
5.) provides mechanisms for deadlock handling

Problems that Processes Run Into

The Producer and Consumer Problem
The producer-consumer problem is common in cooperating processes: the producer process produces information that is consumed by the consumer process.

Producer and Consumer Explained
The Producer relies on the Consumer to make space in the data area so that it may insert more information, while at the same time the Consumer relies on the Producer to insert information into the data area so that it may remove that information.

Examples of the Producer-Consumer Problem
The client-server paradigm: the client is the consumer and the server is the producer.

Solution to the Producer-Consumer Problem
The consumer and producer processes must run concurrently. To allow this there needs to be an available buffer of items that can be filled by the producer and emptied by the consumer, so the producer can produce one item while the consumer is consuming another. The producer and consumer must be synchronized, so that the consumer does not try to consume an item that has not yet been produced.

Two types of buffers can be used
Unbounded buffer: no limit on the size of the buffer.
Bounded buffer: there is a fixed buffer size; in this case the consumer must wait if the buffer is empty and the producer must wait if the buffer is full.
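The bounded-buffer synchronization just described can be sketched in Python. This is an illustrative analogy using the standard threading module (the class and variable names are assumptions, not code from the cheat sheet): the producer waits while the buffer is full, the consumer waits while it is empty.

```python
import threading

class BoundedBuffer:
    """Fixed-size circular buffer shared by a producer and a consumer."""
    def __init__(self, size):
        self.buf = [None] * size
        self.inp = 0          # next free position (the sheet's 'in')
        self.out = 0          # first full position
        self.count = 0
        self.cond = threading.Condition()

    def produce(self, item):
        with self.cond:
            while self.count == len(self.buf):   # buffer full: producer waits
                self.cond.wait()
            self.buf[self.inp] = item
            self.inp = (self.inp + 1) % len(self.buf)
            self.count += 1
            self.cond.notify_all()

    def consume(self):
        with self.cond:
            while self.count == 0:               # buffer empty: consumer waits
                self.cond.wait()
            item = self.buf[self.out]
            self.out = (self.out + 1) % len(self.buf)
            self.count -= 1
            self.cond.notify_all()
            return item

buffer = BoundedBuffer(4)
results = []
producer = threading.Thread(target=lambda: [buffer.produce(i) for i in range(10)])
consumer = threading.Thread(target=lambda: [results.append(buffer.consume()) for _ in range(10)])
producer.start(); consumer.start()
producer.join(); consumer.join()
print(results)  # items arrive in production order: [0, 1, ..., 9]
```

The condition variable makes the waiting explicit: each side blocks only when its own precondition (space, or data) is false.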

Published 21st October, 2015. Last updated 13th May, 2016.
Bounded Buffer Solution
The bounded buffer can be used to enable processes to share memory. In the example code, the variable 'in' points to the next free position in the buffer and 'out' points to the first full position. The buffer is empty when in == out; when (in + 1) % buffer size == out, the buffer is full.

CPU Scheduling

What is CPU scheduling?
The basis of multiprogrammed operating systems.

What is the basic concept of CPU scheduling?
To have a process running at all times in order to maximize CPU utilization. The operating system takes the CPU away from a process that is waiting, and gives the CPU to another process.

What is a CPU-I/O Burst Cycle?
The process execution cycle in which the process alternates between CPU execution and I/O wait. It begins with a CPU burst, then an I/O burst, then another CPU burst, and so on. The final CPU burst ends with a system request to terminate execution.

What is a CPU Scheduler? (Also called short-term scheduler)
Carries out a selection process that picks a process in the ready queue to be executed when the CPU becomes idle. It then allocates the CPU to that process.

When might a CPU scheduling decision happen?
1) The process switches from the running to the waiting state
2) The process switches from the running to the ready state
3) The process switches from the waiting to the ready state
4) The process terminates

Scheduling under 1 and 4 only is nonpreemptive (or cooperative); otherwise it is preemptive. Preemptive: priority is given to high-priority processes. Nonpreemptive: the running task is executed to completion and cannot be interrupted.

Potential issues with preemptive scheduling?
1) Processes that share data: while one process is in the middle of updating its data, another process given priority to run cannot read the data from the first process.
2) The operating system kernel: another process might be given priority while the kernel is being used by a process. The kernel might be in the middle of important data changes, leaving it in a vulnerable state. A possible solution is to wait for the kernel to return to a consistent state before starting another process.

What is the dispatcher?
A module that gives control of the CPU to the process selected by the CPU scheduler. This involves
a) switching context
b) switching to user mode
c) jumping to the proper location in the user program to restart that program
It is invoked in every process switch; the time the dispatcher takes to stop one process and start another is the dispatch latency.

Describe the Scheduling Criteria
Various criteria are used when comparing CPU-scheduling algorithms.
a) CPU utilization: keep the CPU as busy as possible. Ranges from 0 to 100%; real systems usually range from 40% (lightly loaded) to 90% (heavily loaded).
b) Throughput: measures the number of processes that are completed per time unit.
c) Turnaround time: the amount of time to execute a particular process. It is the sum of time spent waiting to get into memory, waiting in the ready queue, executing on the CPU, and doing I/O.
d) Waiting time: the time spent waiting in the ready queue.
e) Response time: the amount of time it takes to produce a response after the submission of a request. Generally limited by the speed of the output device.

It is best to maximize CPU utilization and throughput and to minimize turnaround time, waiting time, and response time, but this can vary depending on the task.

Describe the First-Come, First-Served scheduling algorithm
The process that requests the CPU first is allocated the CPU first. A Gantt chart illustrates the schedule of start and finish times of each process. The average waiting time is heavily dependent on the order of arrival of the processes: if processes with longer burst times arrive first, the whole set of processes will have a longer average wait time. This effect is called the convoy effect.
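The convoy effect can be seen by computing FCFS waiting times for two arrival orders. The burst values below are illustrative assumptions in the style of the classic textbook example, not numbers from the cheat sheet:

```python
def fcfs_waiting_times(bursts):
    """Waiting time of each process when all arrive at time 0 in list order."""
    waits, clock = [], 0
    for burst in bursts:
        waits.append(clock)   # a process waits until all earlier bursts finish
        clock += burst
    return waits

# Long burst first: P1=24, P2=3, P3=3
long_first = fcfs_waiting_times([24, 3, 3])    # [0, 24, 27] -> average 17
# Same processes, short bursts first
short_first = fcfs_waiting_times([3, 3, 24])   # [0, 3, 6]  -> average 3
print(sum(long_first) / 3, sum(short_first) / 3)  # 17.0 3.0
```

The same three jobs give a far worse average wait when the long burst leads the queue, which is exactly the convoy effect.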

Describe the shortest-job-first scheduling algorithm
Associates processes with the length of their next CPU burst and gives the CPU to the process with the smallest next CPU burst. If the next CPU bursts of multiple processes are the same, first-come-first-served scheduling is used. Although it is optimal compared with FCFS, it is difficult to know the length of the next CPU request.

What is exponential averaging?
Uses the previous CPU bursts to predict future bursts. The formula is τ(n+1) = α·t(n) + (1 − α)·τ(n), where t(n) is the length of the nth CPU burst and τ(n+1) is the predicted value of the next burst. α is a value from 0 to 1. If α = 0, recent history has no effect and current conditions are assumed to persist. If α = 1, only the most recent CPU burst matters. Most commonly α = 1/2, where recent and past history are equally weighted.

Example of exponential averaging: if τ1 = 10, α = 0.5, and the previous runs, lined up by burst time ascending, are 4, 7, 8, 16:
τ2 = 0.5(4 + 10) = 7
τ3 = 0.5(7 + 7) = 7
τ4 = 0.5(8 + 7) = 7.5
τ5 = 0.5(16 + 7.5) = 11.75

What is priority scheduling?
A priority number is assigned to each process based on its CPU burst: a higher burst gets a lower priority and vice versa.
Internally defined priorities use measurable qualities such as average I/O burst, time limits, memory requirements, etc. Externally defined priorities are criteria set outside the OS, mostly human factors like the type of work, the importance of the process to the business, the amount of funds being paid, etc.
With preemptive priority scheduling, the CPU is preempted if the newly arrived process has higher priority than the currently running process. Nonpreemptive priority scheduling simply puts the new process at the head of the queue.

Potential problems with priority scheduling?
Indefinite blocking (also called starvation): a process that is ready to run is left waiting indefinitely because the computer constantly receives higher-priority processes. Aging is a solution in which the priority of waiting processes is increased as time goes on.

Describe the Round-Robin scheduling algorithm
Similar to first-come-first-served, but each process is given a unit of time called the time quantum (usually between 10 and 100 milliseconds). The CPU is given to the next process once the current process's time quantum is over, regardless of whether the process is finished. If the process is not finished, it is preempted and put back in the ready queue. Depending on the size of the time quantum, the RR policy can look like a first-come-first-served policy or like processor sharing, where switching from one process to the next happens so quickly that each process appears to have its own processor.

Turnaround time depends on the size of the time quantum. The average turnaround time does not always improve as the quantum grows, but it improves when most processes finish their CPU burst in a single time quantum. A rule of thumb is that 80% of CPU bursts should be shorter than the time quantum in order to keep context switches low.

Describe multilevel queue scheduling
A scheduling method that separates priority based on the type of process, in this order:
1) system processes
2) interactive processes
3) interactive editing processes
4) batch processes
5) student processes

Each queue also has its own scheduling algorithm, so system processes could use FCFS while student processes use RR. Each queue has absolute priority over lower-priority queues, but it is possible to time-slice among queues so each queue gets a certain portion of CPU time.
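The exponential-averaging prediction described earlier in this section can be checked with a short sketch, using the worked example's values (τ1 = 10, α = 0.5, bursts 4, 7, 8, 16); the function name is an assumption for illustration:

```python
def predict_bursts(tau1, bursts, alpha=0.5):
    """Successive predictions tau_{n+1} = alpha * t_n + (1 - alpha) * tau_n."""
    taus = [tau1]
    for t in bursts:
        taus.append(alpha * t + (1 - alpha) * taus[-1])
    return taus

print(predict_bursts(10, [4, 7, 8, 16]))  # [10, 7.0, 7.0, 7.5, 11.75]
```

Each prediction blends the latest observed burst with the previous prediction, so old history decays geometrically.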

Describe a multilevel feedback queue


Works similarly to the multilevel queue scheduler, but can move processes between queues based on their CPU bursts. Its parameters are:
the number of queues
the scheduling algorithm for each queue
the method used to determine when to upgrade a process
the method used to determine when to demote a process
the method used to determine which queue a process will enter when the process needs service
It is by definition the most general CPU-scheduling algorithm, but it is also the most complex.

Describe thread scheduling
User-level threads: managed by a thread library that the kernel is unaware of; each is mapped to an associated kernel-level thread and runs on an available lightweight process (LWP). This is called process-contention scope (PCS), since threads belonging to the same process compete for the CPU. Priority is set by the programmer and not adjusted by the thread library.
Kernel-level threads: scheduled by the operating system, which uses system-contention scope (SCS) to schedule kernel threads onto a CPU; competition for the CPU takes place among all threads in the system. PCS is done according to priority.

Describe Pthread scheduling
The POSIX Pthread API allows specifying either PCS or SCS during thread creation. On systems with the many-to-many model, the PCS policy schedules user-level threads onto available LWPs, while the SCS policy creates and binds an LWP for each user-level thread. Linux and Mac OS X only allow the system-contention scope.

Describe how multiple-processor scheduling works
Multiple processes are balanced between multiple processors through load sharing. One approach is asymmetric multiprocessing, where one processor acts as the master: it is in charge of all the scheduling, controls all activities, and runs all kernel code, while the remaining processors run only user code. Symmetric multiprocessing (SMP) has each processor schedule itself through a common ready queue, or through separate ready queues for each processor. Almost all modern OSes support SMP.

Processors contain cache memory, and if a process were switched from one processor to another, the cached data would be invalidated and have to be reloaded. SMP therefore tries to keep processes on the same processor through processor affinity. Soft affinity attempts to keep processes on the same processor but makes no guarantees. Hard affinity specifies that a process is not to be moved between processors.

Load balancing tries to spread the work across processors so that none sits idle while another is overloaded. Push migration uses a separate process that runs periodically and moves processes from heavily loaded processors onto less loaded ones. Pull migration has idle processors take processes from other processors. Push and pull are not mutually exclusive, but they can counteract processor affinity if not carefully managed.

To keep cores busy during stalls, modern hardware designs implement multithreaded processor cores in which two or more hardware threads are assigned to each core, so that if one thread stalls, the core can switch to another thread.

Processes

Objective of multiprogramming
To have some process running at all times, to maximize CPU utilization.

How does multiprogramming work?
Several processes are stored in memory at one time; when one process is done or is waiting, the OS takes the CPU away from that process and gives it to another process.

Benefits of multiprogramming
Higher throughput (the amount of work accomplished in a given time interval) and increased CPU utilization.

What is a process?
A process is a program in execution.

What do processes need?
A process needs CPU time, memory, files, and I/O devices.

What are the Process States?
New: the process is being created
Ready: the process is waiting to be assigned to a processor
Waiting: the process is waiting for some event to occur
Running: instructions are being executed
Terminated: the process has finished execution

Why is the operating system good for resource allocation?
Because it acts as the hardware/software interface.

What does a Process include?
1.) a program counter
2.) a stack: contains temporary data
3.) a data section: contains global variables
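The per-process bookkeeping just listed can be sketched as a toy PCB-like record. The field names here are illustrative assumptions, not a real operating system structure:

```python
from dataclasses import dataclass, field

@dataclass
class PCB:
    """Toy process control block: the bookkeeping the OS keeps per process."""
    pid: int
    state: str = "new"          # new / ready / waiting / running / terminated
    program_counter: int = 0
    registers: dict = field(default_factory=dict)
    open_files: list = field(default_factory=list)

def admit(pcb):
    """Move a newly created process into the ready state (ready queue)."""
    pcb.state = "ready"
    return pcb

p = admit(PCB(pid=42))
print(p.pid, p.state)  # 42 ready
```

A real kernel stores exactly this kind of record per process so that a context switch can save and restore the fields.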

Processes (cont) Processes (cont) Processes (cont)

processes are represented in the operating How does the operating decide which every process has a process id, to know what
system by a PCB queue the program goes to? process you are on and for process
The process control block includes: management every process has an id
it is based on what resources the program
1.) Process state very important when a process is created with
needs, and it will be placed in the
2.) program counter the fork() only the shared memory segments
corresponding queue
3.) CPU registers are shared between the parent process and the
4.) CPU scheduling information What are the types of Schedulers? child process, copies of the stack and the heap
5.) Memory Management 1.) long term scheduler: selects which are made for the new child
6.) Accounting information processes should be brought into the ready Process Creation Continue
7.) I/O status queue
when a process creates a new process the
Why is the PCB created? 2.) short term scheduler: selects which process
parent can continue to run concurrently or the
should be executed next and then allocates
A process control block is created so that the parent can wait until all of the children terminate
operating system knows information on the
How are processes terminated?
process. What is a context switch?
A process terminates when it is done executing
What happens when a program enters the a context switch is needed so that the CPU can
the last statement, when the child is terminated
system? switch to another process, in the context switch
it may return data back to the parent through an
urge system saves the state of the process
When a program enters the system it is placed exit status uses the exit() system call
Processes run concurrently
in the queue by the queuing routine and the Can a process terminate if it is not done?
scheduler redirects the program from the queue No two processes can be running
Yes, the parent may terminate the child (abort)
and loads it into memory simultaneously (at the same time) but they can
Why are queues and schedulers important? be running concurrently where the CPU is
the child has exceeded its usage of some of its
they determine which program is loaded into resources it has been allocated
How are processes created? the task assigned to the child is no longer
memory after one program finishes processes
and when the space is available The parent creates the child which can create needed

What is a CPU switch and how is it used? more processes. wait() or waitpid()
The child process is a duplicate of the parent
when the os does a switch it stops one process these are the system call command that are
from executing (idling it) and allows another used for process termination
process to use the processor fork()
Cascading Termination
What is process scheduling? fork creates a new process
some operating systems do not allow children
when you run the fork command it either
the process scheduler selects among the to be alive if the parent has died, in this case if
returns a 0 or a 1.
available processes for next execution on CPU the parent is terminated, then the children must
the 0 means that it is a child process
also terminate. this is known as cascading
QUEUE the 1 means that it is a parent process
generally the first program on the queue is execve()
Processes may be either Cooperating or
loaded first but there are situations where there
the execve system call is used to assign a new Independent
are multiple queues,
program to a child.
1.) job Queue: when processes enter the
it is used after the fork command to replace the
system they are put into the job queue
process' memory space with a new program
2.) Ready Queue: the processes that areready
and waiting to execute are kept on a list (the Process Creation

ready queue)
3.) Device Queue: are the processes that are
waiting for a particular i/o device (each device
has its own device queue
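The fork()/wait()/exit() interplay described above can be demonstrated with Python's POSIX wrappers. This is a POSIX-only sketch (the status value 7 is an arbitrary assumption): the child sees fork() return 0 and exits with a status that the parent collects via waitpid.

```python
import os

pid = os.fork()
if pid == 0:
    # fork() returned 0: this is the child. Exit with status 7.
    os._exit(7)
else:
    # fork() returned the child's pid: this is the parent.
    finished_pid, status = os.waitpid(pid, 0)   # wait for the child to terminate
    child_status = os.WEXITSTATUS(status)       # recover the exit() status
    print(finished_pid == pid, child_status)    # True 7
```

One call, two returns: the same fork() call returns in both processes, and the return value is how each side learns which one it is.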

Processes may be either Cooperating or Independent
Cooperating: a process is cooperating if it can affect or be affected by the other processes executing in the system.
Some characteristics of cooperating processes: state is shared, the result of execution is nondeterministic, and the result of execution cannot be predicted.
Independent: a process is independent if it cannot affect or be affected by the other processes.
Some characteristics of independent processes: state is not shared, execution is deterministic and depends on input, execution is reproducible and will always be the same, and execution can be stopped and restarted.

Advantages of Process Cooperation
Information sharing, computation speed-up, modularity, and convenience.

What is Interprocess Communication?
Cooperating processes need interprocess communication (IPC), a mechanism that allows them to exchange data and information. There are two models of IPC: shared memory and message passing.

What is Shared Memory?
A region of memory that is shared by the cooperating processes is established; processes can then exchange information by reading and writing to the shared region.
Benefits of shared memory: allows maximum speed and convenience of communication, and is faster than message passing.

What is Message Passing?
Message passing is a mechanism for processes to communicate and to synchronize their actions; processes communicate with each other without sharing variables.
Benefits of message passing: it is easier to implement for inter-computer communication and is useful for smaller amounts of data.
Message passing can be either blocking or non-blocking.

Message Passing facilities
The message-passing facility provides two operations:
send(message) -- message size fixed or variable
receive(message)

How do processes P and Q communicate?
For two processes to communicate they must:
1.) send messages to and receive messages from each other
2.) establish a communication link between them; this link can be implemented in a variety of ways

Implementations of the communication link
1.) physical (e.g. shared memory, hardware bus)
2.) logical (direct/indirect, synchronous/asynchronous, automatic/explicit buffering)

Direct vs. Indirect Communication Links
Direct communication link: processes must name each other explicitly; they must state where they are sending the message and where they are receiving it from. This can be either symmetric, where both name each other, or asymmetric, where only the sender names the recipient.
Indirect communication link: messages are sent to and received from mailboxes or ports.

Properties of Direct Communication Links
1.) links are established automatically
2.) a link is associated with exactly one pair of communicating processes
3.) between each pair there exists exactly one link
4.) the link may be unidirectional or bidirectional (usually bidirectional)

Properties of Indirect Communication Links
1.) a link is established only if the processes share a mailbox
2.) a link may be associated with many processes
3.) each pair of processes may share several communication links
4.) a link may be unidirectional or bidirectional

Message-Passing Synchronization
Message passing may be either blocking or non-blocking.
Blocking is considered synchronous: send and receive block until a message is available/received.
Nonblocking is considered asynchronous: the sender sends the message and resumes operation; the receiver retrieves either a message or null.

Buffering
In both direct and indirect communication, exchanged messages are placed in a temporary queue. These queues are implemented in three ways:
1.) Zero capacity: the queue has a maximum length of 0; the link cannot have any messages waiting in it, so the sender blocks until the recipient receives.
2.) Bounded capacity: the queue has finite length n; at most n messages can be placed there, and the sender must wait if the link is full.
3.) Unbounded capacity: the queue's length is potentially infinite; any number of messages can wait in it, and the sender never blocks.

Other strategies for communication
Some other ways to communicate include sockets, remote procedure calls, and pipes.
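The buffering variants map loosely onto Python's queue.Queue, used here as an analogy rather than a real inter-process link: maxsize gives a bounded-capacity queue, and block=False makes a send non-blocking instead of waiting on a full link.

```python
import queue

link = queue.Queue(maxsize=2)       # bounded capacity: at most 2 messages wait
link.put("m1")                      # link not full: send succeeds immediately
link.put("m2")
overflowed = False
try:
    link.put("m3", block=False)     # non-blocking send on a full link
except queue.Full:
    overflowed = True               # a blocking sender would wait here instead

received = link.get()               # blocking receive: returns "m1" (FIFO)
print(overflowed, received)         # True m1
```

With maxsize=0 the queue is unbounded and the sender never blocks, matching the unbounded-capacity case.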

Sockets
A socket is defined as an endpoint for communication; communication requires a pair of sockets, one for each process. A socket is identified by an IP address concatenated with a port number.

Remote Procedure Calls
A way to abstract the procedure-call mechanism for use between systems with network connections. The RPC scheme is useful in implementing a distributed file system.

Pipes
A pipe acts as a conduit allowing two processes to communicate. Pipes were one of the first IPC mechanisms and provide one of the simpler ways for processes to communicate with one another; there are, however, limitations.

Ordinary Pipes
* allow communication in standard producer-consumer style
* are unidirectional
* cannot be accessed from outside the process that creates them; typically a parent process creates a pipe and uses it to communicate with a child process
* exist only while the processes exist

Named Pipes
* more powerful than ordinary pipes
* communication can be bidirectional
* no parent-child relationship is required
* once a named pipe is established, several processes can use it

Threads

What is a thread?
A basic unit of CPU utilization, consisting of a program counter, a stack, and a set of registers. Threads form the basis of multithreaded processes.

Benefits of multi-threading?
Responsiveness: threads may provide rapid response while other threads are busy.
Resource sharing: threads share common code, data, and other resources, which allows multiple tasks to be performed simultaneously in a single address space.
Economy: creating and managing threads is much faster than performing the same tasks for processes.
Scalability: a single-threaded process can only run on one CPU, whereas the execution of a multi-threaded application may be split amongst the available processors.

Multicore Programming Challenges
Dividing tasks: examining applications to find activities that can be performed concurrently.
Balance: finding tasks to run concurrently that provide equal value, i.e. don't waste a thread on trivial tasks.
Data splitting: dividing the data so as to prevent the threads from interfering with one another.
Data dependency: if one task is dependent upon the results of another, the tasks need to be synchronized to ensure access in the proper order.
Testing and debugging: inherently more difficult in parallel processing situations, as race conditions become much more complex and difficult to identify.

Multithreading Models
Many-to-one: many user-level threads are all mapped onto a single kernel thread.
One-to-one: creates a separate kernel thread to handle each user thread. Most implementations of this model place a limit on how many threads can be created.
Many-to-many: allows many user-level threads to be mapped to many kernel threads. Processes can be split across multiple processors, and the OS can create a sufficient number of kernel threads.

Thread Libraries
Provide programmers with an API for creating and managing threads, implemented either in user space or kernel space.
User space: API functions are implemented solely within user space, with no kernel support.
Kernel space: involves system calls and requires a kernel with thread-library support.
Three main thread libraries:
POSIX Pthreads: provided as either a user-level or kernel-level library, as an extension to the POSIX standard.
Win32 threads: provided as a kernel-level library on Windows systems.
Java threads: the implementation of threads is based upon whatever OS and hardware the JVM is running on, i.e. either Pthreads or Win32 threads depending on the system.

Pthreads
* The POSIX standard defines the specification for Pthreads, not the implementation.
* Global variables are shared amongst all threads.
* One thread can wait for the others to rejoin before continuing.
* Available on Solaris, Linux, Mac OS X, Tru64, and via public domain shareware for Windows.

Java Threads
* Managed by the JVM.
* Implemented using the threads model provided by the underlying OS.
* Threads are created by extending the Thread class or by implementing the Runnable interface.
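The thread-library notes above (globals shared amongst all threads, one thread waiting for others to rejoin) look like this with Python's threading module, used here as an analogy for the Pthreads behavior rather than the Pthreads API itself:

```python
import threading

total = 0                       # global state, shared by every thread
lock = threading.Lock()

def add(n):
    """Worker: add n into the shared global under a lock."""
    global total
    with lock:                  # guard the shared global against lost updates
        total += n

threads = [threading.Thread(target=add, args=(i,)) for i in range(1, 6)]
for t in threads:
    t.start()
for t in threads:
    t.join()                    # like pthread_join: wait for workers to rejoin
print(total)  # 1+2+3+4+5 = 15
```

join() is the "wait for the others to rejoin" step: the main thread does not read the shared total until every worker has finished.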

Thread Pools
A solution that creates a number of threads when a process first starts and places them into a thread pool, to avoid inefficient thread use.
* Threads are allocated from the pool as needed, and returned to the pool when no longer needed.
* When no threads are available in the pool, the process may have to wait until one becomes available.
* The maximum number of threads available in a thread pool may be determined by adjustable parameters, possibly dynamically in response to changing system loads.

Threading Issues

The fork() and exec() System Calls
Q: If one thread forks, is the entire process copied, or is the new process single-threaded?
A: System dependent. If the new process execs right away, there is no need to copy all the other threads; if it doesn't, then the entire process should be copied. Many versions of UNIX provide multiple versions of the fork call for this purpose.

Signal Handling
Signals are used to notify a process of a particular event: the event generates a signal, the signal is delivered to a process, and the signal is handled.
Q: When a multi-threaded process receives a signal, to what thread should that signal be delivered?
A: There are four major options:
* deliver the signal to the thread to which the signal applies
* deliver the signal to every thread in the process
* deliver the signal to certain threads in the process
* assign a specific thread to receive all signals in a process

Thread Cancellation can be done in one of two ways
* Asynchronous cancellation: cancels the thread immediately.
* Deferred cancellation: sets a flag indicating the thread should cancel itself when it is convenient. It is then up to the cancelled thread to check this flag periodically and exit cleanly when it sees the flag set.

Scheduler Activations
Provide upcalls, a communication mechanism from the kernel to the thread library. This communication allows an application to maintain the correct number of kernel threads.

Synchronization

Background
Concurrent access to shared data may result in data inconsistency. Maintaining data consistency requires mechanisms to ensure the orderly execution of cooperating processes.

Race Condition
A situation where several processes access and manipulate the same data concurrently and the outcome of the execution depends on the particular order in which the accesses take place.

Critical Section
Each process has a critical-section segment of code. When one process is in its critical section, no other process may be in its critical section.

Parts of the Critical Section
Each process must ask permission to enter its critical section in the entry section; the critical section may be followed by an exit section, and then the remainder section.

Solutions to the Critical Section Problem
The three requirements for a solution are mutual exclusion, progress, and bounded waiting.

Mutual Exclusion
If process Pi is executing in its critical section, then no other processes can be executing in their critical sections.

Progress
If no process is executing in its critical section and there exist some processes that wish to execute their critical sections, then the selection of the process that will enter the critical section next cannot be postponed indefinitely.

By Makaila Akahoshi (makahoshi1). Published 21st October, 2015; last updated 13th May, 2016. Page 8 of 9.
Synchronization (cont)

Bounded Waiting

A bound must exist on the number of times that other processes are allowed to enter their critical sections after a process has made a request to enter its critical section and before that request is granted.

Peterson's Solution

A two-process solution. Assume that LOAD and STORE instructions are atomic, i.e., cannot be interrupted. The two processes share two variables: int turn and boolean flag[2]. turn indicates whose turn it is to enter the critical section; the flag array indicates whether a process is ready to enter.

Synchronization Hardware

Many systems provide hardware support for critical-section code. Modern machines provide special atomic hardware instructions.

Semaphore

A synchronization tool that does not require busy waiting; a generalization of a spin lock. A semaphore may be counting or binary, and it provides mutual exclusion over a protected region for a group of processes, not just one process at a time.

Semaphore Implementation

When implementing semaphores, you must guarantee that no two processes can execute wait() and signal() on the same semaphore at the same time.

Deadlocks and Starvation

Deadlock: two or more processes wait indefinitely for an event that can only be caused by one of the waiting processes.
Starvation (indefinite blocking): a process may never be removed from the semaphore queue in which it is suspended.

Bounded Buffer Problem

Two processes, the producer and the consumer, share a common, fixed-size buffer used as a queue. The producer generates a piece of data, puts it into the buffer, and starts again; at the same time, the consumer removes data from the buffer one piece at a time. The problem is to make sure that the producer won't try to add data to a full buffer and that the consumer won't try to remove data from an empty buffer.

Readers-Writers Problem

Multiple readers should be able to read the shared data at the same time, but only a single writer may access it at a time.

Dining Philosophers Problem

Monitors

A high-level abstraction: an abstract data type whose internal variables are accessible only by code within its procedures. Only one process may be active within the monitor at a given time.

Why are semaphores used?

Semaphores are used when n processes must coordinate access to a critical section.

How do you initialize a semaphore?

A semaphore is initialized with semaphore_init; the shared int sem holds the semaphore identifier.

Wait and Signal

A simple way to understand the wait (P) and signal (V) operations:
* wait: decrements the semaphore value by 1. If the value is now negative, the process executing wait is blocked (added to the semaphore's queue) until a signal releases it; otherwise the process continues execution, having used a unit of the resource.
* signal: increments the semaphore value by 1. If the pre-increment value was negative (meaning processes are waiting for the resource), it transfers a blocked process from the semaphore's waiting queue to the ready queue.

How do we get rid of the busy waiting problem?

Rather than busy waiting, a process can block itself: the block operation places the process into a waiting queue associated with the semaphore and changes its state to waiting. This removes the busy waiting from the entry to the application's critical section. With this blocking implementation, unlike busy waiting, the semaphore value may become negative; its magnitude is the number of waiting processes.

Signal (wakeup)

A blocked process can be woken up by the signal operation, which removes one process from the list of waiting processes; the wakeup resumes the execution of the blocked process.
