
TSN2101

Operating Systems
Mid-Term Examination Revision Note
Table of Contents
Lecture 01 / Tutorial 01 – Introduction to Operating Systems and Computer System Structures
Lecture 02 / Tutorial 02 – Operating System Structures
Lecture 03 / Tutorial 03 – Processes and Threads
Lecture 04 / Tutorial 04 – CPU Scheduling
Lecture 05 / Tutorial 05 – Process Synchronization
Lecture 06 / Tutorial 06 – Deadlocks
Past Examination Paper Questions

Lecture 01: Introduction to Operating Systems and Computer System Structures
1. Operating System
 A program that acts as an intermediary between a user of a computer and the computer
hardware
 Operating system goals:
 Execute user programs and make solving user problems easier.
 Make the computer system convenient to use.
 Use the computer hardware in an efficient manner.

2. Computer System Structure


 Computer system can be divided into four components
i. Hardware – provides basic computing resources
 CPU, memory, I/O devices
ii. Operating system
 Controls and coordinates use of hardware among various applications and users
iii. Application programs – define the ways in which the system resources are used to
solve the computing problems of the users
 Word processors, compilers, web browsers, database systems, video games
iv. Users
 People, machines, other computers

3. Operating System Definition


 Operating System is a resource allocator


 Manages all resources
 Decides between conflicting requests for efficient and fair resource use
 Operating System is a control program
 Controls execution of programs to prevent errors and improper use of the computer
 No universally accepted definition
 “Everything a vendor ships when you order an operating system” is a good
approximation
 But varies wildly
 “The one program running at all times on the computer” is the kernel. Everything else
is either a system program (ships with the operating system) or an application program

4. Computer Startup
 Bootstrap program is loaded at power-up or reboot
 Typically stored in ROM or EPROM, generally known as firmware
 Initializes all aspects of system
 Loads operating system kernel and starts execution

5. Computer System Organization


 Computer-system operation
 One or more CPUs, device controllers connect through common bus providing
access to shared memory
 Concurrent execution of CPUs and devices competing for memory cycles

 I/O devices and the CPU can execute concurrently


 Each device controller is in charge of a particular device type
 Each device controller has a local buffer
 CPU moves data from / to main memory to / from local buffers
 I/O is from the device to local buffer of controller
 Device controller informs CPU that it has finished its operation by causing an interrupt

6. Common Functions of Interrupts


 Interrupt transfers control to the interrupt service routine generally, through the
interrupt vector, which contains the addresses of all the service routines.
 Interrupt architecture must save the address of the interrupted instruction
 Incoming interrupts are disabled while another interrupt is being processed to prevent a
lost interrupt
 A trap is a software-generated interrupt caused either by an error or a user request
 An operating system is interrupt driven

7. Interrupt Handling
 The operating system preserves the state of the CPU by storing registers and the
program counter
 Separate segments of code determine what action should be taken for each type of
interrupt
 These codes are accessed through interrupt vector – the interrupt vector contains the
addresses of where these codes are located in memory
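
As a purely illustrative sketch (not any real hardware layout; the handler names and table size are invented), the interrupt vector can be pictured in C as an array of function pointers indexed by the interrupt number:

#include <stdio.h>

#define NUM_VECTORS 256                     /* hypothetical number of interrupt lines */

typedef void (*isr_t)(void);                /* an interrupt service routine */

static isr_t interrupt_vector[NUM_VECTORS]; /* table of handler addresses */

static void timer_isr(void)    { puts("timer tick handled"); }
static void keyboard_isr(void) { puts("keyboard input handled"); }

/* Hardware would index this table with the interrupt number and jump to the
 * stored address; here the dispatch is only simulated in software. */
static void dispatch(int irq) {
    if (irq >= 0 && irq < NUM_VECTORS && interrupt_vector[irq])
        interrupt_vector[irq]();
}

int main(void) {
    interrupt_vector[0] = timer_isr;        /* install the handlers */
    interrupt_vector[1] = keyboard_isr;
    dispatch(0);                            /* pretend interrupt 0 arrived */
    dispatch(1);
    return 0;
}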

8. Interrupt Timeline

9. I/O Structure
 After I/O starts, control returns to user program only upon I/O completion


 Wait instruction idles the CPU until the next interrupt


 Wait loop (contention for memory access)
 At most one I/O request is outstanding at a time, no simultaneous I/O processing
 After I/O starts, control returns to user program without waiting for I/O completion
 System call – request to the operating system to allow user to wait for I/O
completion
 Device-status table contains entry for each I/O device indicating its type, address,
and state
 Operating system indexes into I/O device table to determine device status and to
modify table entry to include interrupt

10. Direct Memory Access Structure


 Used for high-speed I/O devices able to transmit information at close to memory speeds
 Device controller transfers blocks of data from buffer storage directly to main memory
without CPU intervention
 Only one interrupt is generated per block, rather than the one interrupt per byte

11. Storage Structure


 Main memory – only large storage media that the CPU can access directly
 Secondary storage – extension of main memory that provides large nonvolatile storage
capacity
 Magnetic disks – rigid metal or glass platters covered with magnetic recording material
 Disk surface is logically divided into tracks, which are subdivided into sectors
 The disk controller determines the logical interaction between the device and the
computer

12. Storage Hierarchy


 Storage systems organized in hierarchy
 Speed
 Cost
 Volatility
 Caching – copying information into faster storage system; main memory can be
viewed as a last cache for secondary storage

13. Storage Device Hierarchy


14. Caching
 Important principle, performed at many levels in a computer (in hardware, operating
system, software)
 Information in use copied from slower to faster storage temporarily
 Faster storage (cache) checked first to determine if information is there
 If it is, information used directly from the cache (fast)
 If not, data copied to cache and used there
 Cache smaller than storage being cached
 Cache management important design problem
 Cache size and replacement policy
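
The check-the-cache-first logic can be sketched in a few lines of C. This is only a conceptual illustration; the one-entry "cache" and the slow_read() routine are invented for the example:

#include <stdbool.h>
#include <stdio.h>

/* Pretend backing store: reading it stands in for the slower device. */
static int slow_read(int address) {
    printf("  (slow device read at %d)\n", address);
    return address * 10;                      /* dummy data */
}

/* A one-entry cache: the smallest possible "faster storage". */
static struct { bool valid; int address; int value; } cache;

static int cached_read(int address) {
    if (cache.valid && cache.address == address)
        return cache.value;                   /* hit: use the copy in faster storage */
    cache.value = slow_read(address);         /* miss: copy from slower storage ... */
    cache.address = address;
    cache.valid = true;                       /* ... and remember it for next time */
    return cache.value;
}

int main(void) {
    printf("%d\n", cached_read(42));          /* miss */
    printf("%d\n", cached_read(42));          /* hit */
    return 0;
}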

15. Computer-System Architecture


 Most systems use a single general-purpose processor (PDAs through mainframes)


 Most systems have special-purpose processors as well


 Multiprocessor systems growing in use and importance
 Also known as parallel systems, tightly-coupled systems
 Advantages include
i. Increased throughput
ii. Economy of scale
iii. Increased reliability – graceful degradation or fault tolerance
 Two types
i. Asymmetric Multiprocessing
ii. Symmetric Multiprocessing

16. How a Modern Computer Works

17. Symmetric Multiprocessing Architecture


18. A Dual-Core Design

19. Clustered Systems


 Like multiprocessor systems, but multiple systems working together


 Usually sharing storage via a storage-area network (SAN)


 Provides a high-availability service which survives failures
o Asymmetric clustering has one machine in hot-standby mode
o Symmetric clustering has multiple nodes running applications, monitoring
each other
 Some clusters are for high-performance computing (HPC)
o Applications must be written to use parallelization

20. Operating System Structure


 Multiprogramming needed for efficiency
 Single user cannot keep CPU and I/O devices busy at all times
 Multiprogramming organizes jobs (code and data) so CPU always has one to
execute
 A subset of total jobs in system is kept in memory
 One job selected and run via job scheduling
 When it has to wait (for I/O for example), OS switches to another job
 Timesharing (multitasking) is logical extension in which CPU switches jobs so
frequently that users can interact with each job while it is running, creating interactive
computing
 Response time should be < 1 second
 Each user has at least one program executing in memory → process
 If several jobs ready to run at the same time → CPU scheduling
 If processes don’t fit in memory, swapping moves them in and out to run
 Virtual memory allows execution of processes not completely in memory

21. Memory Layout for Multi-programmed System

22. Operating System Operations


 Interrupt driven by hardware


 Software error or request creates exception or trap


 Division by zero, request for operating system service
 Other process problems include infinite loop, processes modifying each other or the
operating system
 Dual-mode operation allows OS to protect itself and other system components
 User mode and kernel mode
 Mode bit provided by hardware
o Provides ability to distinguish when system is running user code or kernel code
o Some instructions designated as privileged, only executable in kernel mode
o System call changes mode to kernel, return from call resets it to user

23. Transition from User to Kernel Mode


 Timer to prevent infinite loop / process hogging resources
 Set interrupt after specific period
 Operating system decrements counter
 When counter zero generate an interrupt
 Set up before scheduling process to regain control or terminate

24. Process Management


 A process is a program in execution. It is a unit of work within the system. Program is a
passive entity, process is an active entity.
 Process needs resources to accomplish its task
 CPU, memory, I/O, files
 Initialization data
 Process termination requires reclaim of any reusable resources
 Single-threaded process has one program counter specifying location of next
instruction to execute
 Process executes instructions sequentially, one at a time, until completion
 Multi-threaded process has one program counter per thread
 Typically system has many processes, some user, some operating system running
concurrently on one or more CPUs
 Concurrency by multiplexing the CPUs among the processes / threads
 Process Management Activities
 The operating system is responsible for the following activities in connection with
process management:
o Creating and deleting both user and system processes
o Suspending and resuming processes
o Providing mechanisms for process synchronization
o Providing mechanisms for process communication
o Providing mechanisms for deadlock handling

25. Memory Management


 All data in memory before and after processing
 All instructions in memory in order to execute
 Memory management determines what is in memory when
 Optimizing CPU utilization and computer response to users
 Memory management activities
 Keeping track of which parts of memory are currently being used and by whom
 Deciding which processes (or parts thereof) and data to move into and out of
memory
 Allocating and de-allocating memory space as needed

26. Storage Management


 OS provides uniform, logical view of information storage


 Abstracts physical properties to logical storage unit – file


 Each medium is controlled by device (i.e., disk drive, tape drive)
o Varying properties include access speed, capacity, data-transfer rate, access
method (sequential or random)
 File-System management
 Files usually organized into directories
 Access control on most systems to determine who can access what
 OS activities include
o Creating and deleting files and directories
o Primitives to manipulate files and dirs
o Mapping files onto secondary storage
o Backup files onto stable (non-volatile) storage media

27. Mass-Storage Management


 Usually disks used to store data that does not fit in main memory or data that must be
kept for a “long” period of time
 Proper management is of central importance
 Entire speed of computer operation hinges on disk subsystem and its algorithms
 OS activities
 Free-space management
 Storage allocation
 Disk scheduling
 Some storage need not be fast
 Tertiary storage includes optical storage, magnetic tape
 Still must be managed
 Varies between WORM (write-once, read-many-times) and RW (read- write)

28. Protection and Security


 Protection – any mechanism for controlling access of processes or users to resources
defined by the OS
 Security – defense of the system against internal and external attacks
 Huge range, including denial-of-service, worms, viruses, identity theft, theft of
service
 Systems generally first distinguish among users, to determine who can do what
 User identities (user IDs, security IDs) include name and associated number, one
per user
 User ID then associated with all files, processes of that user to determine access
control
 Group identifier (group ID) allows set of users to be defined and controls managed,
then also associated with each process, file
 Privilege escalation allows user to change to effective ID with more rights

Tutorial 01: Introduction to Operating Systems and Computer System Structures


1. What is the purpose of booting in a computer? When does it start?

The purpose of booting is to load the kernel from secondary storage to primary memory and
initialize the registers, memory, and I/O devices and their controllers. After booting, the
O/S is ready to accept commands and to provide services to application programs and users.

The booting starts when you switch on the power or reset the computer.

2. What are the three main purposes of an operating system?

 To provide an environment for a computer user to execute programs on computer
hardware in a convenient and efficient manner.
 To allocate the separate resources of the computer as needed to solve the problem given.
The allocation process should be as fair and efficient as possible.
 As a control program it serves two major functions: (1) supervision of the execution of
user programs to prevent errors and improper use of the computer, and (2) management
of the operation and control of I/O devices.

3. What are the resources of a computer? Why is the O/S called a resource allocator?

Resources of a computer are CPU cycles, memory space, file storage space, I/O devices
and so on. The operating system acts as the manager of these resources. Facing numerous and
possibly conflicting requests for resources, the operating system must decide how to allocate
them to specific programs and users so that it can operate the computer system efficiently and
fairly.

4. List the four steps that are necessary to run a program on a completely dedicated machine —
a computer that is running only that program.

 Reserve machine time.


 Manually load program into memory.
 Load starting address and begin execution.
 Monitor and control execution of program from console.

5. How does the distinction between kernel mode and user mode function as a rudimentary
form of protection (security) system?


The distinction between kernel mode and user mode provides a rudimentary form of
protection in the following manner. Certain instructions could be executed only when the
CPU is in kernel mode.

Similarly, hardware devices could be accessed only when the program is executing in kernel
mode. Control over when interrupts could be enabled or disabled is also possible only when
the CPU is in kernel mode. Consequently, the CPU has very limited capability when
executing in user mode, thereby enforcing protection of critical resources.

6. What is the purpose of interrupts? What are the differences between a trap and an interrupt?
Can traps be generated intentionally by a user program? If so, for what purpose?

An interrupt is a hardware-generated change of flow within the system. An interrupt handler
is summoned to deal with the cause of the interrupt; control is then returned to the interrupted
context and instruction. A trap is a software-generated interrupt. An interrupt can be used to
signal the completion of an I/O to obviate the need for device polling. A trap can be used to
call operating system routines or to catch arithmetic errors.

7. Which of the following instructions should be privileged?


 Set value of timer.
 Read the clock.
 Clear memory.
 Issue a trap instruction.
 Turn off interrupts.
 Modify entries in device-status table.
 Switch from user to kernel mode.
 Access I/O device.

The following operations need to be privileged: Set value of timer, clear memory, turn off
interrupts, modify entries in device-status table, access I/O device. The rest can be performed
in user mode.

8. Describe the differences between symmetric and asymmetric multiprocessing. What are three
advantages and one disadvantage of multiprocessor systems?

Symmetric multiprocessing treats all processors as equals, and I/O can be processed on any
CPU. Asymmetric multiprocessing has one master CPU and the remainder CPUs are slaves.
The master distributes tasks among the slaves, and I/O is usually done by the master only.
Multiprocessors can save money by not duplicating power supplies, housings, and
peripherals. They can execute programs more quickly and can have increased reliability.
They are also more complex in both hardware and software than uniprocessor systems.

9. Direct memory access is used for high-speed I/O devices in order to avoid increasing the
CPU’s execution load.
 How does the CPU interface with the device to coordinate the transfer?


 How does the CPU know when the memory operations are complete?

The CPU is allowed to execute other programs while the DMA controller is transferring data.
Does this process interfere with the execution of the user programs? If so, describe what
forms of interference are caused.

The CPU can initiate a DMA operation by writing values into special registers that can be
independently accessed by the device. The device initiates the corresponding operation once
it receives a command from the CPU.

When the device is finished with its operation, it interrupts the CPU to indicate the
completion of the operation. Both the device and the CPU can be accessing memory
simultaneously. The memory controller provides access to the memory bus in a fair manner
to these two entities. A CPU might therefore be unable to issue memory operations at peak
speeds since it has to compete with the device in order to obtain access to the memory bus.

10. Give two reasons why caches are useful. What problems do they solve? What problems do
they cause? If a cache can be made as large as the device for which it is caching (for instance,
a cache as large as a disk), why not make it that large and eliminate the device?

 Caches are useful when two or more components need to exchange data, and the
components perform transfers at differing speeds.
 Caches solve the transfer problem by providing a buffer of intermediate speed between
the components. If the fast device finds the data it needs in the cache, it need not wait for
the slower device.
 The data in the cache must be kept consistent with the data in the components. If
a component has a data value change, and the datum is also in the cache, the cache must
also be updated. This is especially a problem on multiprocessor systems where more than
one process may be accessing a datum.
 A component may be eliminated by an equal-sized cache, but only if:
(a) the cache and the component have equivalent state-saving capacity (that is, if the
component retains its data when electricity is removed, the cache must retain data as
well), and
(b) the cache is affordable, because faster storage tends to be more expensive.

11. Consider an SMP system similar to what is shown below. Illustrate with an example how data
residing in memory could in fact have two different values in each of the local caches.


Say processor 1 reads data A with value 5 from main memory into its local cache. Similarly,
processor 2 reads data A into its local cache as well. Processor 1 then updates A to 10.
However, since A resides in processor 1’s local cache, the update only occurs there and not in
the local cache for processor 2.

12. What do you mean by a bus-organized computer?

In a digital computer, CPUs and multiple device controllers are connected through a common
bus, called the system bus, to shared memory. For this reason it is called a bus-organized
computer. An analog computer is not a bus-organized computer.

13. Describe briefly about memory hierarchy.

As the CPU registers are very fast, memory should be compatible with them. Fast storage
tends to be expensive and power hungry, and comes only in small capacities. We can use
levels of increasingly large (and increasingly slow) storage, with the information most likely
to be used soon stored at each level. Often the information at each larger level is a superset
of the information stored at the next smaller level.

Memory levels get slower and larger as they get farther from the processor.


Lecture 02: Operating System Structures


1. Operating System Services
 Operating system services provide functions that are helpful to the user:
 User interface – Almost all operating systems have a user interface (UI)
o Varies between Command-Line (CLI), Graphics User Interface (GUI),
Batch
 Program execution – The system must be able to load a program into memory and
to run that program, end execution, either normally or abnormally (indicating error)
 I/O operations – A running program may require I/O, which may involve a file or an
I/O device
 File-system manipulation – The file system is of particular interest. Obviously,
programs need to read and write files and directories, create and delete them, search
them, list file information, and manage permissions.
 A View of Operating System Structure

 Communications – Processes may exchange information, on the same computer or
between computers over a network
 Communications may be via shared memory or through message passing (packets
moved by the OS)
 Error detection – OS needs to be constantly aware of possible errors
 May occur in the CPU and memory hardware, in I/O devices, in user program
 For each type of error, OS should take the appropriate action to ensure correct and
consistent computing
 Debugging facilities can greatly enhance the user’s and programmer’s abilities to
efficiently use the system

 Resource allocation – When multiple users or multiple jobs are running concurrently,
resources must be allocated to each of them


 Many types of resources – Some (such as CPU cycles, main memory, and file
storage) may have special allocation code, others (such as I/O devices) may have
general request and release code
 Accounting – To keep track of which users use how much and what kinds of computer
resources
 Protection and security – The owners of information stored in a multiuser or
networked computer system may want to control use of that information, concurrent
processes should not interfere with each other
 Protection involves ensuring that all access to system resources is controlled
 Security of the system from outsiders requires user authentication, extends to
defending external I/O devices from invalid access attempts
 If a system is to be protected and secure, precautions must be instituted throughout
it. A chain is only as strong as its weakest link.

2. User Operating System Interface – CLI


 Command Line Interface (CLI) or command interpreter allows direct command entry
 Sometimes implemented in kernel, sometimes by systems program
 Sometimes multiple flavors implemented – shells
 Primarily fetches a command from user and executes it
o Sometimes commands built-in, sometimes just names of programs
 If the latter, adding new features doesn’t require shell modification

3. User Operating System Interface – GUI


 User-friendly desktop metaphor interface
 Usually mouse, keyboard, and monitor
 Icons represent files, programs, actions, etc.
 Various mouse buttons over objects in the interface cause various actions (provide
information, options, execute function, open directory (known as a folder)
 Invented at Xerox PARC
 Many systems now include both CLI and GUI interfaces
 Microsoft Windows is GUI with CLI “command” shell
 Apple Mac OS X as “Aqua” GUI interface with UNIX kernel underneath and shells
available
 Solaris is CLI with optional GUI interfaces (Java Desktop, KDE)

4. Bourne Shell Command Interpreter


5. The Mac OS X GUI

6. System Calls


 Programming interface to the services provided by the OS


 Typically written in a high-level language (C or C++)
 Mostly accessed by programs via a high-level Application Program Interface (API)
rather than direct system call use
 Three most common APIs are Win32 API for Windows, POSIX API for POSIX-based
systems (including virtually all versions of UNIX, Linux, and Mac OS X), and Java
API for the Java virtual machine (JVM)
 Why use APIs rather than system calls?
 Example of System Calls
 System call sequence to copy the contents of one file to another file

7. Example of Standard API


 Consider the ReadFile() function in the Win32 API—a function for reading from a file

 A description of the parameters passed to ReadFile()


 HANDLE file—the file to be read
 LPVOID buffer—a buffer where the data will be read into and written from
 DWORD bytesToRead—the number of bytes to be read into the buffer
 LPDWORD bytesRead—the number of bytes read during the last read
 LPOVERLAPPED ovl—indicates if overlapped I/O is being used
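
A minimal C sketch of calling this API; the file name example.txt is just an assumed example, the file is opened first with CreateFileA, and most error handling is omitted:

#include <windows.h>
#include <stdio.h>

int main(void) {
    char buffer[512];
    DWORD bytesRead = 0;

    /* Open an (assumed) existing file for reading. */
    HANDLE file = CreateFileA("example.txt", GENERIC_READ, FILE_SHARE_READ,
                              NULL, OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, NULL);
    if (file == INVALID_HANDLE_VALUE)
        return 1;

    /* ReadFile(handle, buffer, bytes to read, bytes read, overlapped) */
    if (ReadFile(file, buffer, sizeof buffer, &bytesRead, NULL))
        printf("read %lu bytes\n", (unsigned long)bytesRead);

    CloseHandle(file);
    return 0;
}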

8. System Call Implementation


 Typically, a number associated with each system call
 System-call interface maintains a table indexed according to these numbers
 The system call interface invokes intended system call in OS kernel and returns status
of the system call and any return values
 The caller need know nothing about how the system call is implemented
 Just needs to obey API and understand what OS will do as a result of the call
 Most details of OS interface hidden from programmer by API
o Managed by run-time support library (set of functions built into libraries
included with compiler)

9. API – System Call – OS Relationship


10. Standard C Library Example


 C program invoking printf() library call, which calls write() system call
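
A small POSIX sketch of the two routes: the printf() library call, which eventually issues write() internally, and a direct call to the write() system call:

#include <stdio.h>      /* printf(): standard C library */
#include <string.h>
#include <unistd.h>     /* write(): POSIX system-call wrapper */

int main(void) {
    /* Library route: printf() formats the text, then calls write() internally. */
    printf("hello via printf\n");

    /* Direct route: hand the bytes straight to the write() system call. */
    const char *msg = "hello via write\n";
    write(STDOUT_FILENO, msg, strlen(msg));
    return 0;
}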

11. System Call Parameter Passing


 Often, more information is required than simply identity of desired system call


 Exact type and amount of information vary according to OS and call


 Three general methods used to pass parameters to the OS
 Simplest: pass the parameters in registers
o In some cases, may be more parameters than registers
 Parameters stored in a block, or table, in memory, and address of block passed as a
parameter in a register
o This approach taken by Linux and Solaris
 Parameters placed, or pushed, onto the stack by the program and popped off the
stack by the operating system
 Block and stack methods do not limit the number or length of parameters being
passed
 Parameter Passing via Table

12. Types of System Calls


 Process control
 File management
 Device management
 Information maintenance
 Communications
 Protection

13. Examples of Windows and Unix System Calls


14. System Programs


 System programs provide a convenient environment for program development and
execution. They can be divided into:
 File manipulation
o Create, delete, copy, rename, print, dump, list, and generally manipulate files
and directories
 Status information
o Some ask the system for info – date, time, amount of available memory, disk
space, number of users
o Others provide detailed performance, logging, and debugging information
o Typically, these programs format and print the output to the terminal or other
output devices
o Some systems implement a registry – used to store and retrieve configuration
information
 File modification
o Text editors to create and modify files
o Special commands to search contents of files or perform transformations of the
text
 Programming language support
o Compilers, assemblers, debuggers and interpreters sometimes provided
 Program loading and execution
o Absolute loaders, relocatable loaders, linkage editors, and overlay-loaders,
debugging systems for higher-level and machine language
 Communications
o Provide the mechanism for creating virtual connections among processes, users,
and computer systems
o Allow users to send messages to one another’s screens, browse web pages, send
electronic-mail messages, log in remotely, transfer files from one machine to
another
 Application programs
 Most users’ view of the operating system is defined by system programs, not the actual
system calls
 Provide a convenient environment for program development and execution
 Some of them are simply user interfaces to system calls; others are considerably
more complex

15. Simple Structure


 MS-DOS – written to provide the most functionality in the least space


 Not divided into modules
 Although MS-DOS has some structure, its interfaces and levels of functionality are
not well separated
 Layer Structure

16. UNIX


 UNIX – limited by hardware functionality, the original UNIX operating system had
limited structuring. The UNIX OS consists of two separable parts
 Systems programs
 The kernel
o Consists of everything below the system-call interface and above the physical
hardware
o Provides the file system, CPU scheduling, memory management, and other
operating-system functions; a large number of functions for one level
 Traditional UNIX System Structure

17. Layered Approach


 The operating system is divided into a number of layers (levels), each built on top of
lower layers. The bottom layer (layer 0), is the hardware; the highest (layer N) is the
user interface.
 With modularity, layers are selected such that each uses functions (operations) and
services of only lower-level layers
 Layered Operating System

18. Microkernel System Structure


 Moves as much as possible from the kernel into “user” space
 Communication takes place between user modules using message passing
 Benefits:
 Easier to extend a microkernel
 Easier to port the operating system to new architectures
 More reliable (less code is running in kernel mode)
 More secure
 Detriments:
 Performance overhead of user space to kernel space communication

19. Mac OS X Structure


20. Modules
 Most modern operating systems implement kernel modules
 Uses object-oriented approach
 Each core component is separate
 Each talks to the others over known interfaces
 Each is loadable as needed within the kernel
 Overall, similar to layers but more flexible
 Solaris Modular Approach

21. Virtual Machines


 A virtual machine takes the layered approach to its logical conclusion. It treats
hardware and the operating system kernel as though they were all hardware
 A virtual machine provides an interface identical to the underlying bare hardware
 The operating system host creates the illusion that a process has its own processor and
(virtual) memory
 Each guest provided with a (virtual) copy of underlying computer

 VMware Architecture

Tutorial 02: Operating System Structures


1. What is the purpose of system calls?

System calls allow user-level processes to request services of the operating system.

2. List five services provided by an operating system, and explain how each creates
convenience for users. In which cases would it be impossible for user-level programs to
provide these services? Explain your answer.

The five main services (not counting accounting, and protection and security) are:

Program execution
The operating system loads the contents (or sections) of a file into memory and begins its
execution. A user level program could not be trusted to properly allocate CPU time.

I/O operations
Disks, tapes, serial lines, and other devices must be communicated with at a very low level.
The user need only specify the device and the operation to perform on it, while the system
converts that request into device- or controller-specific commands. User-level programs
cannot be trusted to access only devices they should have access to and to access them only
when they are otherwise unused.

File-system manipulation
There are many details in file creation, deletion, allocation, and naming that users should not
have to perform.
Blocks of disk space are used by files and must be tracked. Deleting a file requires removing
the name and file information and freeing the allocated blocks. Protections must also be checked
to assure proper file access. User programs could neither ensure adherence to protection
methods nor be trusted to allocate only free blocks and de-allocate blocks on file deletion.

Communications
Message passing between systems requires messages to be turned into packets of
information, sent to the network controller, transmitted across a communications medium,
and reassembled by the destination system. Packet ordering and data correction must take
place. Again, user programs might not coordinate access to the network device, or they might
receive packets destined for other processes.

Error detection


Error detection occurs at both the hardware and software levels. At the hardware level, all
data transfers must be inspected to ensure that data have not been corrupted in transit. All
data on media must be checked to be sure they have not changed since they were written to
the media. At the software level, media must be checked for data consistency; for instance,
whether the numbers of allocated and unallocated blocks of storage match the total number
on the device. There, errors are frequently process independent (for instance, the corruption
of data on a disk), so there must be a global program (the operating system) that handles all
types of errors. Also, by having errors processed by the operating system, processes need not
contain code to catch and correct all the errors possible on a system.

3. What is the purpose of system programs?

System programs can be thought of as bundles of useful system calls. They provide basic
functionality to users so that users do not need to write their own programs to solve common
problems.

4. Describe three general methods for passing parameters to the operating system.

 Pass parameters in registers


 Registers pass starting addresses of blocks of parameters
 Parameters can be placed, or pushed, onto the stack by the program, and popped off the
stack by the operating system

5. Would it be possible for the user to develop a new command interpreter using the system-call
interface provided by the operating system?

A user should be able to develop a new command interpreter using the system-call interface
provided by the operating system. The command interpreter allows a user to create and
manage processes and also determine ways by which they communicate (such as through
pipes and files). As all of this functionality could be accessed by a user level program using
the system calls, it should be possible for the user to develop a new command-line
interpreter.
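
As a rough, hypothetical illustration of this answer, the core of such an interpreter can be built from a handful of system-call wrappers (fork(), execvp(), wait()); the sketch below handles only single-word commands with no arguments:

#include <stdio.h>
#include <string.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    char line[256];

    for (;;) {
        printf("mini> ");
        fflush(stdout);
        if (fgets(line, sizeof line, stdin) == NULL)
            break;
        line[strcspn(line, "\n")] = '\0';          /* strip newline */
        if (strcmp(line, "exit") == 0)
            break;
        if (line[0] == '\0')
            continue;

        pid_t pid = fork();                        /* create a new process */
        if (pid == 0) {                            /* child: run the command */
            char *argv[] = { line, NULL };
            execvp(line, argv);
            perror("execvp");                      /* only reached on failure */
            _exit(1);
        } else if (pid > 0) {
            wait(NULL);                            /* parent: wait for completion */
        }
    }
    return 0;
}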


6. What is the main advantage of the microkernel approach to system design? How do user
programs and system services interact in a microkernel architecture? What are the
disadvantages of using the microkernel approach?

Benefits typically include the following:


a. adding a new service does not require modifying the kernel
b. it is more secure as more operations are done in user mode than in kernel mode
c. a simpler kernel design and functionality typically results in a more reliable operating
system

User programs and system services interact in a microkernel architecture by using
interprocess communication mechanisms such as messaging. These messages are conveyed
by the operating system. The primary disadvantages of the microkernel architecture are the
overheads associated with interprocess communication and the frequent use of the operating
system’s messaging functions in order to enable the user process and the system service to
interact with each other.

7. In what ways is the modular kernel approach similar to the layered approach? In what ways
does it differ from the layered approach?

The modular kernel approach requires subsystems to interact with each other through
carefully constructed interfaces that are typically narrow (in terms of the functionality that is
exposed to external modules). The layered kernel approach is similar in that respect.
However, the layered kernel imposes a strict ordering of subsystems such that subsystems at
the lower layers are not allowed to invoke operations corresponding to the upper-layer
subsystems. There are no such restrictions in the modular-kernel approach, wherein modules
are free to invoke each other without any constraints.

8. What is the relationship between a guest operating system and a host operating system in a
system like VMware? What factors need to be considered in choosing the host operating
system?

A guest operating system provides its services by mapping them onto the functionality
provided by the host operating system. A key issue that needs to be considered in choosing
the host operating system is whether it is sufficiently general in terms of its system-call
interface in order to be able to support the functionality associated with the guest operating
system.


Lecture 03: Processes and Threads


1. Process Concept
 An operating system executes a variety of programs:
 Batch system – jobs
 Time-shared systems – user programs or tasks
 Textbook uses the terms job and process almost interchangeably
 Process – a program in execution; process execution must progress in sequential
fashion
 A process includes:
 program counter
 stack
 data section

2. Process in Memory

3. Process State
 As a process executes, it changes state
 new: The process is being created
 running: Instructions are being executed
 waiting: The process is waiting for some event to occur
 ready: The process is waiting to be assigned to a processor
 terminated: The process has finished execution
 Diagram of Process State


4. Process Control Block (PCB)


 Information associated with each process
 Process state
 Program counter
 CPU registers
 CPU scheduling information
 Memory-management information
 Accounting information
 I/O status information
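
A hypothetical C sketch of such a record (field names and sizes are invented for illustration; real kernels keep far richer structures, e.g. Linux's task_struct):

/* Illustrative only: a toy process control block. */
typedef enum { NEW, READY, RUNNING, WAITING, TERMINATED } proc_state_t;

typedef struct pcb {
    int            pid;              /* process identifier */
    proc_state_t   state;            /* process state */
    unsigned long  program_counter;  /* address of next instruction */
    unsigned long  registers[16];    /* saved CPU registers */
    int            priority;         /* CPU-scheduling information */
    void          *page_table;       /* memory-management information */
    unsigned long  cpu_time_used;    /* accounting information */
    int            open_files[16];   /* I/O status information */
    struct pcb    *next;             /* link for ready/device queues */
} pcb_t;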

5. CPU Switch From Process to Process


6. Process Scheduling Queues


 Job queue – set of all processes in the system
 Ready queue – set of all processes residing in main memory, ready and waiting to
execute
 Device queues – set of processes waiting for an I/O device
 Processes migrate among the various queues

7. Ready Queue And Various I/O Device Queues


8. Representation of Process Scheduling

9. Schedulers


 Long-term scheduler (or job scheduler) – selects which processes should be brought
into the ready queue
 Short-term scheduler (or CPU scheduler) – selects which process should be executed
next and allocates CPU
 Addition of Medium Term Scheduling

 Short-term scheduler is invoked very frequently (milliseconds) ⇒ (must be fast)


 Long-term scheduler is invoked very infrequently (seconds, minutes) ⇒ (may be slow)
 The long-term scheduler controls the degree of multiprogramming
 Processes can be described as either:
 I/O-bound process – spends more time doing I/O than computations, many short
CPU bursts
 CPU-bound process – spends more time doing computations; few very long CPU
bursts

10. Context Switch


 When CPU switches to another process, the system must save the state of the old
process and load the saved state for the new process via a context switch
 Context of a process represented in the PCB
 Context-switch time is overhead; the system does no useful work while switching

11. Process Creation


 Parent process creates children processes, which, in turn, create other processes,
forming a tree of processes
 Generally, process identified and managed via a process identifier (pid)
 Resource sharing
 Parent and children share all resources
 Children share subset of parent’s resources
 Parent and child share no resources
 Execution
 Parent and children execute concurrently
 Parent waits until children terminate
 Address space
 Child duplicate of parent
 Child has a program loaded into it
 UNIX examples
 fork system call creates new process
 exec system call used after a fork to replace the process’ memory space with a new
program
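
A minimal sketch of the fork()/exec()/wait() pattern on UNIX; the program being exec'ed (/bin/ls) is just an example:

#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    pid_t pid = fork();               /* create a child: duplicate of the parent */

    if (pid < 0) {                    /* fork failed */
        perror("fork");
        exit(1);
    } else if (pid == 0) {            /* child: replace memory image with a new program */
        execlp("/bin/ls", "ls", "-l", (char *)NULL);
        perror("execlp");             /* reached only if exec fails */
        _exit(1);
    } else {                          /* parent: wait until the child terminates */
        int status;
        waitpid(pid, &status, 0);
        printf("child %d finished\n", (int)pid);
    }
    return 0;
}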

12. A tree of processes on a typical Solaris


13. Process Termination


 Process executes last statement and asks the operating system to delete it (exit)
 Output data from child to parent (via wait)
 Process’ resources are de-allocated by operating system
 Parent may terminate execution of children processes (abort)
 Child has exceeded allocated resources
 Task assigned to child is no longer required
 If parent is exiting
o Some operating systems do not allow a child to continue if its parent terminates
 All children terminated – cascading termination

14. Interprocess Communication


 Processes within a system may be independent or cooperating


 Cooperating process can affect or be affected by other processes, including sharing data
 Reasons for cooperating processes:
 Information sharing
 Computation speedup
 Modularity
 Convenience
 Cooperating processes need interprocess communication (IPC)
 Two models of IPC
 Shared memory
 Message passing
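
A small sketch of the message-passing style using an ordinary POSIX pipe between a parent and its child:

#include <stdio.h>
#include <string.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    int fd[2];                         /* fd[0]: read end, fd[1]: write end */
    if (pipe(fd) == -1)
        return 1;

    if (fork() == 0) {                 /* child: the receiver */
        char buf[64];
        close(fd[1]);                  /* not writing */
        ssize_t n = read(fd[0], buf, sizeof buf - 1);
        if (n > 0) {
            buf[n] = '\0';
            printf("child received: %s\n", buf);
        }
        close(fd[0]);
        _exit(0);
    }

    const char *msg = "hello from parent";   /* parent: the sender */
    close(fd[0]);                      /* not reading */
    write(fd[1], msg, strlen(msg));
    close(fd[1]);
    wait(NULL);
    return 0;
}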

15. Communications Models

16. Synchronization
 Message passing may be either blocking or non-blocking
 Blocking is considered synchronous
 Blocking send has the sender block until the message is received
 Blocking receive has the receiver block until a message is available
 Non-blocking is considered asynchronous
 Non-blocking send has the sender send the message and continue
 Non-blocking receive has the receiver receive a valid message or null

17. Single and Multithreaded Processes


 Benefits
 Responsiveness
 Resource Sharing
 Economy
 Scalability

18. Multicore Programming


 Multicore systems putting pressure on programmers, challenges include
 Dividing activities
 Balance
 Data splitting
 Data dependency
 Testing and debugging

19. Multithreaded Server Architecture


20. Concurrent Execution on a Single-core System

21. Parallel Execution on a Multicore System

22. User Threads


 Thread management done by user-level threads library
 Three primary thread libraries:
 POSIX Pthreads
 Win32 threads
 Java threads

23. Kernel Threads


 Supported by the Kernel


 Examples
 Windows XP/2000
 Solaris
 Linux
 Tru64 UNIX
 Mac OS X

24. Multithreading Models


 Many-to-One Model
 Many user-level threads mapped to single kernel thread
 Examples:
o Solaris Green Threads
o GNU Portable Threads

 One-to-One Model
 Each user-level thread maps to kernel thread
 Examples
o Windows NT/XP/2000
o Linux
o Solaris 9 and later

 Many-to-Many Model
 Allows many user level threads to be mapped to many kernel threads


 Allows the operating system to create a sufficient number of kernel threads


 Solaris prior to version 9
 Windows NT/2000 with the ThreadFiber package

25. Two-level Model


 Similar to Many-to-Many Model, except that it allows a user thread to be bound to a
kernel thread
 Examples
 IRIX
 HP-UX
 Tru64 UNIX
 Solaris 8 and earlier

26. Thread Libraries


 Thread library provides programmer with API for creating and managing threads
 Two primary ways of implementing
 Library entirely in user space
 Kernel-level library supported by the OS
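
A minimal Pthreads sketch of creating and joining threads (compile with -pthread; the worker function is invented for the example):

#include <pthread.h>
#include <stdio.h>

/* The function each thread runs; the argument is a pointer to its id. */
static void *worker(void *arg) {
    int id = *(int *)arg;
    printf("thread %d running\n", id);
    return NULL;
}

int main(void) {
    pthread_t threads[3];
    int ids[3] = { 0, 1, 2 };

    for (int i = 0; i < 3; i++)
        pthread_create(&threads[i], NULL, worker, &ids[i]);   /* create */

    for (int i = 0; i < 3; i++)
        pthread_join(threads[i], NULL);                       /* wait for each */

    return 0;
}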

27. Thread Cancellation


 Terminating a thread before it has finished
 Two general approaches:


 Asynchronous cancellation terminates the target thread immediately


 Deferred cancellation allows the target thread to periodically check if it should be
cancelled

28. Signal Handling


 Signals are used in UNIX systems to notify a process that a particular event has
occurred
 A signal handler is used to process signals
i. Signal is generated by particular event
ii. Signal is delivered to a process
iii. Signal is handled
 Options:
 Deliver the signal to the thread to which the signal applies
 Deliver the signal to every thread in the process
 Deliver the signal to certain threads in the process
 Assign a specific thread to receive all signals for the process
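
A short POSIX sketch of steps (i)–(iii) for a single-threaded process, installing a handler for SIGINT (Ctrl-C):

#include <signal.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

static volatile sig_atomic_t got_signal = 0;

/* The handler run when the signal is delivered (step iii). */
static void handle_sigint(int signo) {
    (void)signo;
    got_signal = 1;
}

int main(void) {
    struct sigaction sa;
    memset(&sa, 0, sizeof sa);
    sa.sa_handler = handle_sigint;
    sigaction(SIGINT, &sa, NULL);      /* register the handler for SIGINT */

    printf("press Ctrl-C to deliver SIGINT...\n");
    fflush(stdout);
    while (!got_signal)
        pause();                       /* sleep until a signal arrives */

    printf("SIGINT was handled\n");
    return 0;
}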

29. Thread Pools


 Create a number of threads in a pool where they await work
 Advantages:
 Usually slightly faster to service a request with an existing thread than create a new
thread
 Allows the number of threads in the application(s) to be bound to the size of the
pool

30. Thread Specific Data


 Allows each thread to have its own copy of data
 Useful when you do not have control over the thread creation process (i.e., when using
a thread pool)
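
A brief Pthreads sketch of thread-specific data: one key is created, and each thread stores and reads back its own copy behind that key:

#include <pthread.h>
#include <stdio.h>

static pthread_key_t key;              /* one key, a per-thread value behind it */

static void *worker(void *arg) {
    int my_value = *(int *)arg;
    pthread_setspecific(key, &my_value);       /* store this thread's copy */
    int *seen = pthread_getspecific(key);      /* later reads see only our copy */
    printf("thread sees %d\n", *seen);
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    int a = 1, b = 2;

    pthread_key_create(&key, NULL);    /* no destructor needed here */
    pthread_create(&t1, NULL, worker, &a);
    pthread_create(&t2, NULL, worker, &b);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    pthread_key_delete(key);
    return 0;
}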

Tutorial 03: Processes and Threads


1. What are the process elements in a process control block (PCB)?


At any given point in time, while a process is in execution, this process can be uniquely
characterized by a number of elements that are known as process elements. They are given as
follows:

 Identifier: A unique identifier associated with this process, to distinguish it from all other
processes.
 State: The current state of the process (e.g., if the process is currently executing, it is in the running state).
 Priority: Priority Level relative to other processes.
 Program counter: The address of the next instruction in the program to be executed.
 Memory pointers: Includes pointers to the program code and data associated with this
process, plus any memory blocks shared with other processes.
 Context data: These are data that are present in registers in the processor while the
process is executing.
 I/O status information: Includes outstanding I/O requests, I/O devices (e.g., tape drives)
assigned to this process, a list of files in use by the process, and so on.
 Accounting information: May include the amount of processor time and clock time used,
time limits, account numbers, and so on.

2. Describe the actions taken by a kernel to context-switch between processes.

In general, the operating system must save the state of the currently running process and
restore the state of the process scheduled to be run next. Saving the state of a process
typically includes the values of all the CPU registers in addition to memory allocation.
Context switches must also perform many architecture-specific operations, including flushing
data and instruction caches.

3. Describe the differences among short-term, medium-term, and long term scheduling.

 Short-term (CPU scheduler): selects from jobs in memory those jobs that are ready to
execute and allocates the CPU to one of them.
 Medium-term: used especially with time-sharing systems as an intermediate scheduling
level. A swapping scheme is implemented to remove partially run programs from
memory and reinstate them later to continue where they left off.
 Long-term (job scheduler): determines which jobs are brought into memory for
processing.

The primary difference is in the frequency of their execution. The short-term scheduler must
select a new process quite often. The long-term scheduler is used much less often since it
handles placing jobs in the system and may wait a while.

4. What are two differences between user-level threads and kernel-level threads? Under what
circumstances is one type better than the other?


 User-level threads are unknown by the kernel, whereas the kernel is aware of kernel
threads.
 On systems using either M:1 or M:N mapping, user threads are scheduled by the thread
library and the kernel schedules kernel threads.
 Kernel threads need not be associated with a process whereas every user thread belongs
to a process. Kernel threads are generally more expensive to maintain than user threads as
they must be represented with a kernel data structure.

5. What resources are used when a thread is created? How do they differ from those used when
a process is created?

Because a thread is smaller than a process, thread creation typically uses fewer resources
than process creation. Creating a process requires allocating a process control block (PCB), a
rather large data structure.

The PCB includes a memory map, list of open files, and environment variables. Allocating
and managing the memory map is typically the most time-consuming activity. Creating either
a user or kernel thread involves allocating a small data structure to hold a register set, stack,
and priority.

6. Describe the actions taken by a kernel to context-switch between kernel level threads.

Context switching between kernel threads typically requires saving the value of the CPU
registers from the thread being switched out and restoring the CPU registers of the new
thread being scheduled.

7. What are the benefits of multithreaded system?

 Increased responsiveness – Even if one thread is blocked, other threads can still run.
 Resource sharing: – Code sharing, and memory sharing.
 Economy – Creation of threads uses fewer resources as compared to processes.
 Taking advantage of multiprocessors – Threads can run parallel on different processors.

8. Draw the process state diagram depicting the five states that a process can be in.


 New: The process is being created.


 Running: Instructions are being executed.
 Waiting: The process is waiting for some event to occur.
 Ready: The process is waiting to be assigned to a processor.
 Terminated: The process has finished execution.

9. Which of the following components of program state are shared across threads in a
multithreaded process?
a) Register values
b) Heap memory
c) Global variables
d) Stack memory

The threads of a multithreaded process share heap memory and global variables. Each thread
has its separate set of register values and a separate stack.


Lecture 04: CPU Scheduling


1. Basic Concepts
 Maximum CPU utilization obtained with multiprogramming
 CPU – I/O Burst Cycle – Process execution consists of a cycle of CPU execution and
I/O wait
 CPU burst distribution

2. Alternating Sequence of CPU And I/O Bursts


3. CPU Scheduler
 Selects from among the processes in memory that are ready to execute, and allocates
the CPU to one of them
 CPU scheduling decisions may take place when a process:
i. Switches from running to waiting state
ii. Switches from running to ready state
iii. Switches from waiting to ready
iv. Terminates
 Scheduling under circumstances (i) and (iv) is non-preemptive
 All other scheduling is preemptive

4. Dispatcher
 Dispatcher module gives control of the CPU to the process selected by the short-term
scheduler; this involves:
 switching context
 switching to user mode
 jumping to the proper location in the user program to restart that program
 Dispatch latency – time it takes for the dispatcher to stop one process and start another
running

5. Scheduling Criteria
 CPU utilization – keep the CPU as busy as possible
 Throughput – # of processes that complete their execution per time unit
 Turnaround time – amount of time to execute a particular process
 Waiting time – amount of time a process has been waiting in the ready queue
 Response time – amount of time it takes from when a request was submitted until the
first response is produced, not output (for time-sharing environment)

6. Scheduling Algorithm Optimization Criteria


 Max CPU utilization
 Max throughput
 Min turnaround time
 Min waiting time
 Min response time

7. First-Come, First-Served (FCFS) Scheduling


a. Case 1
Process Burst Time
P1 24
P2 3
P3 3
 Suppose that the processes arrive in the order: P1 , P2 , P3
 The Gantt Chart for the schedule is:

 Waiting time for P1 = 0; P2 = 24; P3 = 27
 Average waiting time: (0 + 24 + 27) / 3 = 17
b. Case 2
 Suppose that the processes arrive in the order: P2 , P3 , P1
 The Gantt chart for the schedule is:

 Waiting time for P1 = 6; P2 = 0; P3 = 3
 Average waiting time: (6 + 0 + 3) / 3 = 3
 Much better than previous case
 Convoy effect – short process behind long process
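
The waiting-time arithmetic for FCFS can be mechanized in a few lines of C; this sketch hard-codes the Case 1 burst times from above:

#include <stdio.h>

int main(void) {
    /* Case 1 arrival order: P1, P2, P3 with the burst times above. */
    int burst[] = { 24, 3, 3 };
    int n = sizeof burst / sizeof burst[0];
    int wait = 0, total_wait = 0;

    for (int i = 0; i < n; i++) {
        printf("P%d waits %d\n", i + 1, wait);  /* waiting time = sum of earlier bursts */
        total_wait += wait;
        wait += burst[i];
    }
    printf("average waiting time = %.2f\n", (double)total_wait / n);  /* 17.00 */
    return 0;
}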


8. Shortest-Job-First (SJF) Scheduling


 Associate with each process the length of its next CPU burst.
Use these lengths to schedule the process with the shortest time
 SJF is optimal – gives minimum average waiting time for given set of processes
 The difficulty is knowing the length of the next CPU request
 Example of SJF
Process Burst Time
P1 6
P2 8
P3 7
P4 3
 SJF scheduling chart

 Average waiting time: (3 + 16 + 9 + 0) / 4 = 7

9. Priority Scheduling
 A priority number (integer) is associated with each process
 The CPU is allocated to the process with the highest priority
(smallest integer ≡ highest priority)
 Preemptive
 Non-preemptive
 SJF is a priority scheduling where priority is the predicted next CPU burst time
 Problem ≡ Starvation – low priority processes may never execute
 Solution ≡ Aging – as time progresses increase the priority of the process


10. Round Robin (RR)


 Each process gets a small unit of CPU time (time quantum), usually 10-100
milliseconds. After this time has elapsed, the process is preempted and added to the end
of the ready queue.
 If there are n processes in the ready queue and the time quantum is q, then each
process gets 1/n of the CPU time in chunks of at most q time units at once. No
process waits more than (n − 1)q time units.
 Performance
 q large ⇒ First-In, First Out (FIFO)
 q small ⇒ q must be large with respect to context switch, otherwise
overhead is too high
 Example of RR with Time Quantum = 4
Process Burst Time
P1 24
P2 3
P3 3
 The Gantt chart is:

 Typically, higher average turnaround than SJF, but better response
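
A compact simulation sketch (illustrative only) of RR with time quantum 4 on the example
processes; it assumes all three arrive at time 0 and scans the processes in a fixed circular
order instead of keeping an explicit ready queue, which yields the same schedule for this
particular example.

#include <stdio.h>

int main(void) {
    int remaining[] = { 24, 3, 3 };        /* P1, P2, P3, all arriving at time 0 */
    int n = sizeof remaining / sizeof remaining[0];
    int quantum = 4, time = 0, left = n;

    while (left > 0) {
        for (int i = 0; i < n; i++) {      /* visit processes in a fixed circular order */
            if (remaining[i] == 0)
                continue;
            int slice = remaining[i] < quantum ? remaining[i] : quantum;
            printf("P%d runs %d-%d\n", i + 1, time, time + slice);
            time += slice;
            remaining[i] -= slice;
            if (remaining[i] == 0)
                left--;
        }
    }
    return 0;
}

Its output reproduces the Gantt chart for this example: P1 0–4, P2 4–7, P3 7–10, after which
P1 runs alone from 10 to 30 in 4 ms slices.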

11. Time Quantum and Context Switch Time


12. Turnaround Time Varies With The Time Quantum

13. Multilevel Queue


 Ready queue is partitioned into separate queues:
 foreground (interactive)
 background (batch)
 Each queue has its own scheduling algorithm
 foreground – RR
 background – FCFS
 Scheduling must be done between the queues
 Fixed priority scheduling; (i.e., serve all from foreground then from background).
Possibility of starvation.
 Time slice – each queue gets a certain amount of CPU time which it can schedule
amongst its processes; i.e., 80% to foreground in RR
 20% to background in FCFS


 Multilevel Queue Scheduling

14. Multilevel Feedback Queue


 A process can move between the various queues; aging can be implemented this way
 Multilevel-feedback-queue scheduler defined by the following parameters:
 number of queues
 scheduling algorithms for each queue
 method used to determine when to upgrade a process
 method used to determine when to demote a process
 method used to determine which queue a process will enter when that process needs
service
 Example of Multilevel Feedback Queue
 Three queues:
 Q0 – RR with time quantum 8 milliseconds
 Q1 – RR time quantum 16 milliseconds
 Q 2 – FCFS
 Scheduling
 A new job enters queue Q0 which is served FCFS. When it gains CPU, job
receives 8 milliseconds. If it does not finish in 8 milliseconds, job is moved to
queue Q1 .
 At Q1 job is again served FCFS and receives 16 additional milliseconds. If it
still does not complete, it is preempted and moved to queue Q 2 .
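
A rough sketch (illustrative only; the struct and function names are made up) of the demotion
rule in this three-queue example: a job that uses up its whole quantum without finishing drops
one level, and Q2 simply runs jobs to completion.

#include <stdio.h>

#define LEVELS 3

/* Quanta for Q0 and Q1; 0 marks Q2 as FCFS (run to completion). */
static const int quantum[LEVELS] = { 8, 16, 0 };

struct job {
    int id;
    int level;        /* current queue: 0, 1 or 2 */
    int remaining;    /* CPU time still needed (ms) */
};

/* Give the job one scheduling opportunity at its current level and
 * demote it if it used the whole quantum without finishing. */
void run_once(struct job *j) {
    int q = quantum[j->level];
    int slice = (q == 0 || j->remaining < q) ? j->remaining : q;

    j->remaining -= slice;
    printf("job %d ran %d ms at level Q%d\n", j->id, slice, j->level);

    if (j->remaining > 0 && j->level < LEVELS - 1)
        j->level++;                       /* demote to the next queue */
}

int main(void) {
    struct job j = { 1, 0, 30 };          /* a 30 ms job entering Q0 */
    while (j.remaining > 0)
        run_once(&j);
    return 0;
}

For a 30 ms job this prints 8 ms at Q0, 16 ms at Q1 and the final 6 ms at Q2, mirroring the
example above.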


15. Thread Scheduling


 Distinction between user-level and kernel-level threads
 Many-to-one and many-to-many models, thread library schedules user-level threads to
run on LWP
 Known as process-contention scope (PCS) since scheduling competition is within
the process
 Kernel thread scheduled onto available CPU is system-contention scope (SCS) –
competition among all threads in system

16. Multiple-Processor Scheduling


 CPU scheduling more complex when multiple CPUs are available
 Homogeneous processors within a multiprocessor
 Asymmetric multiprocessing – only one processor accesses the system data structures,
alleviating the need for data sharing
 Symmetric multiprocessing (SMP) – each processor is self-scheduling, all processes
in common ready queue, or each has its own private queue of ready processes
 Processor affinity – process has affinity for processor on which it is currently running
 soft affinity
 hard affinity


17. NUMA and CPU Scheduling

18. Multicore Processors


 Recent trend to place multiple processor cores on same physical chips
 Faster and consume less power
 Multiple threads per core also growing
 Takes advantage of memory stall to make progress on another thread while memory
retrieve happens

19. Multithreaded Multicore System

Tutorial 04: CPU Scheduling


1. Why is it important for the scheduler to distinguish I/O-bound programs from CPU-bound
programs?

I/O-bound programs have the property of performing only a small amount of computation
before performing I/O. Such programs typically do not use up their entire CPU quantum.
CPU-bound programs, on the other hand, use their entire quantum without performing any
blocking I/O operations. Consequently, one could make better use of the computer’s
resources by giving higher priority to I/O-bound programs and allow them to execute ahead
of the CPU-bound programs.

2. What is the difference between preemptive and non-preemptive scheduling?

Preemptive scheduling allows a process to be interrupted in the midst of its execution, taking
the CPU away and allocating it to another process. Non-preemptive scheduling ensures that a
process relinquishes control of the CPU only when it finishes with its current CPU burst.

3. Consider the following set of processes, with the length of the CPU-burst time given in
milliseconds:

Process Burst Time Arrival Time


P1 10 0
P2 2 1
P3 3 2
P4 1 3
P5 5 4

a) Draw the Gantt charts illustrating the execution of these processes using FCFS, Non-
Preemptive SJF and RR (quantum=2) scheduling.
b) What is the turnaround time of each process for each of the scheduling algorithms?
c) What is the waiting time of each process for each of the scheduling algorithms?
d) Which of the schedules in Q3 results in the minimal average waiting time (over all
processes)?

a)
FCFS: P1 (0–10), P2 (10–12), P3 (12–15), P4 (15–16), P5 (16–21)
Non-preemptive SJF: P1 (0–10), P4 (10–11), P2 (11–13), P3 (13–16), P5 (16–21)
RR (q = 2): P1 (0–2), P2 (2–4), P3 (4–6), P1 (6–8), P4 (8–9), P5 (9–11), P3 (11–12), P1 (12–14),
P5 (14–16), P1 (16–18), P5 (18–19), P1 (19–21)


b) Turnaround Time = Finish Time – Arrival Time

FCFS RR SJF
P1 10 21 10
P2 11 3 12
P3 13 10 14
P4 13 6 8
P5 17 15 17

c) Waiting Time = Turnaround Time – Burst Time

FCFS RR SJF
P1 0 11 0
P2 9 1 10
P3 10 7 11
P4 12 5 7
P5 12 10 12

d) Average Waiting Time:

FCFS: (0 + 9 + 10 + 12 + 12) / 5 = 43 / 5 = 8.6
RR: (11 + 1 + 7 + 5 + 10) / 5 = 34 / 5 = 6.8 (Minimum)
SJF: (0 + 10 + 11 + 7 + 12) / 5 = 40 / 5 = 8


4. Explain the difference in the degree to which the following scheduling algorithms
discriminate in favor of short processes.
a) FCFS
b) RR

a) FCFS – discriminates against short jobs since any short jobs arriving after long jobs will
have a longer waiting time.
b) RR – treats all jobs equally (giving them equal bursts of CPU time) so short jobs will be
able to leave the system faster since they will finish first.

5. Consider the following processes with the length of a CPU burst time given in milliseconds.
The processes arrive according to the arrival times shown. (Low numbers represent high
priority)

Process Arrival Time Priority Burst Time


P0 0 2 8
P5 0 1 6
P1 4 5 15
P4 9 4 13
P2 7 3 9
P3 13 1 5

a) Draw 3 (three) Gantt charts illustrating the execution of these processes using the following
scheduling algorithms.
i. Non-preemptive SJF
ii. Preemptive priority scheduling
iii. Round Robin (quantum=3) scheduling
b) Calculate the average waiting time for Preemptive priority and Round Robin Scheduling

a)
i. Non-preemptive SJF: P5 (0–6), P0 (6–14), P3 (14–19), P2 (19–28), P4 (28–41), P1 (41–56)

ii. Preemptive priority scheduling: P5 (0–6), P0 (6–13), P3 (13–18), P0 (18–19), P2 (19–28),
P4 (28–41), P1 (41–56)

iii. Round Robin (quantum = 3) scheduling: P5 (0–3), P0 (3–6), P5 (6–9), P1 (9–12), P0 (12–15),
P2 (15–18), P4 (18–21), P1 (21–24), P3 (24–27), P0 (27–29), P2 (29–32), P4 (32–35), P1 (35–38),
P3 (38–40), P2 (40–43), P4 (43–46), P1 (46–49), P4 (49–52), P1 (52–55), P4 (55–56)


b)
Waiting Time (Preemptive priority)
P0: (6 − 0) + (18 − 13) = 6 + 5 = 11
P1: (41 − 4) = 37
P2: (19 − 7) = 12
P3: (13 − 13) = 0
P4: (28 − 9) = 19
P5: (0 − 0) = 0

Average waiting time = (11 + 37 + 12 + 0 + 19 + 0) / 6 = 79 / 6 ≈ 13.17

Waiting Time (Round Robin)
P0: (3 − 0) + (12 − 6) + (27 − 15) = 3 + 6 + 12 = 21
P1: (9 − 4) + (21 − 12) + (35 − 24) + (46 − 38) + (52 − 49) = 5 + 9 + 11 + 8 + 3 = 36
P2: (15 − 7) + (29 − 18) + (40 − 32) = 8 + 11 + 8 = 27
P3: (24 − 13) + (38 − 27) = 11 + 11 = 22
P4: (18 − 9) + (32 − 21) + (43 − 35) + (49 − 46) + (55 − 52) = 9 + 11 + 8 + 3 + 3 = 34
P5: (0 − 0) + (6 − 3) = 0 + 3 = 3

Average waiting time = (21 + 36 + 27 + 22 + 34 + 3) / 6 = 143 / 6 ≈ 23.83


Lecture 05: Process Synchronization


1. Background of Process Synchronization
 Concurrent access to shared data may result in data inconsistency
 Maintaining data consistency requires mechanisms to ensure the orderly execution of
cooperating processes
 Suppose that we wanted to provide a solution to the consumer-producer problem that
fills all the buffers. We can do so by having an integer count that keeps track of the
number of full buffers. Initially, count is set to 0. It is incremented by the producer after
it produces a new buffer and is decremented by the consumer after it consumes a buffer.

2. Producer
while (true) {
/* produce an item and put in nextProduced */
while (count == BUFFER_SIZE); // do nothing
buffer [in] = nextProduced;
in = (in + 1) % BUFFER_SIZE;
count++;
}

3. Consumer
while (true) {
while (count == 0); // do nothing
nextConsumed = buffer[out];
out = (out + 1) % BUFFER_SIZE;
count--;
/* consume the item in nextConsumed */
}


4. Race Condition
 count++ could be implemented as

register1 = count
register1 = register1 + 1
count = register1

 count-- could be implemented as

register2 = count
register2 = register2 – 1
count = register2

 Consider this execution interleaving with “count = 5” initially:

S0: producer execute register1 = count {register1 = 5}


S1: producer execute register1 = register1 + 1 {register1 = 6}
S2: consumer execute register2 = count {register2 = 5}
S3: consumer execute register2 = register2 – 1 {register2 = 4}
S4: producer execute count = register1 {count = 6}
S5: consumer execute count = register2 {count = 4}

5. Solution to Critical-Section Problem


i. Mutual Exclusion – If process Pi is executing in its critical section, then no other
processes can be executing in their critical sections
ii. Progress – If no process is executing in its critical section and there exist some
processes that wish to enter their critical section, then the selection of the processes that
will enter the critical section next cannot be postponed indefinitely
iii. Bounded Waiting – A bound must exist on the number of times that other processes are
allowed to enter their critical sections after a process has made a request to enter its
critical section and before that request is granted
 Assume that each process executes at a nonzero speed
 No assumption concerning relative speed of the N processes

6. Peterson’s Solution
 Two process solution
 Assume that the LOAD and STORE instructions are atomic; that is, cannot be
interrupted.
 The two processes share two variables:
 int turn;
 Boolean flag[2]
 The variable turn indicates whose turn it is to enter the critical section.
 The flag array is used to indicate if a process is ready to enter the critical section.
flag[i] = TRUE implies that process Pi is ready!

7. Algorithm for Process Pi


do {
flag[i] = TRUE;
turn = j;
while (flag[j] && turn == j);
critical section
flag[i] = FALSE;
remainder section
} while (TRUE);

8. Synchronization Hardware
 Many systems provide hardware support for critical section code
 Uniprocessors – could disable interrupts
 Currently running code would execute without preemption
 Generally too inefficient on multiprocessor systems
o Operating systems using this not broadly scalable
 Modern machines provide special atomic hardware instructions
o Atomic = non-interruptable
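
As a sketch of how such an atomic instruction can be used (an assumption-laden example, not
part of the slides; it relies on the GCC/Clang __atomic builtins to model a hardware
test-and-set), a spinlock can be built as follows and then used exactly like the acquire/release
lock in the next section:

#include <stdbool.h>

static volatile bool lock = false;       /* false = unlocked */

/* Atomically set the lock to true and return its previous value. */
static bool test_and_set(volatile bool *target) {
    return __atomic_exchange_n(target, true, __ATOMIC_SEQ_CST);
}

void acquire(void) {
    while (test_and_set(&lock))
        ;                                /* busy-wait (spin) until the old value was false */
}

void release(void) {
    __atomic_store_n(&lock, false, __ATOMIC_SEQ_CST);
}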

9. Solution to Critical-Section Problem Using Locks

do {
acquire lock
critical section
release lock
remainder section
} while (TRUE);

10. Semaphore
 Synchronization tool that does not require busy waiting


 Semaphore S – integer variable


 Two standard operations modify S: wait() and signal()
 Less complicated
 Can only be accessed via two indivisible (atomic) operations:
 Lock

wait (s) {
    while (s <= 0)
        ;   // no-op (busy wait)
    s--;
}

 Unlock

signal (s) {
s++;
}

11. Semaphore as General Synchronization Tool


 Counting semaphore – integer value can range over an unrestricted domain
 Binary semaphore – integer value can range only between 0 and 1; can be simpler to
implement
 Also known as mutex locks
 Can implement a counting semaphore S as a binary semaphore
 Provides mutual exclusion

Semaphore mutex; // initialized to 1


do {
    wait (mutex);
        // critical section
    signal (mutex);
        // remainder section
} while (TRUE);

12. Semaphore Implementation


 Must guarantee that no two processes can execute wait() and signal() on the
same semaphore at the same time
 Thus, implementation becomes the critical section problem where the wait and signal
code are placed in the critical section.
 Could now have busy waiting in critical section implementation
o But implementation code is short
o Little busy waiting if critical section rarely occupied
 Note that applications may spend lots of time in critical sections and therefore this is
not a good solution.

13. Semaphore Implementation with no Busy waiting


 With each semaphore there is an associated waiting queue. Each entry in a waiting
queue has two data items:
 value (of type integer)
 pointer to next record in the list
 Two operations:
 block – place the process invoking the operation on the appropriate waiting queue.
 wakeup – remove one of processes in the waiting queue and place it in the ready
queue.
 Implementation of wait:

wait(semaphore *S) {
S->value--;
if (S->value < 0) {
add this process to S->list;
block();
}
}

 Implementation of signal:

signal(semaphore *S) {
S->value++;
if (S->value <= 0) {
remove a process P from S->list;
wakeup(P);
}
}

14. Classical Problems of Synchronization


 Bounded-Buffer Problem


 N buffers, each can hold one item


 Semaphore mutex initialized to the value 1
 Semaphore full initialized to the value 0
 Semaphore empty initialized to the value N.
 The structure of the producer process

do {

// produce an item in nextp

wait (empty);
wait (mutex);

// add the item to the buffer

signal (mutex);
signal (full);
} while (TRUE);

 The structure of the consumer process

do {
wait (full);
wait (mutex);
// remove an item from buffer to nextc

signal (mutex);
signal (empty);

// consume the item in nextc

} while (TRUE);

 Readers-Writers Problem
 A data set is shared among a number of concurrent processes
o Readers – only read the data set; they do not perform any updates


o Writers – can both read and write


 Problem – allow multiple readers to read at the same time. Only one single writer
can access the shared data at the same time
 Shared Data
o Data set
o Semaphore mutex initialized to 1
o Semaphore wrt initialized to 1
o Integer readcount initialized to 0
 The structure of a writer process

do {
wait (wrt);

// writing is performed

signal (wrt);

} while (TRUE);

 The structure of a reader process

do {
wait (mutex);
readcount++;
if (readcount == 1)
wait (wrt);
signal (mutex);

// reading is performed

wait (mutex);
readcount--;
if (readcount == 0)
signal (wrt);
signal (mutex);
} while (TRUE);

 Dining-Philosophers Problem


 Shared data
o Bowl of rice (data set)
o Semaphore chopstick [5] initialized to 1
 The structure of Philosopher i:

do {
wait (chopstick[i]);
wait (chopstick[(i + 1) % 5]);

// eat

signal (chopstick[i]);
signal (chopstick[(i + 1) % 5]);

// think

} while (TRUE);

15. Problem with Semaphores


 Incorrect use of semaphore operations (each of the following can cause deadlock or violate mutual exclusion):
 signal (mutex) ... wait (mutex)
 wait (mutex) ... wait (mutex)
 omitting of wait (mutex) or signal (mutex) (or both)

16. Monitors


 A high-level abstraction that provides a convenient and effective mechanism for
process synchronization
 Only one process may be active within the monitor at a time

monitor monitor-name
{
// shared variable declarations
procedure P1 (...) {......}
...
procedure Pn (...) {......}
...
initialization code (...) {
...
}
}

 Schematic view of a Monitor
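
C has no built-in monitor construct, but the same discipline (at most one active thread inside,
plus condition variables for waiting) is commonly approximated with a pthread mutex and
condition variable; the counter below is an illustrative sketch, not taken from the slides.

#include <pthread.h>

/* "Monitor" state: every procedure locks the mutex on entry and
 * unlocks it on exit, so at most one thread is active inside. */
static pthread_mutex_t monitor_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  nonzero      = PTHREAD_COND_INITIALIZER;
static int count = 0;

void increment(void) {
    pthread_mutex_lock(&monitor_lock);
    count++;
    pthread_cond_signal(&nonzero);        /* wake one waiter, if any */
    pthread_mutex_unlock(&monitor_lock);
}

void decrement(void) {
    pthread_mutex_lock(&monitor_lock);
    while (count == 0)                    /* waiting releases the lock and re-acquires it */
        pthread_cond_wait(&nonzero, &monitor_lock);
    count--;
    pthread_mutex_unlock(&monitor_lock);
}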

17. Synchronization Examples


 Solaris Synchronization
 Implements a variety of locks to support multitasking, multithreading (including
real-time threads), and multiprocessing
 Uses adaptive mutexes for efficiency when protecting data from short code
segments
 Uses condition variables and readers-writers locks when longer sections of code
need access to data
 Uses turnstiles to order the list of threads waiting to acquire either an adaptive
mutex or reader-writer lock
 Windows XP Synchronization
 Uses interrupt masks to protect access to global resources on uniprocessor systems
 Uses spinlocks on multiprocessor systems
 Also provides dispatcher objects which may act as either mutexes or semaphores
 Dispatcher objects may also provide events
o An event acts much like a condition variable
 Linux Synchronization
 Linux:
o Prior to kernel Version 2.6, disables interrupts to implement short critical
sections
o Version 2.6 and later, fully preemptive
 Linux provides:
o semaphores
o spin locks

Tutorial 05: Process Synchronization


1. What three conditions must be satisfied in order to solve the critical section problem?

In a solution to the critical-section problem, no thread may be executing in its critical section
while another thread is executing in its critical section (mutual exclusion). Furthermore, only
those threads that are not executing in their remainder sections can participate in the decision
on which process will enter its critical section next, and this selection cannot be postponed
indefinitely (progress). Finally, a bound must exist on the number of times that other threads
are allowed to enter their critical sections after a thread has made a request to enter its critical
section and before that request is granted (bounded waiting).

2. Write two short methods that implement the simple semaphore wait() and signal()
operations on global variable, S.

wait (S) {
while (S <= 0);
S--;
}

signal (S) {
S++;
}

3. Describe the dining-philosophers problem and how it relates to operating systems.

The scenario involves five philosophers sitting at a round table with a bowl of food and five
chopsticks. Each chopstick sits between two adjacent philosophers. The philosophers are
allowed to think and eat. Since two chopsticks are required for each philosopher to eat, and
only five chopsticks exist at the table, no two adjacent philosophers may be eating at the
same time. A scheduling problem arises as to who gets to eat at what time. This problem is
similar to the problem of scheduling processes that require a limited number of resources.


4. Race conditions are possible in many computer systems. Consider a banking system with the
following two functions: deposit(amount) and withdraw(amount). These two
functions are passed the amount that is to be deposited or withdrawn from a bank account.
Assume a shared bank account exists between a husband and wife and concurrently the
husband calls the withdraw() function and the wife calls deposit(). Describe how a
race condition is possible and what might be done to prevent the race condition from
occurring.

Assume the balance in the account is 250.00 and the husband calls withdraw(50) while the
wife calls deposit(100). Obviously the correct final value should be 300.00. Because the two
transactions are not executed atomically, the local value of balance for the husband becomes
200.00, but before he can commit the transaction, the deposit(100) operation takes place and
updates the shared value of balance to 300.00. We then switch back to the husband and the
shared balance is set to 200.00 – obviously an incorrect value. The race condition can be
prevented by making each balance update atomic, for example by protecting the
read-modify-write of the balance with a mutex lock or a binary semaphore.
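
A minimal sketch of one way to prevent this race, using a POSIX mutex so that the
read-modify-write of the balance cannot interleave; the variable and function names are
illustrative, not from the question.

#include <pthread.h>

static double balance = 250.00;                    /* shared account balance */
static pthread_mutex_t account_lock = PTHREAD_MUTEX_INITIALIZER;

void deposit(double amount) {
    pthread_mutex_lock(&account_lock);             /* enter critical section */
    balance += amount;
    pthread_mutex_unlock(&account_lock);           /* leave critical section */
}

void withdraw(double amount) {
    pthread_mutex_lock(&account_lock);
    balance -= amount;
    pthread_mutex_unlock(&account_lock);
}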

5. How does the signal() operation associated with monitors differ from the corresponding
operation defined for semaphores?

The signal() operation associated with monitors is not persistent in the following sense:
if a signal is performed and if there are no waiting threads, then the signal is simply ignored
and the system does not remember that the signal took place. If a subsequent wait operation
is performed, then the corresponding thread simply blocks. In semaphores, on the other hand,
every signal results in a corresponding increment of the semaphore value even if there are no
waiting threads. A future wait operation would immediately succeed because of the earlier
increment.


6. The following program segment is used to manage a finite number of instances of an
available resource. The maximum number of resources and the number of available resources
are declared as follows:

#define MAX_RESOURCES 5
int available_resources = MAX_RESOURCES;

When a process wishes to obtain a number of resources, it invokes the decrease_count()
function:

/* decrease available_resources by count resources */


/* return 0 if sufficient resources available, */
/* otherwise return -1 */
int decrease_count(int count) {
if (available_resources < count)
return -1;
else {
available_resources -= count;
return 0;
}
}

When a process wants to return a number of resources, it calls the increase_count()
function:

/* increase available_resources by count */


int increase_count(int count) {
available_resources += count;
return 0;
}

The preceding program segment produces a race condition. Do the following:


a. Identify the data involved in the race condition.
b. Identify the location (or locations) in the code where the race condition occurs.
c. Using a semaphore, fix the race condition.

a. The variable available_resources.


b. The code that decrements available_resources and the code that increments
available_resources are the statements that could be involved in race conditions.
c. Use a semaphore to represent the available_resources variable and replace
increment and decrement operations by semaphore increment and semaphore decrement
operations.
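
One possible concrete form of answer (c), sketched with a POSIX binary semaphore used as a
mutex around the shared counter so that the check and the update become atomic while the
original return values are kept; this is an illustration, not the official model answer.

#include <semaphore.h>

#define MAX_RESOURCES 5

int   available_resources = MAX_RESOURCES;
sem_t mutex;                              /* binary semaphore guarding the counter */

void init(void) {
    sem_init(&mutex, 0, 1);               /* initialized to 1 */
}

int decrease_count(int count) {
    int status = 0;
    sem_wait(&mutex);                     /* the check and the update are now atomic */
    if (available_resources < count)
        status = -1;
    else
        available_resources -= count;
    sem_post(&mutex);
    return status;
}

int increase_count(int count) {
    sem_wait(&mutex);
    available_resources += count;
    sem_post(&mutex);
    return 0;
}

Answer (c) can also be read as replacing the integer itself with a counting semaphore
initialized to MAX_RESOURCES, in which case a requesting process would block until enough
resources are free instead of receiving -1.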


Lecture 06: Deadlocks


1. The Deadlock Problem
 A set of blocked processes each holding a resource and waiting to acquire a resource
held by another process in the set
 Example
 System has 2 disk drives
 P1 and P2 each hold one disk drive and each needs another one
 Example
 semaphores A and B, initialized to 1
P0 P1
wait(A); wait(B);
wait(B); wait(A);

2. Bridge Crossing Example

 Traffic only in one direction


 Each section of a bridge can be viewed as a resource
 If a deadlock occurs, it can be resolved if one car backs up (preempt resources and
rollback)
 Several cars may have to be backed up if a deadlock occurs
 Starvation is possible
 Note – Most Operating Systems do not prevent or deal with deadlocks

3. System Model
 Resource types R1 , R2 , … , R m
CPU cycles, memory space, I/O devices
 Each resource type Ri has W i instances.
 Each process utilizes a resource as follows:
 request
 use
 release


4. Deadlock Characterization
 Deadlock can arise if four conditions hold simultaneously:
 Mutual exclusion: only one process at a time can use a resource
 Hold and wait: a process holding at least one resource is waiting to acquire
additional resources held by other processes
 No preemption: a resource can be released only voluntarily by the process holding
it, after that process has completed its task
 Circular wait: there exists a set { P0, P1, …, Pn } of waiting processes such that
P0 is waiting for a resource that is held by P1, P1 is waiting for a resource
that is held by P2, …, Pn−1 is waiting for a resource that is held by Pn, and
Pn is waiting for a resource that is held by P0.

5. Resource-Allocation Graph
 A set of vertices V and a set of edges E
 V is partitioned into two types:
 P= { P1 , P2 , … , Pn } , the set consisting of all the processes in the system
 R= { R 1 , R2 , … , R m } , the set consisting of all resource types in the system
 request edge – directed edge Pi → Rj
 assignment edge – directed edge Rj → Pi
 Process

 Resource Type with 4 instances

 Pi requests instance of Rj

 Pi is holding an instance of Rj


 Example of a Resource-Allocation Graph

 Resource-Allocation Graph With A Deadlock


 Resource-Allocation Graph With A Cycle But No Deadlock

 Basic Facts
 If graph contains no cycles ⇒ no deadlock
 If graph contains a cycle ⇒
o if only one instance per resource type, then deadlock
o if several instances per resource type, possibility of deadlock

6. Methods for Handling Deadlocks


 Ensure that the system will never enter a deadlock state
 Allow the system to enter a deadlock state and then recover
 Ignore the problem and pretend that deadlocks never occur in the system; used by most
operating systems, including UNIX


7. Deadlock Prevention
 Restrain the ways request can be made
 Mutual Exclusion – not required for sharable resources; must hold for nonsharable
resources
 Hold and Wait – must guarantee that whenever a process requests a resource, it
does not hold any other resources
o Require process to request and be allocated all its resources before it begins
execution, or allow process to request resources only when the process has none
o Low resource utilization; starvation possible
 No Preemption –
o If a process that is holding some resources requests another resource that cannot
be immediately allocated to it, then all resources currently being held are
released
o Preempted resources are added to the list of resources for which the process is
waiting
o Process will be restarted only when it can regain its old resources, as well as the
new ones that it is requesting
 Circular Wait – impose a total ordering of all resource types, and require that each
process requests resources in an increasing order of enumeration

8. Deadlock Avoidance
 Requires that the system has some additional a priori information available
 Simplest and most useful model requires that each process declare the maximum
number of resources of each type that it may need
 The deadlock-avoidance algorithm dynamically examines the resource-allocation
state to ensure that there can never be a circular-wait condition
 Resource-allocation state is defined by the number of available and allocated
resources, and the maximum demands of the processes


9. Safe State
 When a process requests an available resource, system must decide if immediate
allocation leaves the system in a safe state
 System is in a safe state if there exists a sequence <P1, P2, …, Pn> of ALL the
processes in the system such that, for each Pi, the resources that Pi can still
request can be satisfied by the currently available resources + the resources held by all the
Pj, with j < i
 That is:
 If Pi's resource needs are not immediately available, then Pi can wait until all
Pj have finished
 When Pj is finished, Pi can obtain needed resources, execute, return
allocated resources, and terminate
 When Pi terminates, Pi+1 can obtain its needed resources, and so on
 Basic Facts
 If a system is in safe state ⇒ no deadlocks
 If a system is in unsafe state ⇒ possibility of deadlock
 Avoidance ⇒ ensure that a system will never enter an unsafe state.

10. Safe, Unsafe, Deadlock State

11. Avoidance Algorithms


 Single instance of a resource type
 Use a resource-allocation graph
 Multiple instances of a resource type
 Use the banker’s algorithm


12. Resource-Allocation Graph Scheme


 Claim edge Pi → Rj indicates that process Pi may request resource Rj;
represented by a dashed line
 Claim edge converts to request edge when a process requests a resource
 Request edge converted to an assignment edge when the resource is allocated to the
process
 When a resource is released by a process, assignment edge reconverts to a claim edge
 Resources must be claimed a priori in the system
 Resource-Allocation Graph

13. Unsafe State in Resource-Allocation Graph

14. Resource-Allocation Graph Algorithm


 Suppose that process Pi requests a resource R j
 The request can be granted only if converting the request edge to an assignment edge
does not result in the formation of a cycle in the resource allocation graph


15. Banker’s Algorithm


 Multiple instances
 Each process must a priori claim maximum use
 When a process requests a resource it may have to wait
 When a process gets all its resources it must return them in a
 finite amount of time
 Data Structures for the Banker’s Algorithm
 Let n = number of processes, and m = number of resource types.
o Available: Vector of length m. If Available[j] = k, there are k
instances of resource type Rj available
o Max: n × m matrix. If Max[i, j] = k, then process Pi may request at
most k instances of resource type Rj
o Allocation: n × m matrix. If Allocation[i, j] = k, then Pi is currently
allocated k instances of Rj
o Need: n × m matrix. If Need[i, j] = k, then Pi may need k more
instances of Rj to complete its task
Need[i, j] = Max[i, j] − Allocation[i, j]

16. Safety Algorithm


i. Let Work and Finish be vectors of length m and n, respectively. Initialize:
Work = Available
Finish[i] = false for i = 0, 1, …, n − 1
ii. Find an i such that both:
(a) Finish[i] = false
(b) Needi ≤ Work
If no such i exists, go to step iv
iii. Work = Work + Allocationi
Finish[i] = true
go to step ii
iv. If Finish[i] = true for all i, then the system is in a safe state
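
A compact C sketch of this safety algorithm (illustrative only; N, M and the global matrices are
assumed to be filled in elsewhere). It returns true exactly when a safe sequence exists.

#include <stdbool.h>

#define N 5   /* number of processes (assumed) */
#define M 3   /* number of resource types (assumed) */

int available[M];
int allocation[N][M];
int need[N][M];                             /* need = max - allocation */

bool is_safe(void) {
    int  work[M];
    bool finish[N] = { false };

    for (int j = 0; j < M; j++)             /* step i: Work = Available */
        work[j] = available[j];

    for (;;) {
        bool progressed = false;
        for (int i = 0; i < N; i++) {       /* step ii: Finish[i] == false and Need_i <= Work */
            if (finish[i])
                continue;
            bool fits = true;
            for (int j = 0; j < M; j++)
                if (need[i][j] > work[j]) { fits = false; break; }
            if (fits) {                     /* step iii: pretend Pi runs and releases its resources */
                for (int j = 0; j < M; j++)
                    work[j] += allocation[i][j];
                finish[i] = true;
                progressed = true;
            }
        }
        if (!progressed)
            break;                          /* no further candidate: go to step iv */
    }

    for (int i = 0; i < N; i++)             /* step iv: safe iff every Finish[i] is true */
        if (!finish[i])
            return false;
    return true;
}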

17. Resource-Request Algorithm for Process Pi


 Requesti = request vector for process Pi. If Requesti[j] = k, then process Pi
wants k instances of resource type Rj
i. If Requesti ≤ Needi, go to step ii. Otherwise, raise an error condition, since the process
has exceeded its maximum claim
ii. If Requesti ≤ Available, go to step iii. Otherwise Pi must wait, since the resources
are not available
iii. Pretend to allocate the requested resources to Pi by modifying the state as follows:
Available = Available − Requesti;
Allocationi = Allocationi + Requesti;
Needi = Needi − Requesti;
o If safe ⇒ the resources are allocated to Pi
o If unsafe ⇒ Pi must wait, and the old resource-allocation state is restored


18. Example of Banker’s Algorithm


 5 processes P0 through P4 ;
 3 resource types:
A (10 instances), B (5 instances), and C (7 instances)
 Snapshot at time T 0 :
        Allocation   Max     Available
        A B C        A B C   A B C
P0      0 1 0        7 5 3   3 3 2
P1      2 0 0        3 2 2
P2      3 0 2        9 0 2
P3      2 1 1        2 2 2
P4      0 0 2        4 3 3
 The content of the matrix Need is defined to be Max− Allocation
Need
ABC
P0 7 4 3
P1 1 2 2
P2 6 0 0
P3 0 1 1
P4 4 3 1
 The system is in a safe state since the sequence <P1, P3, P4, P2, P0> satisfies the safety
criteria

19. Example: P1 Requests (1,0,2)


 Check that Request ≤ Available (that is, (1,0,2) ≤ (3,3,2) ⇒ true)
        Allocation   Need    Available
        A B C        A B C   A B C
P0      0 1 0        7 4 3   2 3 0
P1      3 0 2        0 2 0
P2      3 0 2        6 0 0
P3      2 1 1        0 1 1
P4      0 0 2        4 3 1
 Executing the safety algorithm shows that the sequence <P1, P3, P4, P0, P2> satisfies the
safety requirement
 Can request for ( 3,3,0 ) by P4 be granted?
 Can request for ( 0,2,0 ) by P0 be granted?

20. Deadlock Detection


 Allow system to enter deadlock state
 Detection algorithm
 Recovery scheme


21. Single Instance of Each Resource Type


 Maintain wait-for graph
 Nodes are processes
 Pi → P j if Pi is waiting for P j
 Periodically invoke an algorithm that searches for a cycle in the graph. If there is a
cycle, there exists a deadlock
 An algorithm to detect a cycle in a graph requires an order of n² operations, where n
is the number of vertices in the graph
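
A short sketch (illustrative, not from the slides) of that cycle search, using depth-first search
over an assumed adjacency-matrix representation of the wait-for graph.

#include <stdbool.h>

#define N 4                                /* number of processes (assumed) */

/* waits_for[i][j] is true if Pi is waiting for Pj. */
bool waits_for[N][N];

static bool dfs(int p, bool visited[], bool on_stack[]) {
    visited[p] = on_stack[p] = true;
    for (int q = 0; q < N; q++) {
        if (!waits_for[p][q])
            continue;
        if (on_stack[q])                   /* back edge: a cycle, hence a deadlock */
            return true;
        if (!visited[q] && dfs(q, visited, on_stack))
            return true;
    }
    on_stack[p] = false;
    return false;
}

bool deadlocked(void) {
    bool visited[N] = { false }, on_stack[N] = { false };
    for (int p = 0; p < N; p++)
        if (!visited[p] && dfs(p, visited, on_stack))
            return true;
    return false;
}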

22. Resource-Allocation Graph and Wait-for Graph

23. Several Instances of a Resource Type


 Available: A vector of length m indicates the number of available resources of each
type.
 Allocation: An n ×m matrix defines the number of resources of each type currently
allocated to each process.
 Request: An n × m matrix indicates the current request of each process. If
Request[i, j] = k, then process Pi is requesting k more instances of resource type
Rj.


24. Detection Algorithm


i. Let Work and Finish be vectors of length m and n, respectively. Initialize:
(a) Work = Available
(b) For i = 1, 2, …, n, if Allocationi ≠ 0, then Finish[i] = false; otherwise,
Finish[i] = true
ii. Find an index i such that both:
(a) Finish[i] = false
(b) Requesti ≤ Work
If no such i exists, go to step iv
iii. Work = Work + Allocationi
Finish[i] = true
go to step ii
iv. If Finish[i] = false for some i, 1 ≤ i ≤ n, then the system is in a deadlocked state.
Moreover, if Finish[i] = false, then Pi is deadlocked
The algorithm requires an order of O(m × n²) operations to detect whether the system is
in a deadlocked state
 Example of Detection Algorithm
 Five processes P0 through P4 ; three resource types
A (7 instances), B (2 instances), and C (6 instances)
 Snapshot at time T 0 :
        Allocation   Request   Available
        A B C        A B C     A B C
P0      0 1 0        0 0 0     0 0 0
P1      2 0 0        2 0 2
P2      3 0 3        0 0 0
P3      2 1 1        1 0 0
P4      0 0 2        0 0 2
 Sequence <P0, P2, P3, P1, P4> will result in Finish[i] = true for all i
 P2 requests an additional instance of type C
Request
ABC
P0 0 0 0
P1 2 0 1
P2 0 0 1
P3 1 0 0
P4 0 0 2
 State of system?
o Can reclaim resources held by process P0, but insufficient resources to fulfill
other processes' requests
o Deadlock exists, consisting of processes P1 , P2 , P3 , and P4
 Detection-Algorithm Usage


 When, and how often, to invoke depends on:


o How often a deadlock is likely to occur?
o How many processes will need to be rolled back?
 one for each disjoint cycle
 If detection algorithm is invoked arbitrarily, there may be many cycles in the
resource graph and so we would not be able to tell which of the many deadlocked
processes “caused” the deadlock

25. Recovery from Deadlock: Process Termination


 Abort all deadlocked processes
 Abort one process at a time until the deadlock cycle is eliminated
 In which order should we choose to abort?
 Priority of the process
 How long process has computed, and how much longer to completion
 Resources the process has used
 Resources process needs to complete
 How many processes will need to be terminated
 Is process interactive or batch?

26. Recovery from Deadlock: Resource Preemption


 Selecting a victim – minimize cost
 Rollback – return to some safe state, restart process for that state
 Starvation – same process may always be picked as victim, include number of rollback
in cost factor


Tutorial 06: Deadlocks


1. What is the optimistic assumption made in the deadlock-detection algorithm? How could this
assumption be violated?

The optimistic assumption is that there will not be any form of circular wait in terms of
resources allocated and processes making requests for them. This assumption could be
violated if a circular wait does indeed occur in practice.

2. Consider the following snapshot of a system:

Process Allocation Max Available


ABCD ABCD ABCD
P0 0012 0012 1520
P1 1000 1750
P2 1354 2356
P3 0632 0652
P4 0014 0656

Answer the following questions using the banker's algorithm:


a) What is the content of the matrix Need?
b) Is the system in a safe state?
c) If a higher-priority request for (0,4,2,0) resources is made by process P1, can the request be
granted immediately?

a) Since Need=Max− Allocation , the content of Need is

Process Allocation Max Need


ABCD ABCD ABCD
P0 0012 0012 0000
P1 1000 1750 0750
P2 1354 2356 1002
P3 0632 0652 0020
P4 0014 0656 0642

b) Yes, the sequence <P0, P2, P1, P3, P4> satisfies the safety requirement.


c) Yes. Since
i. (0,4,2,0) ≤ Need1 = (0,7,5,0)
ii. (0,4,2,0) ≤ Available = (1,5,2,0)
iii. The new system state after the allocation is made is

Process   Allocation   Max    Need   Available
          ABCD         ABCD   ABCD   ABCD
P0        0012         0012   0000   1100
P1        1420         1750   0330
P2        1354         2356   1002
P3        0632         0652   0020
P4        0014         0656   0642

and the sequence <P0, P2, P1, P3, P4> satisfies the safety requirement.

3. Explain the four conditions that must hold for deadlock to occur.

 Mutual exclusion: Only one process at a time can use a resource. If another
process requests that resource, the requesting process must be delayed until the resource
has been released.
 Hold and wait: A process holding at least one resource is waiting to acquire additional
resources held by other processes.
 No preemption: Resources cannot be preempted; that is a resource can be released only
voluntarily by the process holding it, after that process has completed its task.
 Circular wait: There exists a set { P0, P1, …, Pn } of waiting processes such that P0
is waiting for a resource that is held by P1, P1 is waiting for a resource that is held
by P2, …, Pn−1 is waiting for a resource that is held by Pn, and Pn is waiting
for a resource that is held by P0.

4. Consider a system consisting of four resources of the same type that are shared by three
processes, each of which needs at most two resources. Show that the system is deadlock-free.

Suppose the system is deadlocked. This implies that each process is holding one resource and
is waiting for one more. Since there are three processes and four resources, one process must
be able to obtain two resources. This process requires no more resources and therefore will
return its two resources when done, allowing another waiting process to obtain its second
resource and complete. Hence no deadlock can occur, which contradicts the assumption.


Past Examination Paper Questions

(Trimester 1, 2007/2008)
Question 1
a) What are the three main purposes of an operating system? (3 Marks)

b) Operating systems control processes and allocate resources to processes. In order to control
resources and processes, the OS needs to know the current status of each entity that it is
managing. So the operating system constructs and maintains four types of tables. Explain
each of the tables. (4 Marks)

c) State the advantages and disadvantages of using Virtual Machine. (3 Marks)

d) A computer has a cache, main memory, and a disk used for virtual memory. If a referenced
word is in the cache, 20 ns are required to access it. If it is in main memory but not in the
cache, 60 ns are needed to load it into the cache (this includes the time to originally check the
cache), and then the reference is started again. If the word is not in main memory, 12 ns are
required to fetch the word from disk, followed by 60 ns to copy it to the cache, and then the
reference is started again. The cache hit ratio is 0.9 and the main-memory hit ratio is 0.6.
What is the average time in ns required to access a referenced word on this system?
(5 Marks)

Question 2
a) List and briefly define three techniques for I/O operations. (3 Marks)

b) Direct memory access is used for high-speed I/O devices in order to avoid increasing the
CPU’s execution load.
i) How does the CPU interface with the device to coordinate the transfer? (2 Marks)
ii) The CPU is allowed to execute other programs while the DMA controller is transferring
data. Does this process interfere with the execution of the user programs? If so, describe
what forms of interference are caused. (2 Marks)

c) What are the main advantages of the microkernel approach to system design? (3 Marks)

d) What two advantages do threads have over multiple processes? What major disadvantage do
they have? Threads have two types of scheduling; distinguish between them. (5 Marks)


Question 3
a) Describe the actions a kernel takes to context switch between processes. (3 Marks)

b) Consider the following set of processes, with the length of processing and arrival times are
given in milliseconds:
Process Processing time Arrival time
P1 4 0
P2 5 2
P3 4 2
P4 5 6
P5 4 4
ii) Draw the Gantt charts illustrating the execution of these processes using FCFS,
Preemptive SJF and RR (quantum=3) scheduling. (3 Marks)
iii) Calculate the average waiting time for each algorithm and determine which algorithm is
the best and why? (3 Marks)

c) Consider a system with 10 printers, 2 plotters and 2 scanners with 4 processes, A, B, C and D
have the following requirements and allocation for these resources.

ALLOCATION                                    MAXIMUM
Process   Printer   Plotter   Scanner         Process   Printer   Plotter   Scanner
A         4         1         0               A         9         1         1
B         1         0         0               B         4         0         1
C         3         0         1               C         7         1         1
D         2         0         0               D         3         0         1

i) Construct a table for need and available. (2 Marks)


ii) Draw a Resource-Allocation Graph (RAG) for this system. (2 Marks)
iii) By using Banker’s Algorithm, determine whether the system is in a safe state or not.
(2 Marks)

(Trimester 3, 2007/2008)
Question 1
a. Describe two general roles of an operating system and elaborate why these roles are
important. (4 marks)

b. Multiprogramming and multitasking is the situation where more than one process is
apparently able to execute simultaneously. Describe how this is possible or achievable in a
computer system that has only ONE processor (CPU). (4 marks)

c. What is a process and state any three (3) attributes of a process. (4 marks)

d. State any three (3) relationships between threads and processes. (3 marks)


Question 2
a. What is the function of the ready queue? (2 marks)

b. Consider the following scenario where there are 4 processes arriving at different times and
having the CPU burst time as recorded in the table below.

Process ID Arrival Time CPU Burst Time


1 0 8
2 1 3
3 2 1
4 3 4

i. Draw the Gantt chart for the following algorithm


1. First Come First Serve (Non-preemptive) (2 marks)
2. Shortest Job First (Pre-emptive) (2 marks)

ii. Calculate the waiting time and turnaround time for every process under the Shortest Job
First (Pre-emptive) algorithm. (4 marks)

iii. What is the average turnaround time for processes running under the Shortest Job First
(Pre-emptive) algorithm? (1 mark)

c. Describe two differences between the short-term scheduler and the long-term scheduler with
respect to process management in operating systems. (4 marks)


Question 3
a. Assume the operating system detects that the system is deadlocked. What can it (the operating
system) do to recover from the deadlock? (3 marks)

b. What must Banker’s Algorithm know a priori in order to prevent deadlock? (2 marks)

c. Consider the following situation using the Banker’s Algorithm. Assuming the system has a
total of 12 instances of one device type.

Job No   Instances of Device Allocated   Maximum Required   Remaining Needs
1        5                               6
2        4                               7
3        2                               6
4        0                               2

Determine the remaining needs for each job in the system. (2 marks)

Determine if the system is safe or unsafe. If the system is in a safe state, list the sequence of
requests and releases that will make it possible for all processes to run till completion. If the
system is unsafe, show how it is possible for deadlock to occur. (4 marks)

d. Consider the following directed resource graph:

Is this system deadlocked? Why? (4 marks)


(Trimester 3, 2008/2009)
QUESTION 1
a) What are the TWO main functions of an operating system?
(2 marks)

b) What are the key difference between a trap and an interrupt?


(2 marks)

c) For each of the following system calls, give a condition that causes it to fail: fork, exec, and
unlink.
(3 marks)

d) Draw a diagram to indicate the THREE states that a process can be in and the transition
between them. Explain the transitions.
(3 marks)

e) Explain the difference between a monolithic kernel and a microkernel.


(5 marks)

QUESTION 3
a) Define deadlocks. List the strategies for dealing with them.
(5 marks)

b) Consider the following snapshot of a system:

Process Allocation Max Available


ABCD ABCD ABCD
P0 0012 0012 1520
P1 1000 1750
P2 1354 2356
P3 0632 0652
P4 0014 0656

Answer the following questions using the Banker’s Algorithm:

i) What is the content of the matrix Need?


ii) Is the system in a safe state?
iii) If a request from process P1 arrives for ( 0,4,2,0 ) can the request be granted
immediately? Show the system state after the allocation is made and the sequence that
satisfies the safety requirement.
(10 marks)


QUESTION 4
a) What are the FOUR conditions needed to avoid race conditions?
(4 marks)

b) Consider the following set of processes, with the length of the CPU-burst time given in
milliseconds:

Process Burst Time Priority


P1 10 3
P2 1 1
P3 2 3
P4 1 4
P5 5 2

The processes are assumed to have arrived in the order P1, P2, P3, P4, P5, all at time 0.

i) Draw four Gantt charts illustrating the execution of these processes using FCFS, SJF, a
non-preemptive priority (a smaller priority number implies a higher priority), and RR
(quantum=1) scheduling.
ii) What is the turnaround time of each process for each of the scheduling algorithms?
iii) What is the waiting time of each process for each of the scheduling algorithms in?
iv) Which of the schedules in (i) results in the minimal average waiting time (over all
processes)?
(9 marks)

c) What is the difference between preemptive and non-preemptive scheduling?


(2 marks)

(Trimester 2, 2009/2010)
QUESTION 1
(a) List five services provided by an operating system that are designed to make it more
convenient for users to use the operating system. In what cases would it be impossible for
user-level programs to provide these services? Explain. [5]

(b) Direct memory access is used for high-speed I/O devices in order to avoid increasing the
CPU’s execution load.
(i) How does the CPU interface with the device to coordinate the transfer?
(ii) The CPU is allowed to execute other programs while the DMA controller is transferring
data. Does this process interfere with the execution of the user programs? If so, describe
what forms of interference are caused. [4]

(c) What are the benefits of the microkernel OS design? [2]

(d) How is the dual-mode implemented in a dual-mode operating system? [1]


QUESTION 2
(a) What are the benefits of multithreading? [3]

(b) Explain the context switching operation. [3]

(c) Consider the following set of processes, with the length of the CPU-burst time given in
milliseconds:

Process Burst-time Priority


P1 10 3
P2 1 1
P3 2 3
P4 1 4
P5 5 2

The processes are assumed to have arrived in the order P1, P2, P3, P4, and P5 all at time 0.
i) Draw the Gantt charts illustrating the execution of these processes using SJF, and non-
preemptive priority scheduling (a smaller number implies high priority). [2]
ii) What is the turnaround time of each process for each of the scheduling algorithms in part
I? [2]
iii) What is the waiting time of each process for each of the scheduling algorithms in part I?
[2]

(Trimester 1, 2010/2011)
QUESTION 1:
a. (i) Name any FOUR resources of a computer?
(ii) Why an Operating system is called a resource allocator?
(2 marks)

b. List any SIX services provided by an operating system. (3 marks)

c. State the reason for considering direct memory access (DMA) as an efficient mechanism for
performing I/O data transfer. (2 marks)

d. Describe briefly THREE general methods used to pass parameters to the operating system
during system calls. (3 marks)


QUESTION 2:
a. Name and briefly describe the different states that a process can exist in at any given time.
(4 marks)

b. State the THREE conditions that must be satisfied in order to solve the critical section
problem? (3 marks)

c. Write the short methods that implement the simple semaphore operations wait() and
signal() on global variable S. (3 marks)

QUESTION 3:
a. State the difference between preemptive and non-preemptive scheduling?
(2 marks)

b. Consider the following set of processes, with the length of the CPU-burst and arrival time
given in milliseconds:

Process Burst-time Arrival time


P1 10 0
P2 2 1
P3 6 6
P4 3 4
P5 8 3

Answer the following with respect to Pre-emptive Shortest Job First (SJF) and Round
Robin (RR) (time quantum = 3 milliseconds) scheduling algorithms.

(i) Draw the Gantt charts illustrating the execution of the given processes for each of the
above scheduling algorithms. (3+3=6 marks)

(ii) Identify the turnaround time and waiting time of process P5 for each of the above
scheduling algorithms. (1+1=2 marks)


QUESTION 4:
a. List the FOUR conditions that must hold for deadlock to occur. (2 marks)

b. Consider the following system with FIVE processes and FOUR resource types.

 Resource A has 8 instances


 Resource B has 8 instances
 Resource C has 4 instances
 Resource D has 6 instances

The snapshot of the system is given as:

Process   Allocation     Max
          A B C D        A B C D
P0        1 1 1 1        6 2 1 2
P1        1 1 0 0        1 7 4 0
P2        1 1 1 1        2 4 1 4
P3        1 2 2 1        6 6 4 2
P4        1 1 0 1        4 2 0 3

Answer the following questions using the Banker’s Algorithm:

(i) What is the content of the vector Available? (1 mark)

(ii) What is the content of the matrix Need? (2 marks)

(iii) Find a sequence of processes that will get the system into a safe state.
(5 marks)

(Trimester 3, 2010/2011)
QUESTION 2:
a. What are the THREE issues associated with process creation in an operating system? List
down the possibilities of each issue. (5 marks)

b. State and briefly explain the TWO general approaches for thread cancellation. (2 marks)

c. Write a short program that illustrates the use of the semaphore mutex to provide mutual
exclusion for a critical section. Assume mutex is initialized to the value 1. (3 marks)


QUESTION 3:
a. Why is it important for the scheduler to distinguish I/O-bound programs from CPU-bound
programs? (3 marks)

b. Consider the following set of processes, with the length of the CPU-burst time and arrival
time given in milliseconds:

Process Burst-time Arrival time


P1 8 0
P2 3 2
P3 4 5
P4 4 6
P5 10 3

(i) Draw the Gantt charts illustrating the execution of these processes using Pre-emptive
Shortest Job First (SJF) and Round Robin (RR) (quantum=4) scheduling. (5 marks)

(ii) What is the turnaround time of P5 for each of the given scheduling algorithms (SJF
and RR)? (1 mark)

(iii) What is the waiting time of P3 for each of the given scheduling algorithms (SJF and
RR)? (1 mark)


QUESTION 4:
a. Draw an example of a deadlock system using a resource allocation graph involving
processes P1 , P2 and P3 . (2 marks)

b. Consider the following system with FIVE processes and FOUR resource types.

 A has 9 instances
 B has 7 instances
 C has 4 instances
 D has 5 instances

The snapshot of the system is given as:

Process   Allocation   Max
          ABCD         ABCD
P0        1111         5212
P1        1100         1514
P2        1111         2413
P3        1221         6735
P4        1101         4213

Answer the following questions using the Banker’s Algorithm:

(i) What is the content of the vector Available? (1 mark)

(ii) What is the content of the matrix Need? (2 marks)

(iii) Find a sequence of processes that will get the system into a safe state. Write the steps in
detail. (5 marks)

(Trimester 1, 2011/2012)
QUESTION 1:
a. How do asymmetric and symmetric clustering methods provide high availability service?
(2 marks)

b. Draw the diagram of MS-DOS layer structure and state any ONE weakness that can be found
in the MS-DOS layer structure. (3 marks)

c. Draw the process state diagram stating the five different states that a process can be in.
(2 marks)

d. Provide any TWO reasons as to why partially executed processes can be swapped out and
later swapped in by the medium-term scheduler. (2 marks)


e. State and briefly explain any THREE benefits of multithreaded programming. (3 marks)

QUESTION 2:
a. The CPU scheduler selects a process from the processes in memory that are ready to execute
and allocates the CPU to that process, based on a certain scheduling algorithm. State the
FIVE optimization criteria that can be used for comparing the different CPU scheduling
algorithms. (2.5 marks)

b. Consider the following set of processes, with the length of the CPU-burst time and arrival
time given in milliseconds:

Process Burst-time Arrival time


P1 9 0
P2 3 1
P3 6 6
P4 4 4
P5 5 3

Answer the following with respect to the Pre-emptive Shortest Job First and Round Robin
(time quantum = 4 milliseconds) scheduling algorithms.

(i) Draw the Gantt charts illustrating the execution of the given processes for each of the
above scheduling algorithms. (3+4.5=7.5 marks)

(ii) Identify the turnaround time of process P5 and waiting time of process P3 for each
of the above scheduling algorithms. (2 marks)


QUESTION 3:
a. Consider the use of locks as a solution to the critical section problem. Provide pseudocode
to demonstrate how this is done. (2 marks)

b. (i) State any TWO basic operations that can be performed on a counting semaphore.
(1 mark)

(ii) State the difference between binary and counting semaphores. (1 mark)

c. Consider the following system with FIVE processes and FOUR resource types:

Resource Type Number of Instances


A 7
B 5
C 4
D 6

Consider the following snapshot of the system:

Process   Allocation     Max
          A B C D        A B C D
P0        1 0 1 0        6 4 3 4
P1        1 1 0 0        5 2 2 4
P2        1 2 0 1        4 4 2 4
P3        1 1 2 1        3 2 2 2
P4        1 0 1 1        7 4 4 6

Answer the following questions using the Banker’s Algorithm:

(i) What is the content of the vector Available? (0.5 marks)

(ii) What is the content of the matrix Need? (2.5 marks)

(iii) Find a sequence of processes that will get the system into a safe state. Show all the steps.
(5 marks)


(Trimester 1 Supplementary, 2011/2012)


QUESTION 1:
a. State any TWO advantages and TWO disadvantages of open-source operating systems.
(4 marks)

b. (i) State what do you mean by application programming interface (API).


(ii) Name any TWO common APIs available to the application programmers.
(2+2=4 marks)

c. State any TWO differences between short-term and long-term schedulers.


(4 marks)

QUESTION 2:
a. The CPU scheduler selects a process from the processes in memory that are ready to execute,
and allocates the CPU to that process based on a certain scheduling algorithm. State the
FOUR circumstances under which a CPU scheduling decision is required. (2 marks)

b. The dispatcher is the module involved in the CPU-scheduling function. State any TWO functions
of the dispatcher. (2 marks)

c. Consider the following set of processes, with the length of the CPU-burst time and arrival
time given in milliseconds.

Process Burst-time Arrival time


P1 9 0
P2 6 3
P3 2 2
P4 2 6
P5 4 5

(i) Draw the Gantt chart illustrating the execution of the above processes using Pre-
emptive Shortest Job First scheduling algorithm and calculate the turnaround time for
the processes P1 and P5 . (4 marks)

(ii) Draw the Gantt chart illustrating the execution of the above processes using Round
Robin (quantum = 3) scheduling algorithm and calculate the waiting time for the
processes P2 and P3 . (4 marks)


QUESTION 3:
a. Consider the use of semaphores as a solution to a critical section problem by providing
mutual exclusion. Provide the pseudocode to demonstrate how this is done with the wait and
signal methods. (2 marks)

b. State what you mean by RACE condition as applied to process synchronization. (2 marks)

c. Consider the following system with FIVE processes ( P0 , P1 , P2 , P3 , P4 )


and FOUR resource types (A, B, C, D).

 A has 8 instances
 B has 8 instances
 C has 8 instances
 D has 10 instances

The snapshot of the system is given as below:

Process    Allocation    Max
           A B C D       A B C D
P0         0 1 6 0       2 4 6 1
P1         0 2 0 6       1 3 6 6
P2         1 0 1 1       8 8 2 6
P3         1 2 0 0       2 2 0 3
P4         5 2 1 0       7 7 4 6

Answer the following questions using the Banker’s Algorithm for deadlock avoidance:

(i) What is the content of the vector Available? (0.5 mark)

(ii) What is the content of the matrix Need? (2.5 marks)

(iii) Identify a sequence of processes that will get the system into a safe state. Show all the
steps. (5 marks)


(Trimester 3, 2011/2012)
QUESTION 1
a) What are the three main purposes of an operating system?
[3 marks]

b) Describe three general methods for passing parameters to the operating system.
[3 marks]

c) List and explain any FIVE elements of the process control block (PCB).
[5 marks]

d) Describe the actions taken by a kernel to context-switch between processes.


[4 marks]

QUESTION 2
a. What three conditions must be satisfied in order to solve the critical section problem?
[3 marks]

b. Consider the following set of processes, with the length of the CPU-burst time and the arrival
time given in milliseconds:

Process    Burst-time    Arrival time
P1         9             0
P2         3             2
P3         6             7
P4         4             5
P5         5             4

i) Draw the Gantt charts illustrating the execution of these processes using First-Come-
First-Served (FCFS), Pre-emptive Shortest Job First (SJF) and Round Robin (RR)
(quantum = 4) scheduling.
[9 marks]

ii) What is the turnaround time of P5, for each of the scheduling algorithms?
[1.5 marks]

iii) What is the waiting time of P5, for each of the scheduling algorithms?
[1.5 marks]
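
For checking a hand-drawn chart, the following small C sketch (not required by the question)
simulates Round Robin with quantum = 4 on the data above and prints each time slice together with
the turnaround and waiting times; FCFS and pre-emptive SJF can be verified by hand or with a
similar simulation. Processes arriving during a slice are assumed to join the ready queue ahead of
the pre-empted process.

#include <stdio.h>

#define N 5     /* number of processes */

int main(void) {
    int burst[N]   = {9, 3, 6, 4, 5};   /* P1..P5 burst times from the table above   */
    int arrival[N] = {0, 2, 7, 5, 4};   /* P1..P5 arrival times from the table above */
    int quantum    = 4;

    int remaining[N], completion[N], in_queue[N] = {0};
    int queue[64], head = 0, tail = 0;
    int time = 0, done = 0;

    for (int i = 0; i < N; i++) remaining[i] = burst[i];

    while (done < N) {
        /* admit every arrived process that has not yet been queued */
        for (int i = 0; i < N; i++)
            if (!in_queue[i] && remaining[i] > 0 && arrival[i] <= time) {
                queue[tail++] = i;
                in_queue[i] = 1;
            }

        if (head == tail) { time++; continue; }   /* CPU idle: wait for next arrival */

        int p = queue[head++];
        int slice = remaining[p] < quantum ? remaining[p] : quantum;
        printf("| P%d : %d -> %d ", p + 1, time, time + slice);
        time += slice;
        remaining[p] -= slice;

        /* processes that arrived during the slice join before the pre-empted one */
        for (int i = 0; i < N; i++)
            if (!in_queue[i] && remaining[i] > 0 && arrival[i] <= time) {
                queue[tail++] = i;
                in_queue[i] = 1;
            }

        if (remaining[p] == 0) { completion[p] = time; done++; }
        else queue[tail++] = p;                   /* requeue at the back of the queue */
    }
    printf("|\n");

    for (int i = 0; i < N; i++) {
        int tat = completion[i] - arrival[i];     /* turnaround = completion - arrival */
        printf("P%d: turnaround = %d, waiting = %d\n", i + 1, tat, tat - burst[i]);
    }
    return 0;
}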


QUESTION 3
a. The following program segment is used to manage a finite number of instances of an
available resource. The maximum number of resources and the number of available resources
are declared as follows:

#define MAX_RESOURCES 5
int available_resources = MAX_RESOURCES;

When a process wishes to obtain a number of resources, it invokes the decrease() function:

/* decrease available_resources by count resources */
/* return 0 if sufficient resources available,     */
/* otherwise return -1                             */

int decrease(int count) {
    if (available_resources < count)
        return -1;
    else {
        available_resources -= count;
        return 0;
    }
}

When a process wants to return a number of resources, it calls the increase() function:

/* increase available_resources by count */

int increase(int count) {
    available_resources += count;
    return 0;
}

The preceding program segment produces a race condition. Do the following:

i) Identify the data involved in the race condition.


[1 mark]
ii) Identify the location (or locations) in the code where the race condition occurs.
[2 marks]
iii) Using a semaphore, fix the race condition.
[3 marks]
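
One possible answer sketch for part (iii), assuming a POSIX binary semaphore named mutex that is
initialised once elsewhere with sem_init(&mutex, 0, 1): every read-modify-write of
available_resources is wrapped between sem_wait() and sem_post(), which removes the race condition
on that variable.

#include <semaphore.h>

#define MAX_RESOURCES 5

static int available_resources = MAX_RESOURCES;
static sem_t mutex;              /* binary semaphore guarding available_resources; */
                                 /* initialise once with sem_init(&mutex, 0, 1)    */

int decrease(int count) {
    int result;
    sem_wait(&mutex);            /* enter critical section */
    if (available_resources < count)
        result = -1;             /* not enough resources   */
    else {
        available_resources -= count;
        result = 0;
    }
    sem_post(&mutex);            /* leave critical section */
    return result;
}

int increase(int count) {
    sem_wait(&mutex);
    available_resources += count;
    sem_post(&mutex);
    return 0;
}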


b. Consider the following system with FIVE processes and FOUR resource types:

 A has 8 instances
 B has 8 instances
 C has 8 instances
 D has 8 instances

The snapshot of the system is given as:


Process    Allocation    Max
           A B C D       A B C D
P0         0 1 6 0       2 4 7 1
P1         0 2 0 6       3 6 8 8
P2         1 0 1 1       2 1 1 2
P3         1 2 0 0       3 3 1 1
P4         5 2 1 0       7 7 4 6

Answer the following questions using the Banker’s Algorithm:

(i) What is the content of the vector Available?


[1.5 marks]

(ii) What is the content of the matrix Need?


[2.5 marks]

(iii) Find a sequence of processes that will get the system into a safe state, if there is any.
[5 marks]

(Trimester 1, 2012/2013)
QUESTION 1:
a. Briefly explain the relationship between an application programming interface (API),
system-call interface, and system calls. (3 marks)

b. State any TWO advantages and ONE disadvantage of using a microkernel approach.
(3 marks)

c. List and briefly explain the FOUR major categories of the benefits of multithreaded
programming. (4 marks)

d. State any TWO reasons for considering direct memory access (DMA) as an efficient
mechanism for performing I/O. (2 marks)


QUESTION 2:
a. Briefly explain the concept of a context switch with respect to process scheduling. (2 marks)

b. State any TWO situations in which a parent process may terminate its children processes.
(2 marks)

c. Consider the following set of processes:

Process    Burst-time    Arrival time
P1                       4
P2                       0
P3                       7
P4                       9
P5                       3

(i) Complete the Burst-time column in the table given above, based on the following Gantt
chart, which is given for First-Come-First-Served (FCFS) CPU scheduling algorithm.
(1 mark)

(ii) Draw the Gantt charts illustrating the execution of these processes using Pre-emptive
Shortest Job First (SJF) and Round Robin (RR) (quantum=4) CPU scheduling.
(5 marks)

(iii) What is the turnaround time of process P3, and the waiting time of process P4, for the
SJF and RR algorithms? (2 marks)


QUESTION 3:
a. Briefly describe the dining-philosophers problem and how it relates to the process
synchronization problem in operating systems. (3 marks)
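
As a rough illustration of the problem's structure (the naive solution, not a complete answer):
each of the five philosophers needs the chopstick on the left and the one on the right, each
chopstick can be modelled as a binary semaphore, and if all five pick up their left chopstick at
the same time a circular wait, i.e. a deadlock, occurs. The philosopher() thread and the three
rounds below are made up for this example.

#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

#define N 5                          /* five philosophers, five chopsticks */

static sem_t chopstick[N];           /* one binary semaphore per chopstick */
static int id[N];

static void *philosopher(void *arg) {
    int i = *(int *)arg;
    for (int round = 0; round < 3; round++) {
        /* think ... */
        sem_wait(&chopstick[i]);             /* pick up left chopstick; if every      */
        sem_wait(&chopstick[(i + 1) % N]);   /* philosopher holds a left chopstick,   */
                                             /* waiting here forever is the deadlock  */
        printf("Philosopher %d is eating\n", i);
        sem_post(&chopstick[(i + 1) % N]);   /* put down right chopstick */
        sem_post(&chopstick[i]);             /* put down left chopstick  */
    }
    return NULL;
}

int main(void) {
    pthread_t t[N];
    for (int i = 0; i < N; i++) sem_init(&chopstick[i], 0, 1);
    for (int i = 0; i < N; i++) { id[i] = i; pthread_create(&t[i], NULL, philosopher, &id[i]); }
    for (int i = 0; i < N; i++) pthread_join(t[i], NULL);
    return 0;
}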

b. State the difference between deadlock prevention and deadlock avoidance methods.
(2 marks)

c. Consider the following system with FIVE processes and FOUR resource types:

Resource Type    Number of Instances
A                10
B                10
C                10
D                10

The snapshot of the system is given below:

Process    Allocation    Max
           A B C D       A B C D
P0         2 4 2 1       3 5 9 1
P1         2 1 1 1       5 4 1 1
P2         1 1 3 1       1 6 3 2
P3         3 2 1 3       4 3 1 3
P4         1 1 2 3       6 1 2 8

Answer the following questions using the Banker’s Algorithm:

(i) What is the content of the vector Available? (1 mark)


(ii) What is the content of the matrix Need? (1 mark)
(iii) Find a sequence of processes that will get the system into a safe state, if there is any.
Show all the required steps. (5 marks)


(Trimester 1 Supplementary, 2012/2013)


QUESTION 1:
a. (i) State what is meant by ‘booting’ the computer system.
(ii) Identify the function of the bootstrap program.
(iii) Where is the bootstrap program normally stored?
(3 × 1=3 marks)

b. State the role played by the following in a computer system.


(i) device controllers
(ii) device drivers
(2 × 1=2 marks)

c. State any THREE advantages of a modular design approach compared to a layered or
microkernel design approach.
(3 marks)

d. There are TWO different ways that commands can be processed by a command interpreter.

One approach is to allow the command interpreter to contain the code needed to execute the
command. The other approach is to implement the commands through system programs.

State any ONE advantage and any ONE disadvantage of using each of the approaches.
(2 × 2=4 marks)


QUESTION 2:
a. State the role played by the dispatcher module in CPU scheduling. State the THREE tasks
involved in performing that role. (2 marks)

b. State the effect of the following on the performance of a Round Robin (RR) scheduling
algorithm: (2 marks)
(i) Time quantum size that is too long.
(ii) Time quantum size that is too short.

c. Consider the following set of the processes:

Process    Burst-time    Arrival time
P1         6             5
P2         6             2
P3         9             0
P4         5             8
P5         3             4

(i) Draw the Gantt charts illustrating the execution of these processes using the following
CPU scheduling algorithms. (3 × 2=6 marks)
 First-Come-First-Served (FCFS)
 Pre-emptive Shortest Job First (SJF)
 Round Robin (RR) (quantum = 4)

(ii) What is the turnaround time of P5 , for the SJF and RR algorithms? (1 mark)

(iii) What is the waiting time of P3 , for the SJF and RR algorithms?
(1 mark)


QUESTION 3:
a. (i) What do you mean by the term ‘busy waiting’ with respect to process synchronization?

(ii) State how the semaphore operations can be modified to overcome the disadvantage of
busy waiting. (1+2=3 marks)
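
The usual textbook answer to (ii): give each semaphore a waiting queue, let wait() block the
calling process (instead of spinning) when the semaphore is unavailable, and let signal() wake one
process from that queue. As a hedged sketch only, the same idea expressed with POSIX condition
variables; the type name blocking_sem and the bsem_* function names are made up for this example.

#include <pthread.h>

/* A counting semaphore that blocks instead of busy waiting: a waiting
   thread sleeps on a condition variable until it is signalled.        */
typedef struct {
    int value;
    pthread_mutex_t lock;
    pthread_cond_t  not_zero;
} blocking_sem;

void bsem_init(blocking_sem *s, int value) {
    s->value = value;
    pthread_mutex_init(&s->lock, NULL);
    pthread_cond_init(&s->not_zero, NULL);
}

void bsem_wait(blocking_sem *s) {
    pthread_mutex_lock(&s->lock);
    while (s->value == 0)                          /* no spinning: the thread is   */
        pthread_cond_wait(&s->not_zero, &s->lock); /* blocked here until signalled */
    s->value--;
    pthread_mutex_unlock(&s->lock);
}

void bsem_signal(blocking_sem *s) {
    pthread_mutex_lock(&s->lock);
    s->value++;
    pthread_cond_signal(&s->not_zero);             /* wake up one waiting thread */
    pthread_mutex_unlock(&s->lock);
}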

b. (i) What does a claim edge signify in a resource-allocation graph?

(ii) How will you represent it in the graph? (1+1=2 marks)

c. Consider the following system with FIVE processes (P0, P1, P2, P3, and P4) and FOUR resource
types (A, B, C, and D).

 A has 10 instances
 B has 10 instances
 C has 10 instances
 D has 10 instances

The snapshot of the system is given below:

Process    Allocation    Max
           A B C D       A B C D
P0         0 1 2 2       5 1 5 3
P1         2 2 2 2       8 7 9 8
P2         2 1 2 2       3 2 3 2
P3         2 2 2 2       2 4 4 2
P4         2 3 1 2       9 5 3 2

Answer the following questions using the Banker’s Algorithm:

(i) What is the content of the vector Available? (1 mark)

(ii) What is the content of the matrix Need? (1 mark)

(iii) Find a sequence of processes that will get the system into a safe state, if there is any.
Show all the required steps. (5 marks)
