LECTURE NOTES
ON
OPERATING SYSTEMS
Mrs. N.HEMALATHA
ASST.PROFESSOR
Course Objective:
• To make the students understand the basic operating system concepts such as processes, threads,
scheduling, synchronization, deadlocks, memory management, file and I/O subsystems and protection.
• To acquaint students with the class of abstractions afforded by general-purpose operating systems that aid the development of user applications.
Learning Outcome:
UNIT I
Operating Systems Overview: Operating system functions, Operating system structure, operating
systems Operations, protection and security, Kernel data Structures, Computing Environments, Open-
Source Operating Systems
Operating System Structure: Operating System Services, User and Operating-System Interface,
systems calls, Types of System Calls, system programs, operating system structure, operating system
debugging, System Boot.
Processes: Process concept, process Scheduling, Operations on processes, Inter process
Communication, Examples of IPC systems.
UNIT II
Threads: overview, Multicore Programming, Multithreading Models, Thread Libraries, Implicit
threading, Threading Issues.
Process Synchronization: The critical-section problem, Peterson‘s Solution, Synchronization
Hardware, Mutex Locks, Semaphores, Classic problems of synchronization, Monitors,
Synchronization examples, Alternative approaches.
CPU Scheduling: Scheduling-Criteria, Scheduling Algorithms, Thread Scheduling, Multiple-
Processor Scheduling, Real-Time CPU Scheduling, Algorithm Evaluation.
UNIT III
Memory Management: Swapping, contiguous memory allocation, segmentation, paging, structure of
the page table.
Virtual memory: demand paging, page-replacement, Allocation of frames, Thrashing, Memory-
Mapped Files, Allocating Kernel Memory
Deadlocks: System Model, deadlock characterization, Methods of handling Deadlocks, Deadlock
prevention, Detection and Avoidance, Recovery from deadlock.
UNIT IV
Mass-storage structure: Overview of Mass-storage structure, Disk structure, Disk attachment, Disk
scheduling, Swap-space management, RAID structure, Stable-storage implementation.
File system Interface: The concept of a file, Access Methods, Directory and Disk structure, File
system mounting, File sharing, Protection.
File system Implementation: File-system structure, File-system Implementation, Directory
Implementation, Allocation Methods, Free-Space management.
UNIT V
I/O systems: I/O Hardware, Application I/O interface, Kernel I/O subsystem, Transforming I/O
requests to Hardware operations.
Protection: Goals of Protection, Principles of Protection, Domain of protection, Access Matrix,
Implementation of Access Matrix, Access control, Revocation of Access Rights, Capability- Based
systems, Language – Based Protection
Security: The Security problem, Program threats, System and Network threats, Cryptography as a
security tool, User authentication, Implementing security defenses, Firewalling to protect systems and
networks, Computer–security classifications.
Text Books:
1. Operating System Concepts, Abraham Silberschatz, Peter B. Galvin, Greg Gagne, Ninth Edition, 2012, Wiley.
2. Operating Systems: Internals and Design Principles, Stallings, Sixth Edition, 2009, Pearson
Education.
Reference Books:
1. Modern Operating Systems, Andrew S Tanenbaum, Second Edition, PHI.
2. Operating Systems, S.Haldar, A.A.Aravind, Pearson Education.
3. Principles of Operating Systems, B.L.Stuart, Cengage learning, India Edition.
4. Operating Systems, A.S.Godbole, Second Edition, TMH.
5. An Introduction to Operating Systems, P.C.P. Bhatt, PHI.
6. Operating Systems, G.Nutt, N.Chaki and S.Neogy, Third Edition, Pearson Education.
7. Operating Systems, R.Elmasri, A.G.Carrick and D.Levine, McGraw Hill.
UNIT-1
Operating System Overview
Operating System Definition:
OS is a resource allocator
• Manages all resources
• Decides between conflicting requests for efficient and fair resource use
OS is a control program
• Controls execution of programs to prevent errors and improper use of the computer
Operating System Structure:
Multiprogramming needed for efficiency
• Single user cannot keep CPU and I/O devices busy at all times
• Multiprogramming organizes jobs (code and data) so CPU always has one to execute
• A subset of total jobs in system is kept in memory
• One job selected and run via job scheduling
• When it has to wait (for I/O for example), OS switches to another job
Timesharing (multitasking) is a logical extension in which the CPU switches jobs so frequently that users can interact with each job while it is running, creating interactive computing
• Response time should be < 1 second
• Each user has at least one program executing in memory: a process
• If several jobs are ready to run at the same time: CPU scheduling
• If processes don't fit in memory, swapping moves them in and out to run
• Virtual memory allows execution of processes not completely in memory
Memory Layout for a Multiprogrammed System
Computing Environments:
Client-Server Computing
o Dumb terminals supplanted by smart PCs
o Many systems now servers, responding to requests generated by clients
- Compute-server provides an interface for clients to request services (e.g., a database)
- File-server provides an interface for clients to store and retrieve files
Web-Based Computing
• Web has become ubiquitous
• PCs most prevalent devices
• More devices becoming networked to allow web access
• New category of devices to manage web traffic among similar servers: load balancers
• Client-side operating systems like Windows 95 have evolved into systems such as Linux and Windows XP, which can act as both clients and servers
Open-Source Operating Systems:
• Operating systems made available in source-code format rather than just binary closed-source
• Counter to the copy protection and Digital Rights Management (DRM) movement
• Started by the Free Software Foundation (FSF), which has the "copyleft" GNU Public License (GPL)
System Calls:
• Programming interface to the services provided by the OS
• Typically written in a high-level language (C or C++)
• Mostly accessed by programs via a high-level Application Program Interface (API) rather than direct system call use
• Three most common APIs are the Win32 API for Windows, the POSIX API for POSIX-based systems (including virtually all versions of UNIX, Linux, and Mac OS X), and the Java API for the Java virtual machine (JVM)
• Why use APIs rather than system calls?
CREC, Dept. of CSE Page 6
(Note that the system-call names used throughout this text are generic)
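A common answer is portability and convenience: a program written to the POSIX API compiles on any POSIX system, and the library routine hides the details of trapping into the kernel. The sketch below exercises the POSIX calls open, write, read, and close; the file path /tmp/api_demo.txt and the demo helper are illustrative assumptions, not part of the notes.

```c
/* Sketch: user code calls the POSIX API; the C library issues the
 * corresponding open/write/read/close system calls on its behalf. */
#include <assert.h>
#include <fcntl.h>
#include <string.h>
#include <unistd.h>

/* Write msg to path via the API, then read it back into buf.
 * Returns the number of bytes read back, or -1 on error. */
ssize_t roundtrip(const char *path, const char *msg, char *buf, size_t cap)
{
    int fd = open(path, O_CREAT | O_TRUNC | O_WRONLY, 0600);
    if (fd < 0) return -1;
    ssize_t w = write(fd, msg, strlen(msg)); /* write() system call */
    close(fd);
    if (w < 0) return -1;

    fd = open(path, O_RDONLY);
    if (fd < 0) return -1;
    ssize_t r = read(fd, buf, cap);          /* read() system call */
    close(fd);
    return r;
}

/* Demo: returns 1 if "hello" survives the write/read round trip. */
int demo_roundtrip(void)
{
    char buf[64] = {0};
    ssize_t r = roundtrip("/tmp/api_demo.txt", "hello", buf, sizeof buf);
    return r == 5 && strncmp(buf, "hello", 5) == 0;
}
```

On UNIX-like systems each API call above maps onto a system call of the same name; on other systems the names differ, which is exactly why programs target the API.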
• Status information
o Some ask the system for info - date, time, amount of available memory, disk
space, number of users
o Others provide detailed performance, logging, and debugging information
o Typically, these programs format and print the output to the terminal or other
output devices
o Some systems implement a registry - used to store and retrieve configuration
information
• File modification
o Text editors to create and modify files
o Special commands to search contents of files or perform transformations of the text
o Programming-language support - Compilers, assemblers, debuggers and interpreters sometimes provided
• Program loading and execution - Absolute loaders, relocatable loaders, linkage editors, and overlay-loaders, debugging systems for higher-level and machine language
• Communications - Provide the mechanism for creating virtual connections among processes, users, and computer systems
o Allow users to send messages to one another's screens, browse web pages, send electronic-mail messages, log in remotely, transfer files from one machine to another
Operating-System Debugging:
• Debugging is finding and fixing errors, or bugs
• OSes generate log files containing error information
• Failure of an application can generate a core dump file capturing memory of the process
• Operating system failure can generate a crash dump file containing kernel memory
• Beyond crashes, performance tuning can optimize system performance
• Kernighan's Law: "Debugging is twice as hard as writing the code in the first place. Therefore, if you write the code as cleverly as possible, you are, by definition, not smart enough to debug it."
• DTrace tool in Solaris, FreeBSD, Mac OS X allows live instrumentation on production systems
o Probes fire when code is executed, capturing state data and sending it to consumers of those probes
Operating System Generation:
• Operating systems are designed to run on any of a class of machines; the system must be configured for each specific computer site
• SYSGEN program obtains information concerning the specific configuration of the hardware system
• Booting – starting a computer by loading the kernel
• Bootstrap program – code stored in ROM that is able to locate the kernel, load it into memory, and start its execution
System Boot
• Operating system must be made available to hardware so hardware can start it
o Small piece of code – bootstrap loader, locates the kernel, loads it into memory, and starts it
o Sometimes two-step process where boot block at fixed location loads bootstrap loader
o When power initialized on system, execution starts at a fixed memory location
- Firmware used to hold initial boot code
Process Concept:
• An operating system executes a variety of programs:
o Batch system – jobs
o Time-shared systems – user programs or tasks
o Textbook uses the terms job and process almost interchangeably
• Process – a program in execution; process execution must progress in sequential fashion
• A process includes:
o program counter
o stack
o data section
The Process:
• Multiple parts
o The program code, also called text section
o Current activity including program counter, processor registers
o Stack containing temporary data
Process State:
• As a process executes, it changes state
o new: The process is being created
o running: Instructions are being executed
o waiting: The process is waiting for some event to occur
o ready: The process is waiting to be assigned to a processor
o terminated: The process has finished execution
Process Scheduling:
• Maximize CPU use, quickly switch processes onto CPU for time sharing
• Process scheduler selects among available processes for next execution on CPU
• Maintains scheduling queues of processes
Schedulers:
• Long-term scheduler (or job scheduler) – selects which processes should be brought into the ready queue
• Short-term scheduler (or CPU scheduler) – selects which process should be executed next and allocates CPU
o Sometimes the only scheduler in a system
• Short-term scheduler is invoked very frequently (milliseconds) (must be fast)
• Long-term scheduler is invoked very infrequently (seconds, minutes) (may be slow)
• The long-term scheduler controls the degree of multiprogramming
• Processes can be described as either:
o I/O-bound process – spends more time doing I/O than computations; many short CPU bursts
o CPU-bound process – spends more time doing computations; few very long CPU bursts
User Threads:
• Thread management done by user-level threads library
• Three primary thread libraries:
o POSIX Pthreads
o Win32 threads
o Java threads
Kernel Threads:
• Supported by the Kernel
• Examples
o Windows XP/2000
o Solaris
o Linux
o Tru64 UNIX
o Mac OS X
Multithreading Models:
• Many-to-One
• One-to-One
• Many-to-Many
Many-to-One
• Many user-level threads mapped to single kernel thread
One-to-One:
• Each user-level thread maps to kernel thread
• Examples
o Windows NT/XP/2000
o Linux
o Solaris 9 and later
Many-to-Many Model:
• Allows many user-level threads to be mapped to many kernel threads
• Allows the operating system to create a sufficient number of kernel threads
• Solaris prior to version 9
Thread Libraries:
• Thread library provides programmer with API for creating and managing threads
• Two primary ways of implementing
o Library entirely in user space
o Kernel-level library supported by the OS
Pthreads
• May be provided either as user-level or kernel-level
• A POSIX standard (IEEE 1003.1c) API for thread creation and synchronization
• API specifies behavior of the thread library; implementation is up to the developers of the library
• Common in UNIX operating systems (Solaris, Linux, Mac OS X)
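A minimal sketch of this API: pthread_create starts a thread running a given function, and pthread_join collects its return value. The square computation is just an illustrative workload, not something from the notes.

```c
/* Sketch: create one Pthread, pass it an argument, and collect its
 * return value with pthread_join. */
#include <assert.h>
#include <pthread.h>
#include <stdint.h>

static void *square(void *arg)
{
    long n = (long)(intptr_t)arg;       /* small integer smuggled via void* */
    return (void *)(intptr_t)(n * n);   /* result travels back through join */
}

/* Create a thread computing n*n and wait for its result. */
long run_square(long n)
{
    pthread_t tid;
    void *res = NULL;
    if (pthread_create(&tid, NULL, square, (void *)(intptr_t)n) != 0)
        return -1;                      /* thread creation failed */
    pthread_join(tid, &res);
    return (long)(intptr_t)res;
}
```

The API only specifies this behavior (IEEE 1003.1c); whether the threads are user-level or kernel-level underneath is up to the library implementation.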
Threading Issues:
• Semantics of fork() and exec() system calls
• Thread pools
• Thread-specific data
o A facility is needed to create data private to each thread
• Scheduler activations
Thread Cancellation:
• Terminating a thread before it has finished
• Two general approaches:
o Asynchronous cancellation terminates the target thread immediately.
o Deferred cancellation allows the target thread to periodically check if it should be cancelled.
Thread Pools:
• Create a number of threads in a pool where they await work
• Advantages:
o Usually slightly faster to service a request with an existing thread than create a new thread
o Allows the number of threads in the application(s) to be bound to the size of the pool
Scheduler Activations:
• Both M:M and Two-level models require communication to maintain the appropriate number of kernel threads allocated to the application
• Scheduler activations provide upcalls - a communication mechanism from the kernel to the thread library
• This communication allows an application to maintain the correct number of kernel threads
Lightweight Processes
• Intermediate data structure between a user thread and a kernel thread; to the user-thread library, an LWP appears as a virtual processor on which to schedule a user thread
Readers-Writers Problem
• The structure of a writer process:
do {
wait (wrt) ;
// writing is performed
signal (wrt) ;
} while (TRUE);
• The structure of a reader process:
do {
// entry section: the first reader executes wait (wrt)
// reading is performed
// exit section: the last reader executes signal (wrt)
} while (TRUE);
Dining-Philosophers Problem
Gantt charts from the FCFS scheduling example (burst times P1 = 24, P2 = 3, P3 = 3):
• Arrival order P1, P2, P3: P1 runs 0-24, P2 runs 24-27, P3 runs 27-30
• Arrival order P2, P3, P1: P2 runs 0-3, P3 runs 3-6, P1 runs 6-30
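Assuming the classic FCFS example with burst times P1 = 24, P2 = 3, P3 = 3 (which matches the completion times 24, 27, 30 and 3, 6, 30 above), the average waiting time under FCFS can be computed as:

```c
/* Average waiting time under FCFS: each process waits for the sum of the
 * bursts of the processes queued ahead of it. */
#include <assert.h>

double fcfs_avg_wait(const int bursts[], int n)
{
    int elapsed = 0, total_wait = 0;
    for (int i = 0; i < n; i++) {
        total_wait += elapsed;   /* time spent waiting by process i */
        elapsed += bursts[i];    /* process i now runs to completion */
    }
    return (double)total_wait / n;
}
```

Order P1, P2, P3 gives waits 0, 24, 27 (average 17); order P2, P3, P1 gives waits 0, 3, 6 (average 3), illustrating the convoy effect of FCFS.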
Contiguous Allocation
• Main memory is usually divided into two partitions:
o Resident operating system, usually held in low memory with interrupt vector
o User processes then held in high memory
o Each process contained in single contiguous section of memory
• Relocation registers used to protect user processes from each other, and from changing operating-system code and data
o Base register contains value of smallest physical address
o Limit register contains range of logical addresses – each logical address must be less than the limit register
o MMU maps logical address dynamically
• Multiple-partition allocation
o Degree of multiprogramming limited by number of partitions
o Hole – block of available memory; holes of various size are scattered throughout memory
o When a process arrives, it is allocated memory from a hole large enough to accommodate it
o Process exiting frees its partition; adjacent free partitions combined
o Operating system maintains information about:
a) allocated partitions b) free partitions (hole)
Dynamic Storage-Allocation Problem
• First-fit: Allocate the first hole that is big enough
• Best-fit: Allocate the smallest hole that is big enough; must search entire list, unless ordered by size
o Produces the smallest leftover hole
• Worst-fit: Allocate the largest hole; must also search entire list
o Produces the largest leftover hole
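The three strategies can be sketched as hole-selection functions; the hole list and request size used in the usage note below are illustrative assumptions.

```c
/* Sketch of the three placement strategies as selection functions over a
 * list of hole sizes; each returns the index of the chosen hole, or -1
 * if no hole is big enough. */
#include <assert.h>

int first_fit(const int holes[], int n, int request)
{
    for (int i = 0; i < n; i++)
        if (holes[i] >= request) return i;   /* first hole that fits */
    return -1;
}

int best_fit(const int holes[], int n, int request)
{
    int best = -1;
    for (int i = 0; i < n; i++)
        if (holes[i] >= request && (best < 0 || holes[i] < holes[best]))
            best = i;                        /* smallest hole that fits */
    return best;
}

int worst_fit(const int holes[], int n, int request)
{
    int worst = -1;
    for (int i = 0; i < n; i++)
        if (holes[i] >= request && (worst < 0 || holes[i] > holes[worst]))
            worst = i;                       /* largest hole that fits */
    return worst;
}
```

For holes of sizes 100, 500, 200, 300, 600 and a request of 212, first-fit picks the 500 KB hole, best-fit the 300 KB hole, and worst-fit the 600 KB hole.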
Fragmentation
• External Fragmentation – total memory space exists to satisfy a request, but it is not contiguous
• Internal Fragmentation – allocated memory may be slightly larger than requested memory; this size difference is memory internal to a partition, but not being used
Free Frames
Segmentation Architecture
• Logical address consists of a two tuple:
<segment-number, offset>
• Segment table – maps two-dimensional logical addresses to one-dimensional physical addresses; each table entry has:
o base – contains the starting physical address where the segment resides in memory
o limit – specifies the length of the segment
• Segment-table base register (STBR) points to the segment table's location in memory
• Segment-table length register (STLR) indicates number of segments used by a program; segment number s is legal if s < STLR
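The translation just described can be sketched directly; the base/limit values in the usage note are illustrative assumptions, and a trap is modelled as a -1 return.

```c
/* Sketch of segment-table translation for a logical address <s, d>:
 * physical address = base[s] + d, with traps (modelled as -1 here) when
 * s >= STLR or d >= limit[s]. */
#include <assert.h>

typedef struct { int base; int limit; } segment_entry;

long translate(const segment_entry table[], int stlr, int s, int d)
{
    if (s >= stlr) return -1;            /* illegal segment number */
    if (d >= table[s].limit) return -1;  /* offset past segment length: trap */
    return (long)table[s].base + d;
}
```

With a table where segment 0 has base 1400 and limit 1000, and segment 1 has base 6300 and limit 400, address <0, 100> maps to 1500, while <1, 500> and any segment number >= STLR trap.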
Page Fault
• If there is a reference to a page, the first reference to that page will trap to the operating system: a page fault
Number of page faults (for the example reference string): 9
Least Recently Used (LRU) Algorithm:
• Use past knowledge rather than future
• Replace the page that has not been used for the longest period of time
• Associate time of last use with each page
Number of page faults (for the example reference string): 12
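A count of 12 matches the common textbook reference string 7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1 with three frames, which is assumed here since the string itself is not shown. A sketch of LRU replacement:

```c
/* LRU page-replacement sketch: count faults for a reference string with
 * nframes frames, using a per-frame timestamp to track recency. */
#include <assert.h>

int lru_faults(const int refs[], int n, int nframes)
{
    int frames[16];      /* page held by each frame; nframes <= 16 assumed */
    int last_used[16];   /* time of last use, per frame */
    int faults = 0, used = 0;

    for (int t = 0; t < n; t++) {
        int hit = -1;
        for (int i = 0; i < used; i++)
            if (frames[i] == refs[t]) { hit = i; break; }
        if (hit >= 0) {
            last_used[hit] = t;            /* refresh recency on a hit */
            continue;
        }
        faults++;
        if (used < nframes) {              /* a free frame is available */
            frames[used] = refs[t];
            last_used[used] = t;
            used++;
        } else {
            int victim = 0;                /* evict least recently used */
            for (int i = 1; i < used; i++)
                if (last_used[i] < last_used[victim]) victim = i;
            frames[victim] = refs[t];
            last_used[victim] = t;
        }
    }
    return faults;
}
```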
• Reference bit
• Second-chance algorithm
o Generally FIFO, plus hardware-provided reference bit
o Clock replacement
o If page to be replaced has
- Reference bit = 0 -> replace it
- Reference bit = 1 -> set reference bit 0, leave page in memory; replace next page, subject to same rules
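Those rules can be sketched as one clock-replacement step; the three-frame size and the initial state in the demo helper are assumptions for illustration.

```c
/* Sketch of one second-chance (clock) replacement step: scan from the
 * clock hand; a reference bit of 1 is cleared (second chance given), a
 * bit of 0 marks the victim. NFRAMES = 3 is an assumed example size. */
#include <assert.h>

#define NFRAMES 3

typedef struct {
    int page[NFRAMES];     /* page loaded in each frame */
    int refbit[NFRAMES];   /* hardware-provided reference bit */
    int hand;              /* next frame the clock hand examines */
} clock_state;

/* Replace a page with newpage; returns the index of the victim frame. */
int clock_replace(clock_state *c, int newpage)
{
    for (;;) {
        if (c->refbit[c->hand] == 0) {   /* second chance exhausted */
            int victim = c->hand;
            c->page[victim] = newpage;
            c->refbit[victim] = 1;       /* freshly loaded page is referenced */
            c->hand = (victim + 1) % NFRAMES;
            return victim;
        }
        c->refbit[c->hand] = 0;          /* clear bit: give a second chance */
        c->hand = (c->hand + 1) % NFRAMES;
    }
}

/* Demo: frames hold pages 1, 2, 3 with reference bits 1, 0, 1. Frame 0
 * gets its second chance, so frame 1 is the victim. */
int demo_victim(void)
{
    clock_state c = { {1, 2, 3}, {1, 0, 1}, 0 };
    return clock_replace(&c, 9);
}
```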
Counting Algorithms
Slab Allocator
• Alternate strategy
• Slab is one or more physically contiguous pages
• Cache consists of one or more slabs
• Single cache for each unique kernel data structure
o Each cache filled with objects – instantiations of the data structure
o When cache created, filled with objects marked as free
• When structures stored, objects marked as used
• If slab is full of used objects, next object allocated from empty slab
o If no empty slabs, new slab allocated
o Benefits include no fragmentation, fast memory request satisfaction
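A toy sketch of the idea: one cache holding preallocated objects of a single structure type, each marked free or used. The structure, slab size, and function names are illustrative assumptions; a real slab allocator manages multiple slabs of physically contiguous pages.

```c
/* Toy sketch of a slab-style cache for a single kernel data structure:
 * objects are created with the cache and only flip between free/used. */
#include <assert.h>
#include <stddef.h>

#define SLAB_OBJS 8

typedef struct { int pid; int state; } task_obj;  /* assumed example struct */

static task_obj slab[SLAB_OBJS];  /* objects created when cache is created */
static int used[SLAB_OBJS];       /* 0 = marked free, 1 = marked used */

task_obj *cache_alloc(void)
{
    for (int i = 0; i < SLAB_OBJS; i++)
        if (!used[i]) { used[i] = 1; return &slab[i]; }
    return NULL;   /* slab full: a real allocator would take an empty slab */
}

void cache_free(task_obj *obj)
{
    used[obj - slab] = 0;   /* object simply returns to the free state */
}

int cache_in_use(void)
{
    int n = 0;
    for (int i = 0; i < SLAB_OBJS; i++) n += used[i];
    return n;
}
```

Because objects are preallocated at the right size, allocation is a flag flip: no fragmentation and fast satisfaction of requests, as noted above.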
Avoidance algorithms
• Single instance of a resource type
o Use a resource-allocation graph
• Multiple instances of a resource type
o Use the banker's algorithm
Resource-Allocation Graph Scheme
• Claim edge Pi → Rj indicates that process Pi may request resource Rj; represented by a dashed line
• Claim edge converts to request edge when a process requests a resource
• Request edge converted to an assignment edge when the resource is allocated to the process
• When a resource is released by a process, assignment edge reconverts to a claim edge
• Resources must be claimed a priori in the system
Banker’s Algorithm
• Multiple instances
• Each process must a priori claim maximum use
• When a process requests a resource it may have to wait
• When a process gets all its resources it must return them in a finite amount of time
Let n = number of processes, and m = number of resource types.
• Available: Vector of length m. If Available[j] = k, there are k instances of resource type Rj available
• Max: n x m matrix. If Max[i,j] = k, then process Pi may request at most k instances of resource type Rj
• Allocation: n x m matrix. If Allocation[i,j] = k, then Pi is currently allocated k instances of Rj
• Need: n x m matrix. If Need[i,j] = k, then Pi may need k more instances of Rj to complete its task
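The safety algorithm over these structures can be sketched as below. The 5-process, 3-resource bounds and the example state in the usage note are assumptions (the common textbook state with Available = (3, 3, 2)), not values from the notes.

```c
/* Sketch of the banker's safety algorithm: the state is safe if all
 * processes can be ordered so each one's Need fits in Work, releasing
 * its Allocation when it finishes. NPROC/NRES are assumed example sizes. */
#include <assert.h>

#define NPROC 5
#define NRES  3

int is_safe(const int available[NRES],
            const int alloc[NPROC][NRES],
            const int need[NPROC][NRES])
{
    int work[NRES], finish[NPROC] = {0};
    for (int j = 0; j < NRES; j++) work[j] = available[j];

    for (int done = 0; done < NPROC; ) {
        int progress = 0;
        for (int i = 0; i < NPROC; i++) {
            if (finish[i]) continue;
            int ok = 1;
            for (int j = 0; j < NRES; j++)
                if (need[i][j] > work[j]) { ok = 0; break; }
            if (ok) {          /* Pi can finish: it releases its resources */
                for (int j = 0; j < NRES; j++) work[j] += alloc[i][j];
                finish[i] = 1;
                done++;
                progress = 1;
            }
        }
        if (!progress) return 0;   /* no process can proceed: unsafe */
    }
    return 1;                      /* a safe sequence exists */
}
```

With Available = (3, 3, 2) and the usual Allocation/Need matrices, the state is safe (e.g., sequence P1, P3, P4, P0, P2); with Available = (0, 0, 0) and the same Need matrix, no process can proceed and the state is unsafe.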
Disk Scheduling
• The operating system is responsible for using hardware efficiently — for the disk drives, this means having fast access time and disk bandwidth
• Minimize seek time
• Seek time ≈ seek distance
• Disk bandwidth is the total number of bytes transferred, divided by the total time between the first request for service and the completion of the last transfer
• There are many sources of disk I/O requests
o OS
o System processes
o User processes
• I/O request includes input or output mode, disk address, memory address, number of sectors to transfer
• OS maintains queue of requests, per disk or device
• Idle disk can immediately work on I/O request; busy disk means work must queue
SSTF
• Shortest Seek Time First selects the request with the minimum seek time from the current head position
• SSTF scheduling is a form of SJF scheduling; may cause starvation of some requests
• Illustration shows total head movement of 236 cylinders
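The 236-cylinder figure matches the common example queue 98, 183, 37, 122, 14, 124, 65, 67 with the head starting at cylinder 53; that queue is assumed here since the illustration itself is not shown. A sketch of SSTF:

```c
/* SSTF sketch: repeatedly service the pending request closest to the
 * current head position, summing head movement. reqs[] is consumed
 * (serviced entries are marked -1, valid cylinders being nonnegative). */
#include <assert.h>

int sstf_total_movement(int head, int reqs[], int n)
{
    int total = 0;
    for (int served = 0; served < n; served++) {
        int best = -1, bestdist = 0;
        for (int i = 0; i < n; i++) {
            if (reqs[i] < 0) continue;       /* already serviced */
            int d = reqs[i] > head ? reqs[i] - head : head - reqs[i];
            if (best < 0 || d < bestdist) { best = i; bestdist = d; }
        }
        total += bestdist;                   /* move head to closest request */
        head = reqs[best];
        reqs[best] = -1;                     /* mark serviced */
    }
    return total;
}
```

Starting at 53, SSTF visits 65, 67, 37, 14, 98, 122, 124, 183, for a total of 236 cylinders of head movement.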
C-SCAN
C-LOOK
• LOOK is a version of SCAN; C-LOOK is a version of C-SCAN
• Arm only goes as far as the last request in each direction, then reverses direction immediately, without first going all the way to the end of the disk
File Operations
• Create
• Write
• Read
• Reposition within file
• Delete
• Truncate
• Open(Fi) – search the directory structure on disk for entry Fi, and move the content of the entry to memory
• Close(Fi) – move the content of entry Fi in memory to the directory structure on disk
File Types – Name, Extension
• Direct Access
o read n
o write n
o position to n
- read next
- write next
o rewrite n
o n = relative block number
Sequential-access File
Two-Level Directory
• Separate directory for each user
• Efficient searching
• Grouping Capability
• Current directory (working directory)
o cd /spell/mail/prog
o type list
File Sharing
• Sharing of files on multi-user systems is desirable
• Sharing may be done through a protection scheme
• On distributed systems, files may be shared across a network
• Network File System (NFS) is a common distributed file-sharing method
File Sharing – Multiple Users
• User IDs identify users, allowing permissions and protections to be per-user
• Group IDs allow users to be in groups, permitting group access rights
File-System Implementation
• We have system calls at the API level, but how do we implement their functions?
o On-disk and in-memory structures
• Boot control block contains info needed by system to boot OS from that volume
o Needed if volume contains OS, usually first block of volume
• Volume control block (superblock, master file table) contains volume details
o Total # of blocks, # of free blocks, block size, free block pointers or array
• Directory structure organizes the files
• The following figure illustrates the necessary file system structures provided by the operating systems
• Figure 12-3(a) refers to opening a file
• Figure 12-3(b) refers to reading a file
• Plus buffers hold data blocks from secondary storage
• Open returns a file handle for subsequent use
• Data from read eventually copied to specified user process memory address
• VFS allows the same system call interface (the API) to be used for different types of file systems
o Separates file-system generic operations from implementation details
o Implementation can be one of many file system types, or network file system
- Implements vnodes which hold inodes or network file details
o Then dispatches operation to appropriate file system implementation routines
• The API is to the VFS interface, rather than any specific type of file system
Indexed
• Indexed allocation
o Each file has its own index block(s) of pointers to its data blocks
Free-Space Management
• File system maintains free-space list to track available blocks/clusters
• Linked list (free list)
o Cannot get contiguous space easily
o No waste of space
o No need to traverse the entire list (if # free blocks recorded)
Counting
• Because space is frequently contiguously allocated and freed (with contiguous allocation, extents, or clustering), a count-based representation is efficient:
o Keep address of first free block and count of following free blocks
o Free-space list then has entries containing addresses and counts
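This can be sketched as a compression of a free-block bitmap into (address, count) entries; the bitmap in the demo helper is an illustrative assumption.

```c
/* Sketch of the counting representation: compress a free-block bitmap
 * (1 = free) into (first-block, count) entries. Returns entry count. */
#include <assert.h>

typedef struct { int start; int count; } run;

int to_runs(const int freebit[], int nblocks, run out[])
{
    int nruns = 0;
    for (int i = 0; i < nblocks; ) {
        if (!freebit[i]) { i++; continue; }
        int start = i;
        while (i < nblocks && freebit[i]) i++;   /* extend run of free blocks */
        out[nruns].start = start;
        out[nruns].count = i - start;
        nruns++;
    }
    return nruns;
}

/* Demo: bitmap 0 1 1 1 0 0 1 1 compresses to entries (1,3) and (6,2). */
int demo_runs(void)
{
    run out[8];
    return to_runs((const int[]){0, 1, 1, 1, 0, 0, 1, 1}, 8, out);
}
```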
Network Devices
• Varying enough from block and character devices to have their own interface
• Unix and Windows NT/9x/2000 include socket interface
o Separates network protocol from network operation
o Includes select() functionality
• Approaches vary widely (pipes, FIFOs, streams, queues, mailboxes)
Clocks and Timers
• Provide current time, elapsed time, timer
• Normal resolution about 1/60 second
Error Handling
• OS can recover from a disk read error, device unavailable, transient write failures
o Retry a read or write, for example
o Some systems more advanced – Solaris FMA, AIX
- Track error frequencies, stop using device with increasing frequency of retry-able errors
• Most return an error number or code when I/O request fails
• System error logs hold problem reports
I/O Protection
• User process may accidentally or purposefully attempt to disrupt normal operation via illegal I/O instructions
o All I/O instructions defined to be privileged
o I/O must be performed via system calls
- Memory-mapped and I/O port memory locations must be protected
Use of a System Call to Perform I/O
• Option 4 – Lock-key
o Compromise between access lists and capability lists
o Each object has list of unique bit patterns, called locks
Access Control
• Protection can be applied to non-file resources
• Solaris 10 provides role-based access control (RBAC) to implement least privilege
o Privilege is right to execute system call or use an option within a system call
o Can be assigned to processes
o Users assigned roles granting access to privileges and programs
- Enable role via password to gain its privileges
o Similar to access matrix
Security
The Security Problem:
• System secure if resources used and accessed as intended under all circumstances
o Unachievable
• Intruders (crackers) attempt to breach security
• Threat is potential security violation
• Attack is attempt to breach security
• Attack can be accidental or malicious
• Easier to protect against accidental than malicious misuse
Security Violation Categories
• Breach of confidentiality
o Unauthorized reading of data
• Breach of integrity
o Unauthorized modification of data
• Breach of availability
o Unauthorized destruction of data
• Man-in-the-middle attack
o Intruder sits in data flow, masquerading as sender to receiver and vice versa
• Session hijacking
o Intercept an already-established session to bypass authentication
• Trojan Horse
o Code segment that misuses its environment
o Exploits mechanisms for allowing programs written by users to be executed by other users
o Spyware, pop-up browser windows, covert channels
o Up to 80% of spam delivered by spyware-infected systems
• Trap Door
o Specific user identifier or password that circumvents normal security procedures
o Could be included in a compiler
• Logic Bomb
o Program that initiates a security incident under certain circumstances
• Stack and Buffer Overflow
o Exploits a bug in a program (overflow either the stack or memory buffers)
o Failure to check bounds on inputs, arguments
o Write past arguments on the stack into the return address on stack
o When routine returns from call, returns to hacked address
- Points to code loaded onto stack that executes malicious code
• Port scanning
o Automated attempt to connect to a range of ports on one or a range of IP addresses
o Detection of answering service protocol
o Detection of OS and version running on system
o nmap scans all ports in a given IP range for a response
o nessus has a database of protocols and bugs (and exploits) to apply against a system
o Frequently launched from zombie systems
- To decrease traceability
Encryption
• Encryption algorithm consists of
o Set K of keys
o Set M of messages
o Set C of ciphertexts (encrypted messages)
o A function E : K → (M → C). That is, for each k ∈ K, E(k) is a function for generating ciphertexts from messages
- Both E and E(k) for any k should be efficiently computable functions
o A function D : K → (C → M). That is, for each k ∈ K, D(k) is a function for generating messages from ciphertexts
- Both D and D(k) for any k should be efficiently computable functions
• An encryption algorithm must provide this essential property: given a ciphertext c ∈ C, a computer can compute m such that E(k)(m) = c only if it possesses D(k)
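The E(k)/D(k) structure can be illustrated with a toy single-byte XOR "cipher". This is NOT real cryptography (it does not have the essential property above); it only shows the shape of the definitions: for a fixed key k, D(k) inverts E(k).

```c
/* Toy illustration of E : K -> (M -> C) and D : K -> (C -> M): with a
 * one-byte XOR, E(k) and D(k) happen to be the same map, and applying
 * the map twice with the same key recovers the message. */
#include <assert.h>
#include <stddef.h>

void xor_apply(unsigned char key, const unsigned char *in,
               unsigned char *out, size_t len)
{
    for (size_t i = 0; i < len; i++)
        out[i] = in[i] ^ key;        /* both E(k) and D(k) in this toy */
}

/* Returns 1 if decrypting an encryption of msg with the same key
 * recovers msg exactly. */
int roundtrip_ok(unsigned char key, const unsigned char *msg, size_t len)
{
    unsigned char c[64], m2[64];     /* len <= 64 assumed in this sketch */
    xor_apply(key, msg, c, len);     /* ciphertext = E(k)(m) */
    xor_apply(key, c, m2, len);      /* D(k)(c) should equal m */
    for (size_t i = 0; i < len; i++)
        if (m2[i] != msg[i]) return 0;
    return 1;
}
```

A real cipher keeps this inverse structure but makes recovering m from c computationally infeasible without D(k).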