
Review for the

Final Exam
XIPING LIU
SCHOOL OF COMPUTER SCIENCE,
NANJING UNIVERSITY OF POSTS AND
TELECOMMUNICATIONS
What have we learned?

Operating system:
• Process/CPU Management
• Memory Management
• File System
• I/O Device Management

Basic Information 3

• Fill-in-the-blank (20 marks, 2 marks for each blank)

• True or False (20 marks, 2 marks each)

• Short Answers (20 marks, 5 marks each)

• Comprehensive Questions (40 marks, 4 questions)

• Chap 1 – 4; Chap 2 – 8; Chap 3 – 31; Chap 4 – 59; Chap 5 – 79; Chap 6 – 86


Important Content in Chapter 1

• Summary
• Shell and GUI, Two Roles, CPU modes (Kernel mode and user
mode), Time and space multiplexing
• Multiprogramming and SPOOLing
• Program counter and PSW, Cache hit, RAM, ROM, Interrupt, BIOS
• Process, interprocess communication, UID/GID, file,
directory, path name, working directory, block/ character
special files
• System Calls, TRAP instruction
1.1 What Is An Operating System 5
• Two types of user interface

– Commands based on text, i.e. shell

– GUI (Graphical User Interface)

• The operating system, the most fundamental piece of software,


runs in kernel mode (also called supervisor mode)

– OS has complete access to all the hardware and can execute


any instruction the machine is capable of executing

• The rest of the software runs in user mode, in which only a


subset of the machine instructions is available
1.1 What Is An Operating System 6

• Operating System As an Extended Machine


– Real hardware is very complicated and presents difficult and inconsistent interfaces to programmers
– The major tasks of the operating system:
• hide the ugly hardware

• provide abstractions——present nice, clean, elegant, and


consistent abstractions to programs (and end users)
1.1 What Is An Operating System 7

• Operating System As a Resource Manager


– Modern OSs allow multiple programs to be in
memory and run during the same period
– Resource sharing in two different ways:
• In time / time multiplexing: CPU, printer

• In space/ space multiplexing: main memory, disk


Important Content in Chapter 2
• Summary
• Process, creation and termination, states and transitions,
process control blocks/PCB, interrupt vector
• Thread/lightweight process, user-level thread, kernel-level thread
• IPC, race condition, critical region/section, mutual exclusion, busy waiting, producer-consumer problem/bounded-buffer problem, semaphore, atomic action, mutex
• Compute-bound process and I/O-bound process, different
goals and algorithms for batch/interactive/real time systems
(especially FCFS, SJF, SRTN, round-robin, priority scheduling),
context/process switch
• Solutions for the dining philosophers and readers-writers (R-W) problems
2.1 Processes 9

• Process Hierarchies
– In UNIX
• A process and all of its children and further descendants
together form a process tree
• All the processes in the whole system belong to a single tree, with process init at the root
2.1 Processes 10

• Process Hierarchies
– In Windows
• Has no concept of a process hierarchy
• All processes are equal
• Parent can use a special token (called a handle) to
control the child
• It is free to pass this token to some other process
2.1 Processes 11

• Process States
– Three basic states
• Running (actually using the CPU at that instant)
• Ready (runnable; temporarily stopped to let another
process run)
• Blocked (unable to run until some external event
happens), also called waiting/sleeping state
– Five basic states
• New, Exit
– Seven basic states
• Suspend
2.1 Processes 12

• Process States
– Process State Transitions
• Transition 1 occurs when the operating system discovers that a process cannot continue right now
• A process can execute a system call such as pause to get into the blocked state
• When a process reads from a pipe or special file (e.g., a terminal) and there is no input available, the process is automatically blocked
2.1 Processes 13

• Process States
– Process State Transitions: transitions 2 and 3 are caused by the process scheduler
• Transition 2 occurs when the scheduler decides that the running process has run long enough, and it is time to let another process have some CPU time
• Transition 3 occurs when the scheduler chooses this process according to its scheduling algorithm
2.1 Processes 14

• Process States
– Process State Transitions
• Transition 4 occurs when the external event for which a process was waiting (such as the arrival of some input) happens
• If no other process is running at that instant, transition 3 will be triggered and the process will start running. Otherwise it may have to wait in the ready state for a little while until the CPU is available and its turn comes
2.3 Interprocess Communication
15
---Semaphores
• Semaphore
– An integer variable to indicate the situation about
certain resource
– 0, no available resource
– a positive value n, means n available resources
– a negative value –n, means n blocked processes
waiting for the resource
– Two operations
• Down (P operation in Dutch)
• Up (V operation in Dutch)
2.3 Interprocess Communication
16
---Semaphores
• Down
– Decrement the value of the semaphore addressed
– If the resulting value is negative, the calling process is put to sleep
• Up
– Increment the value of the semaphore addressed
– If the resulting value is 0 or less, i.e. one or more processes were sleeping on that semaphore, one sleeping process is chosen and woken by the system
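The down/up behavior, including the negative-value convention from the previous slide, can be sketched in Python. This is a teaching sketch built on a condition variable, not how any real kernel implements semaphores; the class and method names are my own:

```python
import threading

class Semaphore:
    """Sketch of a counting semaphore using the slides' convention:
    value > 0 means available resources; value == -n means n sleepers."""
    def __init__(self, value=0):
        self.value = value
        self._cond = threading.Condition()

    def down(self):  # P operation
        with self._cond:
            self.value -= 1
            if self.value < 0:        # no resource available:
                self._cond.wait()     # the caller is put to sleep

    def up(self):    # V operation
        with self._cond:
            self.value += 1
            if self.value <= 0:       # someone was sleeping on the semaphore:
                self._cond.notify()   # wake exactly one sleeper

# Single-threaded check of the value convention
s = Semaphore(2)
s.down(); s.down()
print(s.value)  # 0: both resources taken
```

Note that down and up each run entirely inside the condition's lock, which is what makes them behave as the atomic actions described on the next slide.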
2.3 Interprocess Communication 17
---Semaphores
• Atomic action
– An indivisible sequence of primitive operations
that must complete without interruption
– It is guaranteed that once a semaphore operation
has started, no other process can access the
semaphore until the operation has completed or
blocked
– The operation of down and up are indivisible as
atomic actions
2.4 Scheduling
---Introduction to Scheduling
• Compute-bound/intensive processes
– Spend most of their time computing
– Compute-bound processes typically have long CPU bursts
and thus infrequent I/O waits
• I/O-bound/intensive processes
– Spend most of their time waiting for I/O
– I/O-bound processes have short CPU bursts and thus
frequent I/O waits
2.4 Scheduling
---Introduction to Scheduling 19

• When to schedule
– when a new process is created---run the parent
process or the child process
– When a process exits
– when a process blocks on I/O, on a semaphore, or
for some other reasons
– when an I/O interrupt occurs, a scheduling decision
may be made
• Clock interrupt
2.4 Scheduling
---Scheduling in Batch System 20

• First-Come, First-Served (Non-preemptive)


– Processes are assigned the CPU in the order they request it
– When the first job enters the system, it is started
immediately and allowed to run as long as it wants to
– As other jobs come in, they are put onto the end of the
queue
– When a blocked process becomes ready, like a newly arrived
job, it is put on the end of the queue, behind all waiting
processes
2.4 Scheduling
---Scheduling in Batch System 21

• Shortest Job First (Non-preemptive)


– When several equally important jobs are sitting in the input
queue waiting to be started, the scheduler picks the
shortest job first
– Shortest job first is provably optimal
• Suppose four jobs, with execution times of a, b, c, d. The
first job finishes at time a, the second at time a + b, and
so on. The mean turnaround time is (4a + 3b + 2c + d)/4
– Optimal only when all the jobs are available simultaneously
2.4 Scheduling
---Scheduling in Batch System 22

• Shortest Remaining Time Next (Preemptive)


– Choose the process whose remaining run time is the
shortest
– When a new job arrives, its total time is compared to the
current process’ remaining time. If the new job needs less
time to finish than the current process, the current process
is suspended and the new job started
– Allows new short jobs to get good service
– For the above example with 5 jobs, Average turnaround
time is?
2.4 Scheduling
---Scheduling in Batch System 23
• Shortest Job First (Non-preemptive)
– E.g. four jobs arrive at almost the same time
• A-10, B-8, C-6, D-4
• Average turnaround time with FCFS is 20 minutes
• Average turnaround time with SJF is 15 minutes
FCFS (run order A B C D; completions at 10, 18, 24, 28):
((10-0) + (18-0) + (24-0) + (28-0))/4 = 80/4 = 20
SJF (run order D C B A; completions at 4, 10, 18, 28):
((4-0) + (10-0) + (18-0) + (28-0))/4 = 60/4 = 15
2.4 Scheduling
---Scheduling in Batch System 24

Jobs: A/5, B/2, C/3, D/1, E/7; arrival times: 0, 1, 3, 4, 5

FCFS (run order A B C D E; completions at 5, 7, 10, 11, 18):
((5-0) + (7-1) + (10-3) + (11-4) + (18-5))/5 = 38/5 = 7.6
SJF (run order A D B C E; completions at 5, 6, 8, 11, 18):
((5-0) + (6-4) + (8-1) + (11-3) + (18-5))/5 = 35/5 = 7
2.4 Scheduling
---Scheduling in Batch System 25

Jobs: A/5, B/2, C/3, D/1, E/7; arrival times: 0, 1, 3, 4, 5

SRTN (run order A B C D C A E):
CPU: A(0-1) B(1-3) C(3-4) D(4-5) C(5-7) A(7-11) E(11-18)
((11-0) + (3-1) + (7-3) + (5-4) + (18-5))/5 = 31/5 = 6.2
2.4 Scheduling
---Scheduling in Batch System 26
Jobs: A/5, B/2, C/3, D/1, E/7; arrival times: 0, 1, 5, 7, 8

FCFS (run order A B C D E; completions at 5, 7, 10, 11, 18):
((5-0) + (7-1) + (10-5) + (11-7) + (18-8))/5 = 30/5 = 6
SJF (run order A B D C E; completions at 5, 7, 8, 11, 18):
((5-0) + (7-1) + (11-5) + (8-7) + (18-8))/5 = 28/5 = 5.6
SRTN (run order A B A D C E):
CPU: A(0-1) B(1-3) A(3-7) D(7-8) C(8-11) E(11-18)
((7-0) + (3-1) + (11-5) + (8-7) + (18-8))/5 = 26/5 = 5.2
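The FCFS, SJF, and SRTN results above can be checked with a small simulator. This is a sketch; the tuple layout and tie-breaking choices are mine, not from the slides:

```python
def fcfs(jobs):
    """jobs: list of (name, arrival, burst) in submission order."""
    t = total = 0
    for name, arrival, burst in sorted(jobs, key=lambda j: j[1]):
        t = max(t, arrival) + burst          # run to completion
        total += t - arrival                 # turnaround = finish - arrival
    return total / len(jobs)

def sjf(jobs):
    """Non-preemptive: among arrived jobs, run the shortest to completion."""
    t, total, pending = 0, 0, list(jobs)
    while pending:
        ready = [j for j in pending if j[1] <= t]
        if not ready:                        # CPU idle: jump to next arrival
            t = min(j[1] for j in pending)
            continue
        j = min(ready, key=lambda j: j[2])   # shortest burst first
        pending.remove(j)
        t += j[2]
        total += t - j[1]
    return total / len(jobs)

def srtn(jobs):
    """Preemptive: always run the job with the shortest remaining time."""
    arrival = {n: a for n, a, b in jobs}
    remaining = {n: b for n, a, b in jobs}
    t = total = 0
    while remaining:
        ready = [n for n in remaining if arrival[n] <= t]
        if not ready:
            t = min(arrival[n] for n in remaining)
            continue
        n = min(ready, key=lambda n: remaining[n])
        gaps = [arrival[m] - t for m in remaining if arrival[m] > t]
        run = min([remaining[n]] + gaps)     # run until finish or next arrival
        t += run
        remaining[n] -= run
        if remaining[n] == 0:
            del remaining[n]
            total += t - arrival[n]
    return total / len(jobs)

jobs = [("A", 0, 5), ("B", 1, 2), ("C", 3, 3), ("D", 4, 1), ("E", 5, 7)]
print(fcfs(jobs), sjf(jobs), srtn(jobs))  # 7.6 7.0 6.2
```

Re-running with arrival times 0, 1, 5, 7, 8 reproduces the second slide's 6, 5.6, and 5.2.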
2.4 Scheduling
---Scheduling in Interactive System 27

• Round-Robin Scheduling
– Each process is assigned a time interval, called its quantum,
during which it is allowed to run
– If the process is still running at the end of the quantum, the
CPU is preempted and given to another process
– If the process has blocked or finished before the quantum
has elapsed, the CPU switching is done when the process
blocks
Assignments 28
• Five batch jobs (A through E), all completely CPU bound, have estimated running times of 10, 5, 2, 3, and 7 minutes. Their arrival times are 0, 2, 3, 5, and 6. For each of the following scheduling algorithms, determine the mean process turnaround time (ignore process switching overhead)
– Round-robin scheduling (assume that the system is multiprogrammed and that each job gets its fair share of the CPU; quantum = 2 minutes)
– FCFS (in order A, B, C, D, and E), SJF, SRTN (assume that only one job at a time runs, until it finishes)
Chapter 2 Processes and Threads
Execution times of A-E: 10, 5, 2, 3, and 7 minutes
Arrival times of A-E: 0, 2, 3, 5, and 6
Round robin, quantum = 2 min (a job arriving at the instant of a preemption enters the ready queue ahead of the preempted job):
CPU: A(0-2) B(2-4) A(4-6) C(6-8) B(8-10) D(10-12) E(12-14) A(14-16) B(16-17) D(17-18) E(18-20) A(20-22) E(22-24) A(24-26) E(26-27)
Completions: C at 8, B at 17, D at 18, A at 26, E at 27
Turnaround: (26+15+5+13+21)/5 = 80/5 = 16
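The round-robin trace above can be reproduced with a queue-based simulator. One convention is assumed (it matches the trace on this slide): a job arriving exactly when a quantum expires enters the ready queue ahead of the preempted job:

```python
from collections import deque

def round_robin(jobs, quantum):
    """jobs: list of (name, arrival, burst); returns mean turnaround time."""
    jobs = sorted(jobs, key=lambda j: j[1])
    arrival = {n: a for n, a, b in jobs}
    remaining = {n: b for n, a, b in jobs}
    queue, finish = deque(), {}
    t = i = 0
    while len(finish) < len(jobs):
        while i < len(jobs) and jobs[i][1] <= t:   # admit arrivals
            queue.append(jobs[i][0]); i += 1
        if not queue:                              # CPU idle
            t = jobs[i][1]
            continue
        n = queue.popleft()
        run = min(quantum, remaining[n])
        t += run
        remaining[n] -= run
        while i < len(jobs) and jobs[i][1] <= t:   # arrivals during the
            queue.append(jobs[i][0]); i += 1       # quantum queue ahead of n
        if remaining[n] == 0:
            finish[n] = t
        else:
            queue.append(n)                        # preempted: back of queue
    return sum(finish[n] - arrival[n] for n in finish) / len(jobs)

jobs = [("A", 0, 10), ("B", 2, 5), ("C", 3, 2), ("D", 5, 3), ("E", 6, 7)]
print(round_robin(jobs, 2))  # 16.0
```

As a sanity check, a very large quantum degenerates into FCFS and reproduces the 14.6 on the next slide.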
Chapter 2 Processes and Threads
Execution times of A-E: 10, 5, 2, 3, and 7 minutes
Arrival times of A-E: 0, 2, 3, 5, and 6
FCFS (run order A B C D E):
CPU: A(0-10) B(10-15) C(15-17) D(17-20) E(20-27)
(10+13+14+15+21)/5 = 73/5 = 14.6
SJF (run order A C D B E):
CPU: A(0-10) C(10-12) D(12-15) B(15-20) E(20-27)
(10+9+10+18+21)/5 = 68/5 = 13.6
Chapter 2 Processes and Threads
Execution times of A-E: 10, 5, 2, 3, and 7 minutes
Arrival times of A-E: 0, 2, 3, 5, and 6
SRTN (run order A B C D B E A):
CPU: A(0-2) B(2-3) C(3-5) D(5-8) B(8-12) E(12-19) A(19-27)
(2+3+10+13+27)/5 = 55/5 = 11
Important Content in Chapter 3

• Summary

• Address space, base and limit registers, swapping, Bitmap,


Linked list

• Virtual memory, MMU, relocation: virtual address → physical address, page fault, information in a page table entry, TLB (Translation Lookaside Buffer)/associative memory, multi-level page tables

• Page replacement algorithms such as Second-chance,


Clock…(especially FIFO, LRU, OPT)
3.2 Address Spaces
---Managing Free Memory 33

• When memory is assigned dynamically, the


operating system must manage it
• Two ways to keep track of memory usage
– Bitmaps
– Linked lists
3.2 Address Spaces
---Managing Free Memory 34

• Memory managing with Bitmaps


– Memory is divided into allocation units (ranging
from a few words to several kilobytes)
– Use a bit in the bitmap to represent the state of an
allocation unit
• 0 means the unit is free
• 1 means the unit is occupied
– To bring a k-unit process into memory in the
contiguous way, the memory manager must search
the bitmap to find k consecutive 0 bits in the map
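The search for k consecutive free units can be sketched as a linear scan. The list-of-ints representation below is an illustration; real systems pack the bits into machine words:

```python
def find_free_run(bitmap, k):
    """Return the index of the first run of k consecutive 0 bits, or -1."""
    run_start = run_len = 0
    for i, bit in enumerate(bitmap):
        if bit == 0:
            if run_len == 0:
                run_start = i            # a new run of free units begins
            run_len += 1
            if run_len == k:
                return run_start         # found k consecutive free units
        else:
            run_len = 0                  # an occupied unit breaks the run
    return -1

# 1 = occupied, 0 = free
print(find_free_run([1, 1, 0, 0, 1, 0, 0, 0, 1], 3))  # 5
```

The scan is slow on large memories, which is one reason the next slides introduce linked lists as an alternative.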
3.2 Address Spaces
---Managing Free Memory 35

• Memory managing with Bitmaps

(a) A part of memory with five processes and three holes. The tick marks show the memory allocation units. The shaded regions (0 in the bitmap) are free
(b) The corresponding bitmap
3.2 Address Spaces
---Managing Free Memory 36

• Memory managing with Bitmaps


– The size of the allocation unit is an important
design issue
• The smaller the allocation unit, the larger the bitmap

• If the allocation unit is chosen large, the bitmap will be


smaller. However, memory may be wasted in the last unit
of the process if the process size is not an exact multiple
of the allocation unit
3.2 Address Spaces
---Managing Free Memory 37

• Memory managing with Linked Lists


– Maintain a linked list of allocated and free memory
segments
– A segment contains a hole (H) or process (P)
– Each entry in the list specifies the address at which
the segment starts, the length, and a pointer to the
next item
3.2 Address Spaces
---Managing Free Memory 38

• Memory managing with Linked Lists

(a) A part of memory with five processes and three holes. The tick marks show the memory allocation units. The shaded regions are free
(b) The same information as a linked list
3.2 Address Spaces
---Managing Free Memory 39

• Memory managing with Linked Lists


– The segment list is kept sorted by address
• Sorting this way makes updating the list straightforward when a process terminates or is swapped out

– Several algorithms can be used to allocate memory for a


process, such as first/next/best/worst/quick fit
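As an illustration of one of these policies, first fit can be sketched over a hole list. The (start, length) pairs are a simplification; the slides' list also carries P/H flags and next pointers, which are omitted here:

```python
def first_fit(holes, k):
    """holes: address-sorted list of (start, length) free segments.
    Allocates k units from the first hole big enough; returns the start
    address, or None if no hole fits. The hole list is updated in place."""
    for i, (start, length) in enumerate(holes):
        if length >= k:
            if length == k:
                del holes[i]                        # hole consumed exactly
            else:
                holes[i] = (start + k, length - k)  # shrink the hole
            return start
    return None

holes = [(0, 5), (10, 3), (20, 8)]
print(first_fit(holes, 4), holes)  # 0 [(4, 1), (10, 3), (20, 8)]
```

Best fit and worst fit differ only in scanning the whole list and picking the smallest or largest adequate hole instead of the first one.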
3.3 Virtual Memory
---Paging 40

• An example
– The virtual address space consists of fixed-size units called pages; suppose the page size is 4KB
– A 16-bit address supplies virtual addresses from 0 up to 64K−1, so the 64-KB virtual address space can be divided into 16 virtual pages
– Suppose the computer has only 32 KB of physical memory (8 page frames), so a 64-KB program cannot be loaded into memory entirely
– A complete copy of a program's core image, up to 64 KB, must be present on the disk, however, so that pieces can be brought in as needed
3.3 Virtual Memory
---Paging 41
• An example: map a virtual address onto a physical memory address
– va/4K = n … m (page number n, offset m)
– Use n to index the page table and get the page frame number x
– pa = 4K*x + m
• va 0 → pa 8192
• va 8192 → pa 24K = 24576
• va 20500 → pa 12K+20 = 12308
• va 4300? va 32780 (byte 12 in virtual page 8)?
The relation between virtual pages and page frames is given by the page table
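The va → pa arithmetic can be written out directly. The page table below is a hypothetical partial one containing just the mappings the worked examples imply (page 0 → frame 2, page 2 → frame 6, page 5 → frame 3); any page absent from it, such as page 8 for va 32780, is treated as unmapped here, though whether it faults really depends on the full page table, which the slide does not reproduce:

```python
PAGE = 4096  # 4-KB pages

# Hypothetical partial page table: virtual page -> page frame.
page_table = {0: 2, 2: 6, 5: 3}

def translate(va):
    n, m = divmod(va, PAGE)            # va/4K = n ... m
    if n not in page_table:
        return None                    # page fault: page not in memory
    return page_table[n] * PAGE + m    # pa = 4K*x + m

print(translate(0))      # 8192
print(translate(8192))   # 24576
print(translate(20500))  # 12308
```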
3.3 Virtual Memory
---Paging 42

• Page fault
– Trap to the operating system when a running program accesses a virtual page which is not loaded into main memory
– The operating system picks a page frame (Page
replacement algorithms) and writes its contents
back to the disk
– It then fetches (also from the disk) the page that
was just referenced into the page frame just freed,
changes the map, and restarts the trapped
instruction
3.3 Virtual Memory
---Paging 43
• Why use a page size that is a power of 2?
• The incoming 16-bit binary virtual address is split into a 4-bit page number (for 16 pages) and a 12-bit offset (0-4095)
• 8196(D) = 0010 0000 0000 0100(B): page 2, offset 4
• Page 2 maps to frame 6, so the outgoing physical address is 0110 0000 0000 0100(B) = 24580(D)
(Figure: the internal operation of the MMU with 16 4-KB pages)
3.3 Virtual Memory
---Speeding Up Paging 44

• Only a small fraction of the page table entries are


heavily read
– Most programs tend to make a large number of references
to a small number of pages
• Use a small hardware device for mapping virtual
addresses to physical addresses without going
through the page table in memory---TLB (Translation
Lookaside Buffer )
– Sometimes called Associative Memory
– Usually inside the MMU and consists of a small number of
entries (rarely more than 256)
– Improve virtual address mapping speed
3.3 Virtual Memory
---Page Tables for Large Memories 45
• The page table can be very large
– At most 2^20 entries per process on a 32-bit computer and 2^52 on a 64-bit one, if the page size is 4KB (2^12 B)
• Use multilevel page tables
– Only the partial page tables referenced recently stay in memory
– E.g. a 32-bit virtual address is partitioned into a 10-bit PT1 field, a 10-bit PT2 field, and a 12-bit Offset field (used to locate the byte within a 4-KB page)
• Use inverted page tables
– One entry per page frame in real memory
– E.g. with a 4KB page size and 4GB of RAM, only 1 million entries are needed for all page frames; each one keeps track of which (process, virtual page) is located in the page frame
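The 10/10/12 split of a 32-bit virtual address can be done with shifts and masks. Field names follow the slide; the example address is arbitrary:

```python
def split_va(va):
    """Split a 32-bit virtual address into (PT1, PT2, offset)
    for a two-level page table with 4-KB pages."""
    offset = va & 0xFFF          # low 12 bits: byte within the page
    pt2 = (va >> 12) & 0x3FF     # next 10 bits: second-level table index
    pt1 = (va >> 22) & 0x3FF     # top 10 bits: top-level table index
    return pt1, pt2, offset

print(split_va(0x00403004))  # (1, 3, 4)
```

This is exactly why power-of-two page sizes matter: the fields fall on bit boundaries, so no division is needed.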
3.4 Page Replacement 46
Algorithms
• Page Replacement
– When a page fault occurs, the operating system has to
choose a page to evict (remove from memory) to make
room for the incoming page
– If the page to be removed has been modified while in
memory, it must be rewritten to the disk to bring the
disk copy up to date
– If, however, the page has not been changed (e.g., it
contains program text), the page to be read in just
overwrites the page being evicted
3.4 Page Replacement Algorithms
---Not Recently Used PRA 47
• Algorithm Description of NRU
– removes a page at random from the lowest-
numbered nonempty class
• Class 0: not referenced, not modified.
• Class 1: not referenced, modified.
• Class 2: referenced, not modified.
• Class 3: referenced, modified
• Referenced pages are in the higher-numbered classes, as it is better to remove a modified page that has not been referenced in at least one clock tick (typically about 20 ms) than a clean page that is in heavy use
• Easy to understand, efficient to implement, adequate performance
3.4 Page Replacement Algorithms
---First-In, First-Out PRA 48
• Algorithm Description of FIFO
– Maintains a list of all pages currently in memory,
with the most recent arrival at the tail and the
least recent arrival at the head
– On a page fault, the page at the head is removed
and the new page added to the tail of the list
• Easy to implement
• Disadvantage: The oldest page may still be useful.
Thus, FIFO in its pure form is rarely used
3.4 Page Replacement Algorithms
---Second-Chance PRA 49

• Algorithm Description of Second-Chance


– A simple modification to FIFO that avoids the problem of
throwing out a heavily used page is to inspect the R bit
of the oldest page
– If the R bit is 0, the page is both old and unused, so it is
replaced immediately
– If the R bit is 1, the bit is cleared, the page is put onto the
end of the list of pages, and its load time is updated as
though it had just arrived in memory. Then the search
continues
3.4 Page Replacement Algorithms
---Least Recently Used (LRU) PRA 50
• A good approximation to the optimal algorithm is:
– Pages that have been heavily used in the last
few instructions will probably be heavily used
again soon
– Pages that have not been used for ages will
probably remain unused for a long time
• Algorithm Description of LRU
– When a page fault occurs, throw out the page that has
been unused for the longest time
– High overhead to implement
3.4 Page Replacement Algorithms 51
---Summary of PRAs

NRU: C
FIFO: D
Second-chance: C
LRU: B
3.4 Page Replacement
Algorithms 52
---Summary of PRAs

Local versus global page replacement.


(a) Original configuration
(b) Local page replacement
(c) Global page replacement
Page reference order: 2 3 2 1 5 2 4 5 3 2 5 2
Page frames: 3, all empty
Local page replacement

OPT: F=6
FIFO: F=9
LRU: F=7
Chapter 3 Memory Management
1. 11000 (Decimal)
(1) Page size 4KB: 11000/4096 = 2 … 2808
Check the page table and get frame 6
6*4096 + 2808 = 27384(D)
(2) Page size 8KB: 11000/8192 = 1 … 2808
Check the page table and get frame 1
1*8192 + 2808 = 11000(D)
2. 5A5C (Hexadecimal)
4KB page (12-bit offset): 5A5C = 0101|1010 0101 1100 → page 5, offset A5C; frame 3 → 0011|1010 0101 1100 = 3A5C
8KB page (13-bit offset): 5A5C = 010|1 1010 0101 1100 → page 2, offset 1A5C; frame 6 → 110|1 1010 0101 1100 = DA5C
Chapter 3 Memory Management
2. 5A5C (Hexadecimal), continued
2KB page (11-bit offset): 5A5C = 0101 1|010 0101 1100 → page 11, offset 25C; frame 7 → 0011 1|010 0101 1100 = 3A5C
Chapter 3 Memory Management
3. Page sequence: 0 3 0 1 5 2 4 5 0 3 0 5 (4 page frames)

OPT
Ref:    0 3 0 1 5 2 4 5 0 3 0 5
Frame1: 0 0 0 0 0 0 0 0 0 0 0 0
Frame2:   3 3 3 3 3 3 3 3 3 3 3
Frame3:       1 1 2 4 4 4 4 4 4
Frame4:         5 5 5 5 5 5 5 5
Fault:  * *   * * * *
F=6; R=6/12=50%

FIFO
Ref:    0 3 0 1 5 2 4 5 0 3 0 5
Frame1: 0 0 0 0 0 2 2 2 2 2 2 5
Frame2:   3 3 3 3 3 4 4 4 4 4 4
Frame3:       1 1 1 1 1 0 0 0 0
Frame4:         5 5 5 5 5 3 3 3
Fault:  * *   * * * *   * *   *
F=9; R=9/12=75%

LRU
Ref:    0 3 0 1 5 2 4 5 0 3 0 5
Frame1: 0 0 0 0 0 0 4 4 4 4 4 4
Frame2:   3 3 3 3 2 2 2 2 3 3 3
Frame3:       1 1 1 1 1 0 0 0 0
Frame4:         5 5 5 5 5 5 5 5
Fault:  * *   * * * *   * *
F=8; R=8/12≈67%
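The fault counts in these traces, and in the 3-frame exercise earlier, can be verified with one simulator. The OPT tie-break used here (evict the page whose next use is farthest, treating "never used again" as farthest of all) is the standard one:

```python
def page_faults(refs, nframes, policy):
    """Count faults for 'FIFO', 'LRU', or 'OPT' with local replacement."""
    mem, faults = [], 0                    # mem is ordered: head = victim
    for i, p in enumerate(refs):
        if p in mem:
            if policy == "LRU":
                mem.remove(p)
                mem.append(p)              # a hit makes p most recent
            continue
        faults += 1
        if len(mem) == nframes:            # memory full: pick a victim
            if policy == "OPT":
                future = refs[i + 1:]
                def next_use(q):           # distance to next reference
                    return future.index(q) if q in future else len(future) + 1
                victim = max(mem, key=next_use)
            else:                          # FIFO: oldest load; LRU: least recent
                victim = mem[0]
            mem.remove(victim)
        mem.append(p)
    return faults

refs = [0, 3, 0, 1, 5, 2, 4, 5, 0, 3, 0, 5]
print([page_faults(refs, 4, p) for p in ("OPT", "FIFO", "LRU")])  # [6, 9, 8]
```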
Important Content in Chapter 4
• Summary

• File naming, structures, types (regular one, directory,


block/character special file), attributes, access (sequential
and random), operations
• Directory, absolute path name, relative path name, the
working/current directory, operations
• File implementation (contiguous, linked list, FAT, i-node),
directory implementation (content of an entry, limitation on
name length and solutions, looking up a file), sharing a file,
VFS
• MS-DOS File system - FAT family (12, 16, 32), UNIX V7 (how
to look up a file)
4.1 Files
---3. File Types 61
• Four main types
– Regular files: Contains user information
– Directories: Maintains the structure of the file system
– Character special files: Related to serial I/O devices, such as
terminals, printers, and networks
– Block special files: Related to disks
• Windows has regular files and directories
• UNIX has all types of files
4.1 Files
---3. File Types 62
• Regular files are generally either ASCII files or binary
files
• ASCII files consist of lines of text
– Lines need not all be of the same length
– ASCII files can be displayed, printed and edited
• Binary files have some internal structure
– Executable file: Consist of five sections--- header, text, data,
relocation bits, and symbol table
– Archive file: consist of library procedures (modules)
compiled but not linked
–…
4.2 Directories 63

• Directories
– To keep track of files, file systems normally have
directories or folders
– A directory is a special file whose content is information about the files it contains
– In this section we will discuss directories, their
organization, their properties, and the operations
that can be performed on them
4.2 Directories
---3. Path Names 64
• When the file system is organized as a directory tree,
two ways are used for specifying a file name
– An absolute path name
• Consisting of the path from the root directory to the file
– E.g. /usr/ast/
• Unique and always start at the root directory (start with
a separator)
– Windows: \usr\ast\mailbox
– UNIX: /usr/ast/mailbox
– MULTICS: >usr>ast>mailbox
– A relative path name
4.2 Directories
---3. Path Names 65
• When the file system is organized as a directory tree,
two ways are used for specifying a file name
– A relative path name
• Used in conjunction with the concept of the working
directory (also called the current directory)
• A user can designate one directory as the current working directory
– All path names not beginning at the root directory
are taken relative to the working directory
• E.g. if the current working directory is /usr/ast
– cp /usr/ast/mailbox /usr/ast/mailbox.bak
– cp mailbox mailbox.bak is the same
4.2 Directories
---3. Path Names 66
• Usage of Different Path Name Methods
– The absolute path name will always work
– Some programs need to access a specific file without
regard to what the working directory is
• E.g. a spelling checker might need to read
/usr/lib/dictionary to do its work
– It should use the full, absolute path name in this case
because it does not know what the working directory
will be when it is called
• If the checker needs to read a large number of files from
/usr/lib, a system call can be issued to change the working
directory to /usr/lib. Then relative paths can be used
4.2 Directories
---3. Path Names 67
• Directory Tree
– Most operating systems that support a hierarchical
directory system have two special entries in every
directory, ‘‘.’’ and ‘‘..’’
• ‘‘.’’ refers to the current directory
• ‘‘..’’ refers to its parent
– Use .. to go higher up the tree
4.2 Directories
---3. Path Names 68
• Directory Tree
– E.g. A certain process has
/usr/ast as its working
directory. If it copies the file
/usr/lib/dictionary to its own
directory, following
commands can be used:
• cp ../lib/dictionary .
• cp /usr/lib/dictionary .
• cp /usr/lib/dictionary
dictionary
• cp /usr/lib/dictionary
/usr/ast/dictionary
4.3 File System Implementation
---2. Implementing Files 69

• Various methods to keep track of which disk blocks go


with which file
– Contiguous Allocation

– Linked-List Allocation

– Linked-List Allocation Using a Table in Memory

– i-Nodes
4.3 File System Implementation
---Implementing Files 70
• Linked-List Allocation Using a Table in Memory
– Keep each file as a linked list of disk blocks
– Put the pointer word to the next block in a table in memory (called the FAT)
– Each chain is terminated with a special marker (e.g., −1) that is not a valid block number
4.3 File System Implementation
---Implementing Files 71
• Linked-List Allocation Using a Table in Memory
– Take the pointer word from each disk block and put it in a
table in memory. This table is called a FAT (File-Allocation
Table)
– Advantages
• The entire block is available for data
• Random access is much easier
• The directory keeps a single integer (the starting block
number) to locate all the blocks
– Disadvantages
• The entire table must be in memory all the time
– FAT is used in MS-DOS and supported by Windows
4.3 File System Implementation
---2. Implementing Files
• i-nodes
– Each file is
associated with
an i-node (index-
node)
– The i-node lists
the attributes
and disk
addresses of the
file’s blocks
4.3 File System Implementation
---2. Implementing Files
• i-nodes
– Each file is associated with an i-node which lists the
attributes and disk addresses of the file’s blocks
– Advantages
• The i-node needs to be in memory only when the corresponding file is open
– If each i-node occupies n bytes and a maximum of k
files may be open at once, only kn bytes need be
reserved in advance
• The array of i-nodes is usually far smaller than FAT’s
– The FAT table for holding the linked list of all disk
blocks is proportional in size to the disk itself
4.3 File System Implementation
---3. Implementing Directories
• How to Locate File Data
– When a file is opened, the operating system uses the path
name supplied by the user to locate the directory entry on the
disk
– The main function of the directory system is to map the ASCII
name of the file onto the information needed to locate the
data
– The directory entry provides the information needed to find the disk blocks
• The number of the first block and the size (contiguous allocation)
• The number of the first block (both linked-list schemes)
• Or the number of the i-node
4.3 File System Implementation
---3. Implementing Directories
• Attributes Storage
– Every file system maintains various file attributes
– One solution is to store them directly in the directory entry
• A directory consists of a list of fixed-size entries, one per
file, containing a file name, a structure of the file
attributes, and one or more disk addresses telling where
the disk blocks are

– For systems that use i-nodes, the directory entry can be


shorter: just a file name and an i-node number
4.5 Example File Systems
---UNIX V7 File System 76
• i-node
– Contains the disk addresses of the file's blocks; each item provides a block number, and these blocks store the content of the file
– What's the maximum size of a file if the block size is 2/4/8KB and an address takes 4/8B?
4.5 Example File Systems
---UNIX V7 File System 77
Block size = 4KB and an address takes 4B, so each address block provides 1024 items
From the figure: the i-node's direct entries (LBN 0-9) hold block numbers 8, 104, 40, 0, 1, …, 255, 14, 52, 37; entry 10 (single indirect) points to block 83, whose first item is 258; entry 11 is the double indirect entry and entry 12 the triple indirect entry
Where are bytes 000000AFH, 000060AFH, 0000A0AFH, and 004000AFH?
• 000000AFH: logical block number (LBN) 0 → direct entry 0 → PBN 8 → physical address 000080AFH
• 000060AFH: LBN 6 → direct entry 6 → PBN 255 → PA 000FF0AFH
• 0000A0AFH: LBN 10 → item 0 of the single indirect block → PBN 258 → PA 001020AFH
• 004000AFH: LBN 1024 → item 1014 of the single indirect block
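The level-selection logic of the lookup above can be sketched as arithmetic on the logical block number. The names are mine, and V7's actual on-disk layout details differ; this only mirrors the 10-direct-plus-indirect scheme the slide assumes:

```python
BLOCK = 4096                      # block size in the example
APB = BLOCK // 4                  # 4-byte addresses -> 1024 per address block
NDIRECT = 10                      # direct entries in the i-node

def locate(byte_offset):
    """Return (level, index within that level) for a byte offset."""
    lbn = byte_offset // BLOCK    # logical block number
    if lbn < NDIRECT:
        return ("direct", lbn)
    lbn -= NDIRECT
    if lbn < APB:
        return ("single", lbn)
    lbn -= APB
    if lbn < APB ** 2:
        return ("double", lbn)
    return ("triple", lbn - APB ** 2)

print(locate(0x000000AF))  # ('direct', 0)
print(locate(0x0000A0AF))  # ('single', 0)
print(locate(0x004000AF))  # ('single', 1014)
```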
Chapter 4 File Systems
2. Consider a disk that has 16 data blocks. Let there be 2 files on the disk: f1 and f2. The first blocks of f1 and f2 are 1 and 14. Given the FAT entries below (−1 marks end of file), what are the data blocks allotted to f1 and f2 (in sequence)?

FAT (block → next): 1→7, 2→−1, 3→−1, 4→9, 6→2, 7→4, 9→6, 10→12, 12→3, 14→10

f1: 1, 7, 4, 9, 6, 2
f2: 14, 10, 12, 3
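The answer can be checked by following the chains. The dict below encodes the FAT entries given in the exercise, with −1 as the end-of-file marker:

```python
def fat_chain(fat, start):
    """Follow a FAT chain from the file's first block until the -1 marker."""
    chain, block = [], start
    while block != -1:
        chain.append(block)
        block = fat[block]
    return chain

# block -> next block, from the exercise; -1 = end of file
FAT = {1: 7, 7: 4, 4: 9, 9: 6, 6: 2, 2: -1, 14: 10, 10: 12, 12: 3, 3: -1}
print(fat_chain(FAT, 1))   # [1, 7, 4, 9, 6, 2]
print(fat_chain(FAT, 14))  # [14, 10, 12, 3]
```

Note how the directory entry only needs the starting block number; everything else comes from the table.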
Chapter 4 File Systems
• A UNIX file system has 8-KB blocks and 8-byte disk addresses. What is the maximum file size if i-nodes contain 10 direct entries, and one single, double, and triple indirect entry each?
• The file size covered by direct entries: 10*8KB = 80KB
• Single indirect entries: (8K/8)*8KB = 8MB
• Double indirect entries: (8K/8)^2*8KB = 8GB
• Triple indirect entries: (8K/8)^3*8KB = 8TB
• The maximum file size is 8TB + 8GB + 8MB + 80KB (final result)
• (The scheme provides 1G + 1M + 1K + 10 block addresses in total)
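The computation generalizes to any block and address size, which also answers the earlier "block size 2/4/8KB, address 4/8B" question from the UNIX V7 slides:

```python
def max_file_size(block_size, addr_size, ndirect=10):
    """Maximum file size for an i-node with ndirect direct entries plus
    one single, one double, and one triple indirect entry."""
    apb = block_size // addr_size                 # addresses per block
    nblocks = ndirect + apb + apb**2 + apb**3     # addressable data blocks
    return nblocks * block_size                   # bytes

KB, MB, GB, TB = 2**10, 2**20, 2**30, 2**40
print(max_file_size(8 * KB, 8) == 8*TB + 8*GB + 8*MB + 80*KB)  # True
```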
Important Content in Chapter 5
80
• Summary
• Block devices and character devices, huge range in speeds,
device controller/adapter, memory-mapped I/O, I/O port, I/O
port space, DMA (Direct Memory Access), interrupts, interrupt
vector
• Goals of the I/O software (device independent, uniform naming,
error handling, synchronous/blocking, asynchronous/interrupt-
driven, buffering), programmed I/O, polling/busy waiting,
interrupt-driven I/O, I/O using DMA
• User-level I/O software (Spooling, daemon, library procedures),
device-independent operating system software (Uniform
interfacing for device drivers, buffering, error reporting,
allocating and releasing dedicated devices, providing a device-
independent block size), device drivers, interrupt handlers
5.1 Principles of I/O Hardware
---I/O Devices 81

• I/O devices can be roughly divided into two categories:


block devices and character devices
• Block devices
– A block device is one that stores information in fixed-size
blocks, each one with its own address
– Common block sizes range from 512 to 65,536 bytes
– All transfers are in units of one or more entire (consecutive)
blocks
– The essential property: It is possible to read or write each
block independently of all the other ones
– Hard disks, Blu-ray discs, and USB sticks
5.1 Principles of I/O Hardware
---I/O Devices 82

• Character devices
– A character device delivers or accepts a stream of characters,
without regard to any block structure
– It is not addressable and does not have any seek operation
– Printers, network interfaces, mice (for pointing), and most
other devices that are not disk-like can be seen as character
devices
• Some devices are neither block nor character devices
– Clock: Cause interrupts at well-defined intervals
5.1 Principles of I/O Hardware
---Device Controllers 83

• I/O units often consist of a mechanical


component and an electronic component
– The electronic component is called the device
controller or adapter
• On personal computers, it often takes the form of a chip
on the parentboard or a printed circuit card that can be
inserted into a (PCIe) expansion slot
– The mechanical component is the device itself
5.1 Principles of I/O Hardware
---Device Controllers 84
• Many controllers can handle two, four, or even eight identical
devices
• If the interface between the controller and device is a
standard interface, either an official ANSI, IEEE, or ISO
standard or a de facto one, then companies can make
controllers or devices that fit that interface
– Many companies, for example, make disk drives that match
the SATA, SCSI, USB, Thunderbolt, or FireWire (IEEE 1394)
interfaces
5.2 Principles of I/O Software
---Programmed I/O 85
• Three different ways that I/O can be performed
– Programmed I/O
• The simplest form of I/O is to have the CPU do
all the work
– Interrupt-driven I/O
• CPU can do something else while the device is
not ready
– I/O using DMA
• DMA does all the work instead of the CPU
(for more detail, see slides 30-38 of the Chapter 5 slides)
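The contrast between the three styles is easiest to see for programmed I/O, where the CPU does everything itself. Below is a minimal Python sketch against a toy device; the ToyPrinter class and its "registers" are invented for illustration and are not a real driver interface.

```python
# Programmed I/O (polling / busy waiting) against a toy device.
# ToyPrinter and its register names are hypothetical.
class ToyPrinter:
    def __init__(self):
        self.ready = True        # status register: device idle
        self.printed = []

    def write_data(self, ch):
        self.printed.append(ch)  # accept one character
        self.ready = True        # toy device is instantly ready again

def programmed_io_write(dev, buf):
    """The CPU does all the work: poll the status register until the
    device is ready, then copy one character into the data register."""
    for ch in buf:
        while not dev.ready:     # busy waiting (polling)
            pass
        dev.ready = False
        dev.write_data(ch)

p = ToyPrinter()
programmed_io_write(p, "ABC")
print("".join(p.printed))        # ABC
```

Interrupt-driven I/O and DMA replace exactly this polling loop: the CPU schedules another process instead of spinning, and is notified (or the DMA controller copies the whole buffer) when the device is ready.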
5.3 I/O Software Layers 86
• Four layers
– Each layer has a well-defined function to perform and a
well-defined interface to the adjacent layers
Important Content in Chapter 6 87
• Summary
• Preemptable and non-preemptable resources, three
abstract steps to use a resource
• Definition of deadlock, resource deadlock, four necessary
conditions of a resource deadlock, resource graphs (circles
for processes, squares for resources, two kinds of directed
arcs)
• Use resource graphs to detect a deadlock with one resource
of each type, a matrix-based algorithm to detect a deadlock
with multiple resources of each type (E, A, C, R), recovery
from deadlock (preemption, rollback, killing processes)
• Safe and unsafe states, banker’s algorithm (E, A, C, R)
• Deadlock prevention by attacking four necessary conditions
6.1 Resources
---Preemptable and Nonpreemptable Resources 88
• Preemptable resources
– Can be taken away from the process owning it with no ill
effects
– Memory on a PC is an example of a preemptable resource
• Nonpreemptable resources
– Cannot be taken away from its current owner without
potentially causing failure
– E.g., printer, scanner…
• In general, deadlocks involve nonpreemptable resources
– Potential deadlocks that involve preemptable resources can
usually be resolved by reallocating resources from one
process to another
6.2 Introduction to Deadlocks 89
• Deadlock can be defined formally as follows
– A set of processes is deadlocked if each process in the set is
waiting for an event that only another process in the set can
cause
– Because all the processes are waiting, none of them will
ever cause any event that could wake up any of the other
members of the set, and all the processes continue to wait
forever
6.2 Introduction to Deadlocks
---Conditions for Resource Deadlocks 90
• Four conditions must hold for there to be a (resource) deadlock:
– Mutual exclusion condition: Each resource is either currently
assigned to exactly one process or is available
– Hold-and-wait condition: Processes currently holding
resources that were granted earlier can request new
resources
– No-preemption condition: Resources previously granted
cannot be forcibly taken away from a process. They must be
explicitly released by the process holding them
– Circular wait condition: There must be a circular list of two
or more processes, each of which is waiting for a resource
held by the next member of the chain
• If one of them is absent, no resource deadlock is possible
6.2 Introduction to Deadlocks
---Deadlock Modeling 91
• These four conditions can be modeled using
Resource Graphs
– The graphs have two kinds of nodes: processes, shown
as circles, and resources, shown as squares
– A directed arc from a resource node (square) to a
process node (circle) means that the resource has
previously been requested by, granted to, and is
currently held by that process
– A directed arc from a process to a resource means that
the process is currently blocked waiting for that
resource
6.2 Introduction to Deadlocks
---Deadlock Modeling 92
• These four conditions can be modeled using Resource Graphs
– A cycle in the graph means that there is a deadlock involving
the processes and resources in the cycle (assuming that
there is one resource of each kind)
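With one resource of each kind, detecting a deadlock in such a graph is just a directed-cycle search. A minimal Python sketch follows; the graph below (P1 holds R1 and waits for R2, P2 holds R2 and waits for R1) is a made-up example.

```python
# Deadlock detection with one resource of each type: look for a cycle
# in the resource graph.
def has_cycle(graph):
    """graph maps each node to the nodes its arcs point to (resource ->
    holding process, process -> requested resource).
    Returns True iff a directed cycle (a deadlock) exists."""
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {n: WHITE for n in graph}

    def visit(n):
        color[n] = GRAY                 # on the current DFS path
        for m in graph.get(n, ()):
            c = color.get(m, WHITE)
            if c == GRAY or (c == WHITE and visit(m)):
                return True             # back edge closes a cycle
        color[n] = BLACK                # fully explored, no cycle here
        return False

    return any(color[n] == WHITE and visit(n) for n in list(color))

# P1 holds R1 and waits for R2; P2 holds R2 and waits for R1: deadlock
g = {"R1": ["P1"], "P1": ["R2"], "R2": ["P2"], "P2": ["R1"]}
print(has_cycle(g))                     # True
```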
6.4 Deadlock Detection and Recovery
---Deadlock Detection with Multiple Resources of Each Type 93
• A matrix-based algorithm for detecting deadlock among n
processes, P1 through Pn
– Let the number of resource classes be m
– The existing resource vector E: It gives the total number of
instances of each resource in existence, with E1 denoting
resources of class 1, E2 denoting resources of class 2, and
generally, Ei denoting resources of class i (1 ≤ i ≤ m)
– The available resource vector A, with Ai giving the number
of instances of resource i that are currently available (i.e.,
unassigned)
6.4 Deadlock Detection and Recovery
---Deadlock Detection with Multiple Resources of Each Type 94
• A matrix-based algorithm for detecting deadlock among n
processes, P1 through Pn
– The current allocation matrix C
• The ith row of C tells how many instances of each
resource class Pi currently holds. Thus, Cij is the number
of instances of resource j that are held by process i
– The request matrix R
• Rij is the number of instances of resource j that Pi wants
– Every resource is either allocated or is available, so for each
resource class j: Σi Cij + Aj = Ej (summing column j of C over all
n processes)
6.4 Deadlock Detection and Recovery
---Deadlock Detection with Multiple Resources of Each Type 95
• A matrix-based algorithm
– Based on comparing vectors
• Define the relation A ≤ B on two vectors A and B to mean
that each element of A is less than or equal to the
corresponding element of B
• Mathematically, A ≤ B holds if and only if Ai ≤ Bi for 1 ≤ i ≤ m
– Each process is initially said to be unmarked
– As the algorithm progresses, processes will be marked,
indicating that they are able to complete and are thus not
deadlocked
– When the algorithm terminates, any unmarked processes
are known to be deadlocked
6.4 Deadlock Detection and Recovery
---Deadlock Detection with Multiple Resources of Each Type 96
• A matrix-based algorithm
– This algorithm assumes a worst-case scenario
• All processes keep all acquired resources until they exit
– The deadlock detection algorithm can be given as follows
• Step 1: Look for an unmarked process, Pi, for which the
ith row of R is less than or equal to A
• Step 2: If such a process is found, add the ith row of C to
A, mark the process, and go back to step 1
• Step 3: If no such process exists, the algorithm terminates
– When the algorithm finishes, all the unmarked processes, if
any, are deadlocked
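The three steps above translate almost line for line into code. The following is an illustrative Python sketch; the sample numbers follow the textbook's three-process tape-drive/plotter/scanner/Blu-ray example (E = (4 2 3 1), A = (2 1 0 0)), which the next slides walk through by hand.

```python
# Matrix-based deadlock detection (steps 1-3 above), as a sketch.
def detect_deadlock(A, C, R):
    """Return the set of (0-based) indices of deadlocked processes;
    an empty set means no deadlock."""
    A = list(A)
    n, m = len(C), len(A)
    marked = [False] * n
    progress = True
    while progress:
        progress = False
        for i in range(n):
            # Step 1: unmarked Pi whose request row fits within A
            if not marked[i] and all(R[i][j] <= A[j] for j in range(m)):
                # Step 2: Pi can finish, so reclaim its resources
                A = [A[j] + C[i][j] for j in range(m)]
                marked[i] = True
                progress = True
    # Step 3: whoever is still unmarked is deadlocked
    return {i for i in range(n) if not marked[i]}

C = [[0, 0, 1, 0], [2, 0, 0, 1], [0, 1, 2, 0]]   # current allocations
R = [[2, 0, 0, 1], [1, 0, 1, 0], [2, 1, 0, 0]]   # outstanding requests
print(detect_deadlock([2, 1, 0, 0], C, R))       # set(): no deadlock
```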
6.4 Deadlock Detection and Recovery
---Deadlock Detection with Multiple Resources of Each Type 97
• An example to apply the deadlock detection algorithm
– Three processes and four resource classes, which we have
arbitrarily labeled tape drives, plotters, scanners, and Blu-
ray drives
6.4 Deadlock Detection and Recovery
---Deadlock Detection with Multiple Resources of Each Type 98
• An example to apply the deadlock detection algorithm
– Look for a process whose resource request can be satisfied
• P1: no Blu-ray drives available; P2: no scanners available
• P3: (2 1 0 0) ≤ A, so P3 runs and eventually returns all its
resources
– Here E = (4 2 3 1) and A = (2 1 0 0), with allocation rows
C: P1 = (0 0 1 0), P2 = (2 0 0 1), P3 = (0 1 2 0), and request rows
R: P1 = (2 0 0 1), P2 = (1 0 1 0), P3 = (2 1 0 0)
6.4 Deadlock Detection and Recovery
---Deadlock Detection with Multiple Resources of Each Type 99
• An example to apply the deadlock detection algorithm
– Look for a process whose resource request can be satisfied
• A = (2 1 0 0) + (0 1 2 0) = (2 2 2 0) (P3 returns all its
resources)
• P2: (1 0 1 0) ≤ A, so P2 runs and eventually returns all its
resources (A = (4 2 2 1)), and then P1 can run
• All three processes get marked, so there is no deadlock
6.4 Deadlock Detection and Recovery
---Deadlock Detection with Multiple Resources of Each Type 100
• Another example to apply the deadlock detection algorithm
– Look for a process whose resource request can be satisfied
• No process's request can be satisfied: P3's row of R is now
(2 1 0 1), i.e., P3 also wants the Blu-ray drive
• All three processes remain unmarked, so they are deadlocked
6.5 Deadlock Avoidance
---The Banker’s Algorithm for Multiple Resources 101
• The algorithm for checking to see if a state is safe is stated as
follows (find a safe sequence for all processes):
– Step 1: Look for a row, R, whose unmet resource needs are
all smaller than or equal to A. If no such row exists, the
system will eventually deadlock since no process can run to
completion (assuming processes keep all resources until
they exit)
• If several processes are eligible to be chosen, it does not
matter which one is selected
– Step 2: Assume the process of the chosen row requests all
the resources it needs (which is guaranteed to be possible)
and finishes. Mark that process as terminated and add all of
its resources to the A vector
6.5 Deadlock Avoidance
---The Banker’s Algorithm for Multiple Resources 102
• The algorithm is stated as follows:
– Step 3: Repeat steps 1 and 2 until either all processes are
marked terminated (in which case the initial state was safe)
or no process is left whose resource needs can be met (in
which case the system was not safe)
– E.g., check to see if this state is safe: Yes! (a safe sequence:
D, then A/E, …)
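The safety check above can be sketched in a few lines of Python. Here R is interpreted as each process's *remaining* need (its maximum claim minus what it already holds), and the sample numbers are illustrative rather than the table from the slides.

```python
# Banker's safety check (steps 1-3 above), as a sketch.
def is_safe(A, C, R):
    """Return a safe completion order of process indices, or None
    if the state is unsafe. R is the remaining-need matrix."""
    A = list(A)
    n, m = len(C), len(A)
    done = [False] * n
    order = []
    while len(order) < n:
        for i in range(n):
            # Step 1: a process whose unmet needs all fit within A
            if not done[i] and all(R[i][j] <= A[j] for j in range(m)):
                # Step 2: assume it finishes and returns its resources
                A = [A[j] + C[i][j] for j in range(m)]
                done[i] = True
                order.append(i)
                break
        else:
            return None        # Step 3: no runnable process, unsafe
    return order               # Step 3: all marked, the state is safe

C = [[0, 1, 0], [2, 0, 0], [3, 0, 2], [2, 1, 1], [0, 0, 2]]
R = [[7, 4, 3], [1, 2, 2], [6, 0, 0], [0, 1, 1], [4, 3, 1]]
print(is_safe([3, 3, 2], C, R))   # [1, 3, 0, 2, 4]: safe
```

As step 1 notes, when several processes are eligible the choice does not matter for safety; this sketch simply takes the lowest index first.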
6.5 Deadlock Avoidance
---The Banker’s Algorithm for Multiple Resources 103
– E.g., suppose B makes a request for one printer
• Check to see if granting this request leads to a safe state
– A=(1010), DA/E…, it’s safe
– The request is granted

0 1 1 0 0 1 0 2
(1010)
6.5 Deadlock Avoidance
---The Banker’s Algorithm for Multiple Resources 104
– E.g., suppose E wants the last printer after B gets one
• Check to see if granting this request leads to a safe state
– A = (1 0 0 0): no process can finish, so the state is unsafe
– The request is deferred for a while
Exercises 105
• Consider the following state of a system with four processes, P1,
P2, P3, and P4, and five types of resources, RS1, RS2, RS3, RS4,
and RS5. Using the deadlock detection algorithm described in
Section 6.4.2, show that there is no deadlock in the system
– E = (2 4 1 4 4), A = (0 1 0 2 1)
– C rows: P1 = (0 1 1 1 2), P2 = (0 1 0 1 0),
P3 = (1 0 0 0 1), P4 = (1 1 0 0 0)
– R rows: P1 = (1 1 0 2 1), P2 = (0 1 0 2 1),
P3 = (0 2 0 3 1), P4 = (0 2 1 1 0)
– Marking order: P2 (A becomes (0 2 0 3 1)), then P3 ((1 2 0 3 2)),
then P1 ((1 3 1 4 4)), then P4 ((2 4 1 4 4)), so there is no
deadlock
• Suppose that P1 requests (0 0 0 0 1): then A = (0 1 0 2 0), and no
process's request can be satisfied
– Is the state still safe? (Is there any safe sequence?) No
– Should this request be granted? No
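Both parts of the exercise can be checked mechanically with the matrix-based detection procedure from Section 6.4.2. The sketch below encodes the exercise's A, C, and R (0-based indices, so P1 is index 0) and mirrors the slide's reasoning for the second part.

```python
# Checking the exercise with the detection algorithm (sketch).
def detect_deadlock(A, C, R):
    """Return the set of indices of deadlocked (unmarked) processes."""
    A = list(A)
    n, m = len(C), len(A)
    marked = [False] * n
    progress = True
    while progress:
        progress = False
        for i in range(n):
            if not marked[i] and all(R[i][j] <= A[j] for j in range(m)):
                A = [A[j] + C[i][j] for j in range(m)]  # Pi finishes
                marked[i] = True
                progress = True
    return {i for i in range(n) if not marked[i]}

C = [[0, 1, 1, 1, 2], [0, 1, 0, 1, 0],
     [1, 0, 0, 0, 1], [1, 1, 0, 0, 0]]
R = [[1, 1, 0, 2, 1], [0, 1, 0, 2, 1],
     [0, 2, 0, 3, 1], [0, 2, 1, 1, 0]]

# Part 1: with A = (0 1 0 2 1), every process gets marked
print(detect_deadlock([0, 1, 0, 2, 1], C, R))    # set(): no deadlock

# Part 2: grant P1 one unit of RS5 (move it from A and R[0] into C[0])
C2 = [row[:] for row in C]; C2[0][4] += 1
R2 = [row[:] for row in R]; R2[0][4] -= 1
print(detect_deadlock([0, 1, 0, 2, 0], C2, R2))  # {0, 1, 2, 3}
```

All four processes come back unmarked in part 2, confirming the slide's answer: granting the request would leave no process able to finish, so it should be deferred.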
QUESTIONS?