OPERATING SYSTEMS
MIDTERMS REVIEWER
For those like me who can't pay attention in class :D
---------------------------------------------
Lesson 1: Overview of Operating Systems
---------------------------------------------
Operating Systems

An operating system is software that manages a computer's hardware. It also provides a basis for application programs and acts as an intermediary between the computer user and the computer hardware. It provides an environment for programs to run. It is designed mostly for ease of use, with some attention paid to performance and security and none paid to resource utilization.

Hardware: the central processing unit (CPU), the memory, and the input/output (I/O) devices; provides the basic computing resources for the system.

Application programs: such as word processors, spreadsheets, compilers, and web browsers; define the ways in which these resources are used to solve users' computing problems.

Users: can be people, machines, or other computers.

Operating System Functions:
1. Booting
2. Memory Management
3. Loading and execution
4. Data Security
5. Drive/disk management
6. Device Control
7. User interface
8. Process management

Goals of an Operating System:
- Simplify the execution of user programs and make solving problems easier
- Use computer hardware efficiently
- Make applications more portable and versatile
- Provide isolation, security, and protection among user programs
- Improve overall system reliability
System Programs

Provide a convenient environment for program development and execution. They can be divided into: file management, status information, programming-language support, and program loading and execution.

2. UNIX: beyond simple but not fully layered; consists of two parts, the kernel and the system programs.
Process Scheduling

*Thus, it's possible to have concurrency without parallelism.

Types of Parallelism

1. Data Parallelism: focuses on distributing subsets of the same data across multiple computing cores and performing the same operation on each core. E.g., if you're adding the sum of the set 0-10, one core would add 0-5 and another would add 6-10.

Producer-Consumer Paradigm

*It allows multiple threads to run in parallel on multiprocessors.

3. Many-to-many Model: multiplexes many user-level threads to a smaller or equal number of kernel threads.

*Although the many-to-many model appears to be the most flexible of the models discussed, in practice it is difficult to implement.
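The data-parallelism example above (summing 0-10 in chunks) can be sketched in Python. Note this is illustrative only: CPython threads show the data decomposition rather than true multicore parallelism, and `parallel_sum`/`partial_sum` are hypothetical names for this sketch.

```python
from concurrent.futures import ThreadPoolExecutor

def partial_sum(chunk):
    """Each worker performs the same operation (summing) on its own subset."""
    return sum(chunk)

def parallel_sum(data, workers=2):
    # Data parallelism: split the same data set into subsets, one per worker.
    size = len(data)
    chunk_len = (size + workers - 1) // workers
    chunks = [data[i:i + chunk_len] for i in range(0, size, chunk_len)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(partial_sum, chunks))

print(parallel_sum(list(range(0, 11))))  # 0-5 and 6-10 summed separately: 55
```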
Scheduling Criteria

CPU utilization: the CPU must be kept as busy as possible

Throughput: number of processes completed per time unit

Turnaround time: time from submission of a process to its completion

Waiting time: sum of the periods spent waiting in the ready queue

Response time: time from submission of a process to the first response produced; the time it takes to start responding

Scheduling Algorithms

1) First-Come, First-Served Scheduling (FCFS)

Average Waiting Time: the average of how long each process waits before running; e.g., P1 above took 0 ms of waiting, P2 took 24 ms, and P3 took 27 ms: (P1 + P2 + ... + Pn) / n

*The FCFS scheduling algorithm is nonpreemptive.

2) Shortest Job First Scheduling (SJF)

- this algorithm associates with each process the length of the process's next CPU burst
- the shortest burst time goes first
- the SJF algorithm can be either preemptive or nonpreemptive; the choice arises when a new process arrives at the ready queue while a previous process is still executing

Preemptive SJF: also called shortest-remaining-time-first; preempts the currently running process if its remaining execution time is greater than the next CPU burst of the newly arrived process.

3) Round Robin Scheduling (RR)

- each process runs for up to one time quantum, with a preempted process being placed at the end of the queue
- although the time quantum should be large compared with the context-switch time, it should not be too large; as pointed out earlier, if the time quantum is too large, RR scheduling degenerates to an FCFS policy

4) Priority Scheduling
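The FCFS average-waiting-time arithmetic can be sketched as follows. The burst lengths 24, 3, and 3 ms are an assumption chosen to reproduce the 0/24/27 ms waits mentioned above; `fcfs_waiting_times` is an illustrative name.

```python
def fcfs_waiting_times(bursts):
    """FCFS: processes run in arrival order; each waits for all earlier bursts."""
    waits, elapsed = [], 0
    for burst in bursts:
        waits.append(elapsed)   # time spent in the ready queue before running
        elapsed += burst
    return waits

bursts = [24, 3, 3]             # assumed CPU bursts for P1, P2, P3
waits = fcfs_waiting_times(bursts)
print(waits)                    # [0, 24, 27]
print(sum(waits) / len(waits))  # average waiting time: (0 + 24 + 27) / 3 = 17.0
```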
1) Memory Barriers

Memory Models

*...on separate cores, they will be executed simultaneously, arbitrarily sequencing them.

*...the sign is changed back to "Vacant" by the person inside after they finish using it. Initialized to 0.

*"Test and Set" is a way for threads or processes to check the state of a shared variable and set it to a new value if a certain condition is met.

Busy waiting: disadvantage of using mutex locks, where many processes loop continuously trying to acquire a lock that is unavailable.

Spinlock: the process "spins" while waiting for the lock to become available. No context switch is required when a process must wait on a lock, and a context switch may take considerable time.

Binary semaphore: its value can range only between 0 and 1. Thus, binary semaphores behave similarly to mutex locks.

Counting semaphore: its value can range over an unrestricted domain; can be used to control access to a given resource consisting of a finite number of instances, and can be used to run two processes at the same time.
Liveness

Liveness: refers to a set of properties that a system must satisfy to ensure that processes make progress during their execution life cycle. A process waiting indefinitely is an example of a "liveness failure."

An example of a liveness failure is an infinite loop; a busy-wait loop is one possible source of a liveness failure.

Below are situations that can lead to liveness failures.

1. Deadlock: when every process in the set is waiting for an event that can be caused only by another process in the set; in short, when a process is waiting for something that is never going to happen. The "events" with which we are mainly concerned here are the acquisition and release of resources such as mutex locks and semaphores.

-----------------------------------------------
SYNCHRONIZATION PROBLEMS
-----------------------------------------------

Bounded Buffer Problem: commonly used to illustrate the power of synchronization primitives; refer to the producer-consumer problem.

Readers-Writers Problem: if a writer and a reader want to access the database simultaneously, what will be shown?

First Readers-Writers Problem: requires that no reader be kept waiting unless a writer has already obtained permission to use the shared object. In other words, no reader should wait for other readers to finish simply because a writer is waiting.

Second Readers-Writers Problem: requires that, once a writer is ready, that writer perform its write as soon as possible. In other words, if a writer is waiting to access the object, no new readers may start reading.
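The bounded-buffer (producer-consumer) problem can be sketched with one binary and two counting semaphores: `empty` counts free slots, `full` counts filled slots, and `mutex` protects the buffer. The capacity of 3 is an assumed value for illustration.

```python
import threading
from collections import deque

CAPACITY = 3
buffer = deque()
mutex = threading.Semaphore(1)        # binary semaphore guarding the buffer
empty = threading.Semaphore(CAPACITY) # counts free slots
full = threading.Semaphore(0)         # counts filled slots

def producer(items):
    for item in items:
        empty.acquire()               # wait for a free slot
        with mutex:
            buffer.append(item)
        full.release()                # signal one more filled slot

consumed = []
def consumer(n):
    for _ in range(n):
        full.acquire()                # wait for a filled slot
        with mutex:
            consumed.append(buffer.popleft())
        empty.release()               # signal one more free slot

p = threading.Thread(target=producer, args=(range(10),))
c = threading.Thread(target=consumer, args=(10,))
p.start(); c.start(); p.join(); c.join()
print(consumed)  # all ten items arrive, in order
```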
Conditions:

- Allow at most four philosophers to be sitting simultaneously at the table.
- Allow a philosopher to pick up her chopsticks only if both chopsticks are available (to do this, she must pick them up in a critical section).
- Use an asymmetric solution; that is, an odd-numbered philosopher picks up her left chopstick first and then her right chopstick, whereas an even-numbered philosopher picks up her right chopstick first and then her left chopstick.

Synchronization in Windows: mutexes, spinlocks, and semaphores are utilized.

Synchronization in Linux: The Linux kernel is preemptive. Linux uses atomic integers for synchronization. It also uses spinlocks, mutexes, and semaphores.

Synchronization in Java: known for monitors, reentrant locks, semaphores, and condition variables.

-----------------------------------------------
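The asymmetric remedy above can be sketched as follows: lock acquisition order differs by philosopher parity, which breaks the circular wait. The round count and list names are assumptions for this sketch.

```python
import threading

N = 5
chopsticks = [threading.Lock() for _ in range(N)]
meals = [0] * N

def philosopher(i, rounds=100):
    left, right = i, (i + 1) % N
    # Asymmetric solution: odd-numbered philosophers pick up the left
    # chopstick first; even-numbered pick up the right chopstick first.
    first, second = (left, right) if i % 2 == 1 else (right, left)
    for _ in range(rounds):
        with chopsticks[first]:
            with chopsticks[second]:
                meals[i] += 1      # eat

threads = [threading.Thread(target=philosopher, args=(i,)) for i in range(N)]
for t in threads: t.start()
for t in threads: t.join()
print(meals)  # every philosopher finishes all rounds; no deadlock
```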
*If the graph contains no cycles, then no thread in the system is deadlocked. If the graph does contain a cycle, then a deadlock may exist.

*If each resource type has several instances, then a cycle does not necessarily imply that a deadlock has occurred. In this case, a cycle in the graph is a necessary but not a sufficient condition for the existence of deadlock.

In summary, if a resource-allocation graph does not have a cycle, then the system is not in a deadlocked state. If there is a cycle, then the system may or may not be in a deadlocked state.

Methods for Handling Deadlocks

Three main methods: ignore the problem and pretend deadlocks never happen, ensure a deadlock never occurs, or allow deadlocks to occur but detect and recover from them.
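For single-instance resources, the cycle check can be sketched as a depth-first search over a wait-for graph; the dictionary encoding (thread name to list of threads it waits on) is an assumption for illustration.

```python
def has_cycle(graph):
    """Detect a circular wait in a wait-for graph: graph[t] lists the
    threads that thread t is waiting on."""
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {t: WHITE for t in graph}

    def visit(t):
        color[t] = GRAY                      # t is on the current DFS path
        for u in graph.get(t, ()):
            if color.get(u, WHITE) == GRAY:  # back edge: circular wait found
                return True
            if color.get(u, WHITE) == WHITE and visit(u):
                return True
        color[t] = BLACK
        return False

    return any(color[t] == WHITE and visit(t) for t in graph)

# T1 waits on T2 and T2 waits on T1: a cycle, so a deadlock exists.
print(has_cycle({"T1": ["T2"], "T2": ["T1"]}))  # True
print(has_cycle({"T1": ["T2"], "T2": []}))      # False
```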
...during its lifetime. With this, the OS can decide if a thread must wait.

Hold and Wait: we must guarantee that, whenever a thread requests a resource, it does not hold any other resources. One protocol says that all resources needed by a thread must be requested before execution, which is impractical due to dynamic allocation. Another says that a thread can only request a resource when it has none.

No Preemption: if a thread is waiting on a resource, the resources it already holds are released and it is placed into a waiting state. It will only restart when it can regain both its old resources and the new one it is requesting.

*This can be used with resources whose state can be saved and restored easily. It cannot be applied to semaphores and mutex locks, which are the main causes of deadlocks.

Allocation State: The resource-allocation state is defined by the number of available and allocated resources and the maximum demands of the threads.

Deadlock Avoidance Algorithms

Safe State: the system can allocate resources to each thread (up to its maximum) in some order and still avoid a deadlock. More formally, a system is in a safe state only if there exists a safe sequence. If no such sequence exists, then the state is unsafe.

An unsafe state may lead to a deadlock. A safe state is not a deadlocked state. A deadlocked state is an unsafe state.
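The safe-state test can be sketched as the safety algorithm from the banker's algorithm: repeatedly pick a thread whose remaining need fits in the available resources, let it finish, and reclaim its allocation. The numbers below are an assumed single-resource example, not from the notes.

```python
def find_safe_sequence(available, allocation, maximum):
    """Return a safe sequence of thread indices, or None if the state is unsafe."""
    n = len(allocation)
    work = list(available)
    need = [[m - a for m, a in zip(maximum[i], allocation[i])] for i in range(n)]
    finished = [False] * n
    sequence = []
    while len(sequence) < n:
        for i in range(n):
            if not finished[i] and all(nd <= w for nd, w in zip(need[i], work)):
                # Thread i can run to completion, then releases its allocation.
                work = [w + a for w, a in zip(work, allocation[i])]
                finished[i] = True
                sequence.append(i)
                break
        else:
            return None  # no runnable thread remains: the state is unsafe
    return sequence

# One resource type, 3 units free; threads hold 5/2/2 and may need up to 10/4/9.
print(find_safe_sequence([3], [[5], [2], [2]], [[10], [4], [9]]))  # [1, 0, 2]
```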
---------------------------------------------
Lesson 3: Memory Management
---------------------------------------------
-----------------------------------------------
Main Memory
-----------------------------------------------

Memory: consists of a large array of bytes, each with its own address.

*Main memory and the registers built into each processing core are the only general-purpose storage that the CPU can access directly.

*Data (especially external data, outside the CPU) must be brought into the CPU for it to be processed.

Stalling: registers built into each CPU core are generally accessible within one cycle of the CPU clock, but completing a main-memory access may take many cycles; the processor stalls while it waits for the data it needs.

Cache: solution for the aforementioned problem; a fast memory between the CPU and main memory, typically on the CPU chip.

*Separate per-process memory space protects the processes from each other.

Base register: holds the smallest legal physical memory address.

Limit register: specifies the size of the range.

For example, if the base register holds 300040 and the limit register is 120900, then the program can legally access all addresses from 300040 through 420939 (inclusive).

*If a user process accesses a memory address that is not its own, an error occurs.

*Context switches involve saving the current process's state from the registers into main memory and loading the next process's state from main memory into the registers.

Address Binding

Addresses in the source program are generally symbolic (such as the variable count). A compiler typically binds these symbolic addresses to relocatable
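The base/limit check from the example above can be sketched directly: an address is legal only if base <= address < base + limit (the function name is illustrative).

```python
def is_legal(address, base, limit):
    """Hardware-style protection check: anything outside the range traps."""
    return base <= address < base + limit

BASE, LIMIT = 300040, 120900
print(is_legal(300040, BASE, LIMIT))  # True  (first legal address)
print(is_legal(420939, BASE, LIMIT))  # True  (last legal address: base + limit - 1)
print(is_legal(420940, BASE, LIMIT))  # False (one past the range: trap)
```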
addresses (such as "14 bytes from the beginning of this module"). The linker (or loader) in turn binds the relocatable addresses to absolute addresses such as 740169. Each binding is a mapping from one address space to another. In short, there is a translation from user to hardware.

The binding process may occur at any of these stages:

Compile time: if the memory address is known at compile time, absolute code (code ready to be run by the CPU) can be generated. So, if the location is already known, the compiler simply generates absolute code.

Load time: if it is not known at compile time where the process will reside in memory, then the compiler must generate relocatable code. In this case, final binding is delayed until load time.

Execution time: when a process can be moved from one memory segment to another during its execution, the binding of memory addresses must be delayed until run time.

Logical address: address generated by the CPU.

Physical address: address seen by the memory unit and loaded into the memory-address register of the memory.

Addresses under execution-time binding differ from the others in that, instead of a logical address, a virtual address is used. Logical and virtual address are similar terms.

Logical address space: set of all logical addresses generated by a program.

Physical address space: set of all physical addresses corresponding to the logical addresses.

Memory management unit (MMU): a hardware device that maps virtual addresses to physical addresses at run time.

Relocation register: another term for base register.

*The user program never accesses the real physical addresses. It may create a pointer, store it in memory, manipulate it, and compare it. Thus, the user program deals with logical addresses.

*The memory-mapping hardware converts logical addresses to physical addresses.

Dynamic Loading: a routine is not loaded until it is called or needed.

Dynamically linked libraries (DLLs): system libraries that are linked to user programs when the programs are run. These libraries can be shared among multiple processes, so that only one instance of the DLL is in main memory. They are also called shared libraries.

Static linking: the linker combines all necessary program modules into one binary program image.
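A sketch of how an MMU with a relocation (base) register maps a logical address: it adds the register's value to every address the CPU generates, after a limit check. The 14000/346 values are an assumed example, not from the notes.

```python
def mmu_translate(logical, relocation_register, limit):
    """Map a logical address to a physical one; trap if out of range."""
    if not (0 <= logical < limit):
        raise MemoryError("trap: logical address out of range")
    return relocation_register + logical

# With a relocation register of 14000, logical address 346 maps to 14346.
print(mmu_translate(346, 14000, 120900))  # 14346
```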
Page fault: accessing a page marked invalid; the OS has not yet loaded the desired page into memory.

Page replacement: if no frame is free, we find one that is not currently being used and free it. Occurs during a page fault when no free frame is available to hold the page from logical memory.

*...now access the page as though it had always been in memory.

Allocation of Frames

How do we allocate the fixed amount of free memory among the various processes? If we have 93 free frames and two processes, how many frames does each process get?

ALLOCATION ALGORITHMS:

Equal allocation: split m frames among n processes to give everyone an equal m/n frames. If 93 frames are shared among 5 processes, each will get 18, with the extra 3 used as a free-frame buffer pool.

Proportional allocation: the memory allocated to each process depends on its size: a process of size s_i out of a total size S gets a_i = (s_i / S) * m of the m available frames. With proportional allocation, we would split 62 frames between two processes, one of 10 pages and one of 127 pages, by allocating 4 frames and 57 frames, respectively, since 10/137 * 62 ≈ 4 and 127/137 * 62 ≈ 57.

Thrashing: A process is thrashing if it is spending more time paging than executing. As you might expect, thrashing results in severe performance problems.

Causes of thrashing: as the degree of multiprogramming increases, CPU utilization also increases, although more slowly, until a maximum is reached.

Local replacement algorithm: a process selects a replacement only from its own frames; thus, stealing frames from other processes is not permitted.
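The proportional-allocation formula above can be sketched in Python; each share is truncated to a whole number of frames, with any remainder available for a free-frame pool (the function name is illustrative).

```python
from math import floor

def proportional_allocation(sizes, m):
    """Give process i roughly (s_i / S) * m of the m available frames."""
    S = sum(sizes)
    return [floor(s / S * m) for s in sizes]

# Two processes of 10 and 127 pages sharing 62 frames.
print(proportional_allocation([10, 127], 62))  # [4, 57]
```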