
OPERATING SYSTEMS

MIDTERMS REVIEWER
For those like me who can't pay attention in class :D

-----------------------------------------------
Lesson 1: Overview of Operating Systems
-----------------------------------------------
Operating systems

An operating system is software that manages a computer's hardware. It also provides a basis for application programs and acts as an intermediary between the computer user and the computer hardware. It provides an environment for programs to run. It is designed mostly for ease of use, with some attention paid to performance and security and none paid to resource utilization.

Hardware: the central processing unit (CPU), the memory, and the input/output (I/O) devices; provides the basic computing resources for the system.

Application programs: such as word processors, spreadsheets, compilers, and web browsers; define the ways in which these resources are used to solve users' computing problems.

Users: can be people, machines, or other computers.

Kernel: core inner component that processes data at the hardware level; handles I/O management, memory, and process management; it is the one program running all the time.

Operating System Functions:
1. Booting
2. Memory Management
3. Loading and execution
4. Data Security
5. Drive/disk management
6. Device Control
7. User interface
8. Process management

Goals of an Operating System
 Simplify the execution of user programs and make solving problems easier
 Use computer hardware efficiently
 Make applications more portable and versatile
 Provide isolation, security, and protection among user programs
 Improve overall system reliability
Shell: outer layer that manages the interaction between the user and OS. It communicates with the OS by taking input either from the user directly or from a shell script.

Moore's Law: formulated by Gordon Moore, co-founder of Intel Corporation, in 1965; states that the number of transistors on a microchip would double approximately every two years, while the cost per transistor would decrease.

Types of Operating Systems

1. Batch: divides and allocates similar tasks into batches for easier processing

2. Time-sharing or Multitasking: works by allocating the CPU to a particular task and switching between tasks frequently

3. Distributed: interconnected computers communicating with each other via communication lines or a shared network

4. Network: enables users to access and share files and devices such as printers, security software, and other applications, mostly in a LAN

5. Real-time: strict time requirements

6. Mobile: runs exclusively on small devices

Computing Environments

1. Traditional: standalone general purpose machines

2. Mobile: handheld smartphones, tablets, etc.

3. Distributed: collection of separate systems networked together

4. Client-Server Computing: a server receives requests from different clients

5. Peer-to-peer: another distributed system; it does not distinguish between clients and servers

6. Virtualization: an OS natively compiled for a CPU runs guest OSes also natively compiled for that CPU

7. Cloud Computing: the underlying operating systems are managed by the cloud service providers

-----------------------------------------------
Operating Systems Structures
-----------------------------------------------

Services (LUCIFER PP)

1. User Interface
2. Program Execution
3. I/O Operations
4. File System manipulation
5. Communications
6. Error Detection
7. Resource allocation
8. Logging
9. Protection & Security

User and OS Interface

1. Command Line Interface: fetches a command from the user and executes it; sometimes
implemented in the kernel, sometimes by a systems program

2. Graphical User Interface: the user interacts with the system through windows, icons, and menus, usually with a mouse, rather than typed commands

3. Touch Screen Interface

System Calls

Interfaces provided by an operating system that allow user-level programs to request services or functionality from the kernel, which is the core part of the OS.

Acts as a bridge between the user-level applications and the low-level hardware of the computer; a number is associated with each system call.

Types of System Calls

1. File Management: create, delete file
2. Device Management: request, release device
3. Information Maintenance: get, set time or date
4. Communications: send, receive
5. Protection: allow, deny access
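As a concrete illustration of a couple of these categories on a POSIX system (a minimal sketch, not from the reviewer):

    /* File management and information maintenance via system calls. */
    #include <fcntl.h>    /* open()           */
    #include <unistd.h>   /* write(), close() */
    #include <time.h>     /* time()           */

    int main(void)
    {
        /* File management: create/open a file, write to it, close it. */
        int fd = open("demo.txt", O_CREAT | O_WRONLY, 0644);
        if (fd < 0)
            return 1;
        write(fd, "hello\n", 6);   /* I/O through the kernel */
        close(fd);

        /* Information maintenance: ask the kernel for the current time. */
        time_t now = time(NULL);
        return now == (time_t)-1;
    }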

System Programs

Provide a convenient environment for program development and execution. Can be divided into: file management, status information, programming language support, program loading and execution, communications, background services, and application programs.

Operating Systems Design & Implementation

User goals: OS should be convenient to use, easy to learn, reliable, safe, and fast

System goals: OS should be easy to design, implement, and maintain, as well as flexible, reliable, error-free, and efficient

Policy: What will be done?

Mechanism: How will it be done?

*Specifying and designing an OS is a highly creative task of software engineering.

Operating System Structure

1. Simple: not divided into modules; written to provide the most functionality in the least space (e.g. MS-DOS)

2. UNIX: beyond simple but not fully layered; consists of two parts, the kernel and the system programs
3. Layered Structure: OS is divided into N layers (levels), each built on top of lower layers. The bottom layer (0) is the hardware and the highest layer (N) is the user interface.

4. Microkernel: OS functionality is kept as small and minimal as possible; the kernel's responsibility is to manage communications and provide essential services. Most OS features are provided by separate modules or processes.

5. Modular: similar to layers but more flexible, such that it uses OOP; each core component is separate, and each is loadable as needed with the kernel, e.g. Linux, Solaris, etc.

6. Hybrid: most OSes are actually not one pure model. Hybrid combines multiple approaches to address performance, security, and usability needs, e.g. Apple Mac OS X.

Operating System Debugging

Debugging: finding and fixing errors or bugs; entails generating log files containing error information

Kernighan's Law: debugging is twice as hard as writing the code in the first place

Performance Tuning: improve performance by removing bottlenecks, by providing means of computing and displaying measures of system behavior

Operating System Boot: when power is initialized, execution starts at a fixed memory location
-----------------------------------------------
Lesson 2: Process Management
-----------------------------------------------

Process: a program in execution; needs certain resources such as CPU time, memory, files, and I/O devices to accomplish its task

*A process is the unit of work in most systems. A system therefore consists of a collection of processes running both user and OS code.

Program counter: represents the current activity of a process

A process in memory is composed of:

Text: executable code; fixed size

Data: global variables; fixed size

Heap: memory allocated dynamically during runtime; expands and shrinks alongside the stack based on availability

Stack: storage for function data; must not overlap with the heap

*A program by itself is not a process, as a program is merely a passive entity. It is a file containing instructions. Processes are active entities, with a counter stating the next instruction to execute.

*Two processes under the same program are considered separate nonetheless.

Process State

New: process is created

Running: instructions are being executed

Waiting: waiting for an event to occur

Ready: waiting for a processor

Terminated: finished

*Only one process can be running on any processor core at any instant. Many may be waiting or ready.

Process Control Block

Each process is represented in the OS by a PCB, also called a task control block.

State: as mentioned earlier

Program counter: where the next instruction address is located

CPU registers: storage

CPU scheduling info: priority, scheduling queues, etc.

Memory management info: location of memory

Accounting info: CPU time used, process numbers, etc.

I/O status info: list of I/O allocated to the process
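As a rough illustration, the PCB fields above could be collected in a struct like this (field names and types are illustrative assumptions, not any real kernel's layout):

    struct pcb {
        int            state;           /* new, ready, running, waiting, terminated */
        unsigned long  program_counter; /* address of the next instruction          */
        unsigned long  registers[16];   /* saved CPU registers                      */
        int            priority;        /* CPU scheduling info                      */
        void          *page_table;      /* memory management info                   */
        unsigned long  cpu_time_used;   /* accounting info                          */
        int            open_files[16];  /* I/O status info                          */
    };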
Threads: house analogy; if a process is building a house, a thread is fixing the plumbing or installing the lighting. A thread is a smaller process within a
process. A process is composed of threads.

Process scheduling

Process scheduler: selects an available process for program execution on a core

*For a system with a single CPU core, no more than one process can run at a time. A multicore system can run multiple processes at a time. If there are more processes than cores, the excess will wait until a core is free.

Degree of multiprogramming: number of processes currently in memory

Most processes can be described as one of two types:

I/O-bound: spends more of its time doing I/O

CPU-bound: spends more of its time doing computations

Scheduling queues

Ready queue: when processes enter a system, they are put into a ready queue, awaiting CPU time. It is usually stored as a linked list.

Wait queue: processes waiting for specific events are put in a wait queue

Processes are placed in the ready queue and assigned to a CPU core. Three possibilities occur from there:

1. The process makes an I/O request, which puts it into an I/O wait queue
2. The process creates a new child process, then is placed in a wait queue while waiting for the child process to end
3. The process is removed forcibly from the core, due to either an interrupt or its time slice expiring

Once waiting turns to ready (as in cases 1 and 2), the process is placed back into the ready queue.

CPU scheduling

CPU scheduler: selects from among the processes in the ready queue and allocates a CPU core to one of them; it executes at least once every 100 ms

Swapping: a form of scheduling where a process is removed from memory to reduce the degree of multiprogramming; later, the process can be reintroduced to memory and then executed
*Called swapping because the process is swapped from memory to disk.

Context Switch

State save: saves the current state of the CPU to prepare for possible interrupts

State restore: resume operations

Context switch: the kernel saves the context of the old process in its PCB and loads the saved context of the new process scheduled to run

Process creation

Parent process: process creator

Process identifier (pid): used to identify specific processes, usually an integer

*Child processes may either inherit resources from the parent or separately request resources from the OS.

When a process creates a process:

1. Parent continues to execute concurrently with its children
2. Parent waits until some or all children have terminated

The address-space possibilities:

1. Child process is a duplicate of the parent, with the same program and data
2. Child process has a new program loaded into it

Process termination

A process is officially terminated once the final statement is executed and it asks the OS to delete it using exit(). All resources are deallocated and reclaimed by the OS.

Reasons for termination:

- Child has exceeded usage of resources allocated
- Task assigned to child is no longer required
- Parent is exiting and the OS does not allow a child to continue if its parent has been terminated

Cascading termination: a child process is terminated once its parent has been terminated; normally initiated by the OS

*A parent calls wait() to collect a terminated child's status. A zombie process is one that has terminated but whose parent has not yet called wait().
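On UNIX-like systems these creation and termination rules map onto fork(), exec(), wait(), and exit(). A minimal sketch:

    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/wait.h>  /* wait()            */
    #include <unistd.h>    /* fork(), execlp()  */

    int main(void)
    {
        pid_t pid = fork();          /* child starts as a duplicate of the parent */

        if (pid < 0) {               /* fork failed */
            perror("fork");
            exit(1);
        } else if (pid == 0) {       /* child: load a new program into it */
            execlp("ls", "ls", (char *)NULL);
            exit(1);                 /* reached only if exec fails */
        } else {                     /* parent: wait until the child terminates */
            wait(NULL);              /* also prevents a zombie */
            printf("child %d done\n", (int)pid);
        }
        return 0;
    }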


Interprocess communication

Independent process: does not share data with any other processes in the system

Cooperating process: can affect or be affected by other processes

*Process cooperation is important to provide information sharing, computation speedup, and modularity.

Cooperating processes require an IPC mechanism that allows them to exchange data. There are two main models of IPC:

1. Shared memory model: a region of memory shared by cooperating processes is established, thus they can read and write data in the shared region
   o The shared memory region resides in the address space of the process creating the shared memory segment

2. Message passing model: communication occurs by means of messages exchanged between the cooperating processes; better for distributed systems and smaller amounts of data

*No process can access another's memory. Shared memory requires that both agree to remove this restriction.

Producer-Consumer Paradigm

A producer process produces information that is consumed by a consumer process. Example: a compiler may produce assembly code that is consumed by an assembler, which in turn produces object modules to be consumed by the loader.

To allow both producer and consumer processes to run concurrently, shared memory is used. A buffer is created that will be filled up by the producer and emptied by the consumer. This buffer is located in the shared memory region. Producers and consumers must be synchronized so a consumer cannot consume something that has not been produced yet.

Types of Buffers

Unbounded: no limit is set on the size of the buffer; the consumer may have to wait for new items, but the producer can always produce items

Bounded: fixed buffer size; the consumer will wait if the buffer is empty and the producer will wait if it is full
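A minimal sketch of such a bounded buffer (illustrative names; it busy-waits instead of synchronizing properly, which Lesson 3's tools address):

    #define BUFFER_SIZE 8

    int buffer[BUFFER_SIZE];
    int in  = 0;   /* next free slot */
    int out = 0;   /* next full slot */

    void produce(int item)
    {
        while (((in + 1) % BUFFER_SIZE) == out)
            ;                          /* buffer full: producer waits */
        buffer[in] = item;
        in = (in + 1) % BUFFER_SIZE;
    }

    int consume(void)
    {
        while (in == out)
            ;                          /* buffer empty: consumer waits */
        int item = buffer[out];
        out = (out + 1) % BUFFER_SIZE;
        return item;
    }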
Allows processes to communicate even without shared memory.

A communication link must exist between two communicating processes.

Establishing links

Direct communication: each process must explicitly name the recipient or sender of the message, e.g. send(P, message) or receive(Q, message).

- A link is established automatically between every pair of processes that want to communicate; they only need each other's identity
- A link is associated with exactly 2 processes
- Exactly one link exists between each pair of processes

Indirect communication: messages are sent to mailboxes or ports, e.g. send(A, message) meaning to send mailbox A a message, receive(A, message) meaning to receive a message from mailbox A.

- A link is established between a pair of processes only if they share a mailbox
- A link may be associated with more than two processes
- Many links may exist, with each link corresponding to one mailbox

Synchronization

Blocking send: sending process is blocked until the message is received

Non-blocking send: sending process sends the message and resumes operation

Blocking receive: receiver blocks until a message is available

Non-blocking receive: receiver retrieves either a valid message or null

-----------------------------------------------
Threads
-----------------------------------------------

Thread: basic unit of CPU utilization, which comprises a thread ID, a program counter (PC), a register set, and a stack. It shares with other threads belonging to the same process its code, data, and other OS resources.

*A traditional process has a single thread of control. If a process has multiple threads of control, it can perform more than one task at a time.

*Most software applications that run on modern PCs are multithreaded.

Ex: MS Word can have a single thread for word count, another for keystroke recording, and another for spell checking in the background.

*Most OS kernels are multithreaded.

Benefits: responsiveness, resource sharing, economy, and scalability

Multicore: systems with multiple computing cores on a single processing chip

Concurrency: allows threads to run in parallel, separating each thread to a core
(given that the system is multicore); it supports more than one task by allowing all the tasks to make progress

Parallelism: can perform more than one task simultaneously

*Thus, it's possible to have concurrency without parallelism.

Types of Parallelism

1. Data Parallelism: focuses on distributing subsets of the same data across multiple computing cores and performing the same operation on each core. E.g. if you're computing the sum of the numbers 0-10, one core would add 0-5 and another would add 6-10.

2. Task Parallelism: involves distributing not data but tasks (threads) across multiple computing cores. Each thread performs a unique operation.

Multithreading Models

1. Many-to-one Model: maps many user-level threads to one kernel thread. Thread management is done by the thread library in user space, so it is efficient.

*Only one thread can access the kernel at a time.

2. One-to-one Model: maps each user thread to a kernel thread. It provides more concurrency than the many-to-one model by allowing another thread to run when a thread makes a blocking system call.

*It allows multiple threads to run in parallel on multiprocessors.

3. Many-to-many Model: multiplexes many user-level threads to a smaller or equal number of kernel threads.

*Although the many-to-many model appears to be the most flexible of the models discussed, in practice it is difficult to implement.

Thread Library: provides the programmer with an API for creating and managing threads.

There are two primary ways of implementing a thread library:

1. Provide a library entirely in user space with no kernel support.

2. Implement a kernel-level library supported directly by the operating system.

*Three main thread libraries are in use today: POSIX Pthreads, Windows, and Java.
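A minimal Pthreads sketch of creating a thread and waiting for it, which is exactly the synchronous pattern defined next:

    #include <pthread.h>
    #include <stdio.h>

    void *worker(void *arg)
    {
        printf("hello from thread %d\n", *(int *)arg);
        return NULL;
    }

    int main(void)
    {
        pthread_t tid;
        int id = 1;

        pthread_create(&tid, NULL, worker, &id); /* create the child thread  */
        pthread_join(tid, NULL);                 /* wait for it to terminate */
        return 0;
    }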
Synchronous Threading: occurs when the parent thread creates one or more children and then must wait for all of its
children to terminate before it resumes. Here, the threads created by the parent perform work concurrently, but the parent cannot continue until this work has been completed.

Asynchronous Threading: once the parent creates a child thread, the parent resumes its execution, so that the parent and child execute concurrently and independently of one another.

-----------------------------------------------
CPU Scheduling
-----------------------------------------------

By switching the CPU among processes, the operating system can make the computer more productive.

On modern operating systems it is kernel-level threads, not processes, that are in fact being scheduled by the operating system. However, the terms "process scheduling" and "thread scheduling" are often used interchangeably.

*In a system with a single CPU core, only one process can run at a time.

Multiprogramming: having some process running at all times, to maximize CPU utilization

Every time one process has to wait, another process can take over use of the CPU. On a multicore system, this concept of keeping the CPU busy is extended to all processing cores on the system.

CPU Burst: process execution begins with a CPU burst

I/O Burst: follows the CPU burst; a CPU burst follows again shortly after, then another I/O burst, and so on

CPU Scheduler: selects a process from the processes in memory that are ready to execute and allocates the CPU to that process.

CPU scheduling decisions may take place under the following four circumstances:

1. When a process switches from the running state to the waiting state (for example, as the result of an I/O request or an invocation of wait() for the termination of a child process)

2. When a process switches from the running state to the ready state (for example, when an interrupt occurs)

3. When a process switches from the waiting state to the ready state (for example, at completion of I/O)

4. When a process terminates
Nonpreemptive Scheduling (Cooperative): occurs only under circumstances 1 and 4

Preemptive Scheduling: can occur under circumstances 2 and 3

*Virtually all modern operating systems use preemptive scheduling.

Dispatcher: module that gives control of the CPU's core to the process selected by the CPU scheduler; its functions include switching context from one process to another, switching to user mode, and jumping to the proper location in the user program to resume that program

Scheduling Criteria

CPU utilization: CPU must be kept as busy as possible

Throughput: no. of processes completed per time unit

Turnaround time: time from submission of a process to its completion

Waiting time: sum of the periods spent waiting in the ready queue

Response time: time from submission of a process to its first response produced; the time it takes to start responding

Scheduling Algorithms

1) First-Come, First-Served Scheduling (FCFS)

- simplest scheduling algorithm

- the process that requests the CPU first is allocated the CPU first

- FIFO queue: when a process enters the ready queue, its PCB is linked onto the tail of the queue

- average waiting time of FCFS is long

Gantt Chart: bar chart that illustrates a particular schedule

Burst Time: how long a process needs the CPU for a single stretch of execution

Average Waiting Time: average length of time processes spend waiting before running; e.g. if P1 took 0 ms waiting, P2 took 24 ms, and P3 took 27 ms, it is computed as (P1 + P2 + ... + Pn)/n, where each Pi is the waiting time of process i
*The FCFS scheduling algorithm is nonpreemptive.

2) Shortest Job First Scheduling (SJF)

- this algorithm associates with each process the length of the process's next CPU burst

- the shortest burst time goes first

- if there's a tie, FCFS ensues

- provably optimal: it gives the minimum average waiting time for a given set of processes
- The SJF algorithm can be either preemptive or nonpreemptive. The choice arises when a new process arrives at the ready queue while a previous process is still executing.

Preemptive SJF: also called shortest-remaining-time-first; preempts the currently running process if its remaining execution time is greater than the CPU burst of the newly arrived process

Average Waiting Time (SJF): ((Start1 - AT1) + (Start2 - AT2) + ... + (Startn - ATn)) / n, where Start is the time a process first gets the CPU and AT is its arrival time
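For instance (illustrative numbers, all arriving at time 0): bursts P1 = 6, P2 = 8, P3 = 7, P4 = 3. SJF runs P4, P1, P3, P2, giving waiting times 3, 16, 9, and 0 ms:

    Average waiting time = (3 + 16 + 9 + 0) / 4 = 28 / 4 = 7 ms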

3) Round Robin Scheduling

- similar to FCFS scheduling, but preemption is added to enable the system to switch between processes

- a small unit of time, called a time quantum or time slice, is defined, generally 10-100 milliseconds

- the ready queue is treated as a circular queue. The CPU scheduler goes around the ready queue, allocating the CPU to each process for a time interval of up to 1 time quantum.

- if a burst time exceeds the time quantum, the process is stopped and placed at the end of the queue, and the next process is lined up

- although the time quantum should be large compared with the context-switch time, it should not be too large; if the time quantum is too large, RR scheduling degenerates to an FCFS policy
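A worked instance (the same burst times as the FCFS example, 24, 3, and 3 ms, with a 4 ms quantum): the schedule is P1 (0-4), P2 (4-7), P3 (7-10), then P1 again (10-30). Waiting times are P1 = 6, P2 = 4, P3 = 7:

    Average waiting time = (6 + 4 + 7) / 3 = 17 / 3 ≈ 5.66 ms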
4) Priority Scheduling

- SJF is considered a special case of the priority scheduling algorithm

- processes are assigned priority levels, with the highest-priority process being the first to run

- equal-priority processes are scheduled in FCFS order

- low numbers represent high priority
Preemptive Priority Scheduling: if a new process comes with a higher priority level, it takes over

Nonpreemptive Priority Scheduling: a new task with higher priority is placed at the head of the queue

Starvation or Indefinite Blocking: low-priority processes may never run because high-priority tasks keep taking over

Aging: gradually increasing the priority of processes that wait for a long time

Round Robin Priority Scheduling: gives processes with equal priority a time quantum that it cycles through

5) Multi-level Scheduling

- different priority levels have different queues, such that priority 0 has its own queue, priority 1 has its own queue, and so on

6) Multi-level Feedback Queue Scheduling

- processes can move from queue to queue, such that low priority can become high priority and vice versa

Multiprocessor Scheduling

Load sharing: running multiple threads in parallel becomes possible; however, scheduling issues become correspondingly more complex

Multiprocessor: traditionally referred to systems that provided multiple physical processors, where each processor contained one single-core CPU

Asymmetric Multiprocessing: only one core accesses the other cores and the system data structures, reducing the need for data sharing

Symmetric Multiprocessing (SMP): each processor is self-scheduling; the standard approach to multiprocessing

Multicore Processor: most contemporary computer hardware now places multiple computing cores on the same physical chip

Hyper-threading: multiple hardware threads are assigned to a single core, giving an illusion of having separate CPUs
-----------------------------------------------
Lesson 3: Process Coordination
-----------------------------------------------

Process Synchronization: coordination of several processes inside a system to ensure that they run in a planned and controlled manner

Race Condition: several processes access and manipulate the same data concurrently, and the outcome of the execution depends on the particular order in which the access takes place; due to concurrency, the processes' instructions that manipulate the shared data may interleave

Critical Section Problem

Critical Section: a segment of code inside a process that may be accessing and updating data that is shared with at least one other process

*When one process is executing in its critical section, no other process is allowed to execute in its critical section.

Critical-section Problem: to design a protocol that the processes can use to synchronize their activity so as to cooperatively share data.

*Each process must request permission to enter its critical section.

This request is implemented in three sections of code: entry, exit, and remainder.

A solution to the critical-section problem must satisfy the following three requirements:

1. Mutual Exclusion (Atomicity): if process Pi is executing in its critical section, then no other processes can be executing in their critical sections

2. Progress: if no process is in the critical section, a waiting process can enter
3. Bounded waiting: there's a limit on how long a process must wait to enter the critical section

Preemptive Kernels: allow a process to be preempted while it is running in kernel mode

Nonpreemptive Kernels: do not allow a process running in kernel mode to be preempted; a kernel-mode process will run until it exits kernel mode, blocks, or voluntarily yields control of the CPU.

Peterson's Solution: restricted to two processes that alternate execution between their critical sections and remainder sections

*Software-based solutions are not guaranteed to work on modern computer architectures.

HARDWARE SUPPORT FOR SYNCHRONIZATION

1) Memory Barriers

Memory Models

Strongly ordered: a memory modification on one processor is immediately visible to all other processors

Weakly ordered: modifications to memory on one processor may not be immediately visible to other processors

*Memory models vary by processor type, so kernel developers cannot make any assumptions regarding the visibility of modifications to memory on a shared-memory multiprocessor.

Memory Barriers: instructions that forcefully propagate changes in memory to all other processors; adding a memory barrier before an assignment ensures that other threads see memory updates in the intended order

*Memory barriers keep threads from seeing stale values of variables when code is reordered.

*Only used by kernel developers when writing specialized code that ensures mutual exclusion.

*Summary: In this example, flag acts as a synchronization point, preventing the reader from reading shared_var until the writer has set the flag, effectively enforcing a memory barrier.
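The code the summary refers to did not survive the export; a sketch of it, using C11 atomics for the barrier semantics:

    #include <stdatomic.h>

    int shared_var = 0;
    atomic_bool flag = false;

    void writer(void)
    {
        shared_var = 100;           /* 1: publish the data                  */
        atomic_store(&flag, true);  /* 2: barrier: flag is set after data   */
    }

    void reader(void)
    {
        while (!atomic_load(&flag)) /* wait until the writer sets the flag  */
            ;
        /* barrier semantics guarantee shared_var == 100 is visible here */
        int v = shared_var;
        (void)v;
    }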

2) Hardware Instructions

Atomic operation: a single uninterrupted operation

Below, a test_and_set() function is defined. If two of these functions run on
separate cores, they will be executed sequentially in some arbitrary order. The lock is initialized to 0.
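The code figure did not survive the export; below is the classic test_and_set() definition and lock loop the explanation refers to:

    boolean test_and_set(boolean *target)
    {
        boolean rv = *target;   /* remember the old value              */
        *target  = true;        /* set the lock in the same atomic op  */
        return rv;              /* 0 means the lock was free           */
    }

    /* usage: */
    do {
        while (test_and_set(&lock))
            ;                   /* spin: lock was already held */
        /* critical section */
        lock = false;           /* release */
        /* remainder section */
    } while (true);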
* "Test and Set" is a way for threads or
processes to check the state of a shared
variable and set it to a new value if a
certain condition is met.

The compare_and_swap() instruction


(CAS), just like the test_and_set()
instruction, operates on two words
atomically, but uses a different
mechanism that is based on swapping the
content of two words.

*If two CAS instructions are executed


simultaneously (each on a different core),
they will be executed sequentially in some
&lock and target are the same. while(0) arbitrary order.
breaks outs of the loop and proceeds to
the critical section. Here's a simplified explanation:

1. Load: Get the current value of a


Easy Version: shared variable.

Imagine a public restroom with a sign on 2. Compare: Compare the current


the door that says "In Use" or "Vacant." value with an expected value.
"Test and Set" is like a person who
3. Swap: If the current value is equal
checks the sign before entering:
to the expected value, replace it
Test: A person comes to the restroom with a new value. If not, do
door and checks the sign. If the sign says nothing.
"Vacant," they can enter, and at the same
4. Result: The operation returns a
time, they change the sign to "In Use" to
Boolean value: true if the swap was
let others know the restroom is occupied.
successful (the comparison
Set: If someone else comes to the door passed), and false otherwise.
while the sign says "In Use," they see the
sign and know that the restroom is Easy Version:
occupied. They'll have to wait until the
CAS, or Compare-And-Swap, is like a ticket machine at a bakery. Imagine you want to buy a fresh muffin. You take a number ticket from the machine, and it shows the current number being served. If your number matches the current number, you get the muffin, and the machine updates the number to the next one. But if your number doesn't match because someone else got in before you, you have to wait and try again later.

3) Atomic Variables

Provide atomic operations on basic data types such as integers and Booleans. Atomic variables can be used to ensure mutual exclusion in situations where there may be a data race on a single variable while it is being updated, as when a counter is incremented.

Atomic variables are commonly used in operating systems as well as concurrent applications, although their use is often limited to single updates of shared data such as counters and sequence generators.
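A minimal sketch of such an atomic counter, using C11 atomics:

    #include <stdatomic.h>

    atomic_int counter = 0;

    void increment(void)
    {
        /* one indivisible read-modify-write: no lock needed
         * even if many threads call this at once */
        atomic_fetch_add(&counter, 1);
    }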

*Disadvantage: The hardware-based solutions to the critical-section problem presented above are complicated as well as generally inaccessible to application programmers. Instead, operating-system designers build higher-level software tools to solve the critical-section problem.

MUTEX LOCKS

Mutex Lock: the term mutex is short for mutual exclusion; used to protect critical sections and thus prevent race conditions. That is, a process must acquire the lock before entering a critical section; it releases the lock when it exits the critical section.

*The acquire() function acquires the lock, and the release() function releases the lock.

A mutex lock has a boolean variable available whose value indicates if the lock is available or not. If the lock is available, a call to acquire() succeeds, and the lock is then considered unavailable.

Locks are either contended or uncontended. A lock is considered contended if a thread blocks while trying to acquire the lock. If a lock is available
when a thread attempts to acquire it, the lock is considered uncontended.

Contended locks can experience either high contention (a relatively large number of threads attempting to acquire the lock) or low contention (a relatively small number of threads attempting to acquire the lock).

Busy waiting: disadvantage of using mutex locks, where processes loop continuously trying to acquire a lock that is unavailable

Spinlock: the process "spins" while waiting for the lock to become available. No context switch is required when a process must wait on a lock, and a context switch may take considerable time.

*On modern multicore computing systems, spinlocks are widely used in many operating systems.

Disadvantage: busy waiting wastes CPU time that other processes could have used.
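A minimal sketch of the acquire/release pattern using POSIX pthread mutexes (the acquire()/release() names above map to pthread_mutex_lock()/pthread_mutex_unlock()):

    #include <pthread.h>

    pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
    int shared_counter = 0;

    void *worker(void *arg)
    {
        pthread_mutex_lock(&lock);    /* acquire: blocks if unavailable */
        shared_counter++;             /* critical section               */
        pthread_mutex_unlock(&lock);  /* release                        */
        return NULL;
    }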
SEMAPHORES

Semaphores: an integer variable, S, that, apart from initialization, is accessed only through two standard atomic operations: wait() and signal(). S is initialized to 1.

Process 1 (left) sends a signal synch to process 2 (right), which waits for the given signal.

*When one process modifies the semaphore value, no other process can simultaneously modify that same semaphore value.

Binary semaphore: its value can range only between 0 and 1. Thus, binary semaphores behave similarly to mutex locks.

Counting Semaphore: its value can range over an unrestricted domain; can be used to control access to a given resource consisting of a finite number of instances; can also be used to run two processes at the same time.

*When a process executes the wait() operation and finds that the semaphore value is not positive, it must wait. However, rather than engaging in busy waiting, the process can suspend itself.

Suspend operation: places a process into a waiting queue associated with the semaphore, and the state of the process is switched to the waiting state; a process that is suspended, waiting on a
MIDTERMS REVIEWER
Para sa mga tulad kong di kayang makinig sa class :D

semaphore S, should be restarted when some other process executes a signal() operation.

*The process is restarted by a wakeup() operation.

Each semaphore has an integer value and a list of processes. When a process must wait on a semaphore, it is added to the list of processes. A signal() operation removes one process from the list of waiting processes and awakens that process.
*SMP systems must provide alternative techniques such as spinlocks or compare_and_swap() to ensure that wait() and signal() are performed atomically.

Disadvantage: If a single process misbehaves, the entire semaphore, or even mutex lock, could misbehave.

MONITORS

Monitor: an ADT that includes a set of programmer-defined operations that are provided with mutual exclusion within the monitor.

*The local variables of a monitor can be accessed by only the local functions.

Condition construct: contains mechanisms that allow synchronization schemes for specific monitor variables, as monitors alone are simply not enough; it has two operations, wait() and signal()

*Only one process can be active within the monitor at a time.
Disadvantage: resource allocation and accessing, as we are essentially creating our own data type.

-----------------------------------------------
Liveness
-----------------------------------------------

Liveness: refers to a set of properties that a system must satisfy to ensure that processes make progress during their execution life cycle. A process waiting indefinitely is an example of a "liveness failure."

An example of a liveness failure is an infinite loop; a busy wait loop is one possible source of a liveness failure.

Below are situations that can lead to liveness failures.

1. Deadlock: when every process in the set is waiting for an event that can be caused only by another process in the set; in short, when a process is waiting for something that's not going to happen. The "events" with which we are mainly concerned here are the acquisition and release of resources such as mutex locks and semaphores.

2. Priority Inversion: when a higher-priority process needs to read or modify kernel data that are currently being accessed by a lower-priority process, or a chain of lower-priority processes. It can occur only in systems with more than two priorities.

-----------------------------------------------
SYNCHRONIZATION PROBLEMS
-----------------------------------------------

Bounded Buffer Problem: commonly used to illustrate the power of synchronization primitives; refer to the producer-consumer problem

Readers-Writers Problem: if a writer and a reader want to access a database simultaneously, what will be shown?

First Readers-Writers Problem: requires that no reader be kept waiting unless a writer has already obtained permission to use the shared object. In other words, no reader should wait for other readers to finish simply because a writer is waiting.

Second Readers-Writers Problem: requires that, once a writer is ready, that writer performs its write as soon as possible. In other words, if a writer is waiting to access the object, no new readers may start reading.
Dining Philosophers

Dining Philosophers Problem: a classic synchronization problem, neither because of its practical importance nor because computer scientists dislike philosophers, but because it is an example of a large class of concurrency-control problems. It is a simple representation of the need to allocate several resources among several processes in a deadlock-free and starvation-free manner.

Semaphore Solution: One simple solution is to represent each chopstick with a semaphore. A philosopher tries to grab a chopstick by executing a wait() operation on that semaphore. She releases her chopsticks by executing the signal() operation on the appropriate semaphores.

This could, however, result in a deadlock.

Conditions:

 Allow at most four philosophers to be sitting simultaneously at the table.

 Allow a philosopher to pick up her chopsticks only if both chopsticks are available (to do this, she must pick them up in a critical section).

 Use an asymmetric solution; that is, an odd-numbered philosopher picks up first her left chopstick and then her right chopstick, whereas an even-numbered philosopher picks up her right chopstick and then her left chopstick.

Monitor Solution: This solution imposes the restriction that a philosopher may pick up her chopsticks only if both of them are available. To code this solution, we need to distinguish among three states in which we may find a philosopher. It must follow this sequence.

Synchronization in Windows: mutexes, spinlocks, and semaphores are utilized.

Synchronization in Linux: The Linux kernel is preemptive. Linux uses atomic integers for synchronization. It also uses spinlocks, mutexes, and semaphores.

Synchronization in Java: known for monitors, reentrant locks, semaphores, and condition variables.
-----------------------------------------------
DEADLOCKS
-----------------------------------------------

Deadlock: A thread requests resources; if the resources are not available at that time, the thread enters a waiting state. Sometimes, a waiting thread can never again change state, because the resources it has requested are held by other waiting threads.

The various synchronization tools discussed earlier, such as mutex locks and semaphores, are also system resources; and on contemporary computer systems, they are the most common sources of deadlock.

*A thread must request a resource before using it and must release the resource after using it. A thread may request as many resources as it requires to carry out its designated task.

Sequence for utilizing a resource:

1. Request. The thread requests the resource. If the request cannot be granted immediately (for example, if a mutex lock is currently held by another thread), then the requesting thread must wait until it can acquire the resource. System call.

2. Use. The thread can operate on the resource (for example, if the resource is a mutex lock, the thread can access its critical section).

3. Release. The thread releases the resource. System call.

Livelock: occurs when a thread continuously attempts an action that fails, much like two people repeatedly bumping into each other in a hallway; another example of a liveness failure

*Livelock typically occurs when threads retry failing operations at the same time. It thus can generally be avoided by having each thread retry the failing operation at random times.

Conditions of Deadlock

1) Mutual exclusion: only one thread at a time can use a resource; if another thread requests a used resource, it must be delayed

2) Hold and wait: a thread holding a resource which needs another must hold that resource and wait for the other to become available

3) No preemption: a resource can be released only voluntarily by the thread holding it, after that thread has completed its task

4) Circular Wait: T0 is waiting for T1, T1 is waiting for T2, ..., Tn is waiting for T0

Resource Allocation Graph

This graph consists of a set of vertices V and a set of edges E.
Requested Edge: A directed edge from thread Ti to resource type Rj is denoted by Ti → Rj

Assignment Edge: A directed edge from resource type Rj to thread Ti is denoted by Rj → Ti; it signifies that an instance of resource type Rj has been allocated to thread Ti.

*Threads are drawn as circles and resources as rectangles. When a request can be granted, the request edge is instantaneously transformed into an assignment edge.

*If the graph contains no cycles, then no thread in the system is deadlocked. If the graph does contain a cycle, then a deadlock may exist.

*If each resource type has several instances, then a cycle does not necessarily imply that a deadlock has occurred. In this case, a cycle in the graph is a necessary but not a sufficient condition for the existence of deadlock.

(The figures showed one graph in deadlock, with two cycles, and one graph with a cycle but no deadlock, since T4 can let go of R2 and break the cycle.)

In summary, if a resource-allocation graph does not have a cycle, then the system is not in a deadlocked state. If there is a cycle, then the system may or may not be in a deadlocked state.

Methods for Handling Deadlocks

Three main methods: ignore the problem and pretend deadlocks never happen, ensure deadlock never occurs, or allow deadlocks but detect and fix them.

*Most operating systems ignore deadlocks, which forces app developers or kernel programmers to fix them. Databases use the third method.

Deadlock prevention: provides a set of methods to ensure that at least one of the necessary conditions cannot hold

Deadlock avoidance: requires that the operating system be given additional information in advance concerning which resources a thread will request and use
during its lifetime. With this, the OS can decide if a thread must wait.

*Ignoring the possibility of deadlocks is cheaper than the other approaches. Since in many systems deadlocks occur infrequently (say, once per month), the extra expense of the other methods may not seem worthwhile.

DEADLOCK PREVENTION

By removing at least one of the conditions above, deadlock cannot occur.

Mutual Exclusion: Providing shareable resources ensures no mutual exclusion is needed. Examples of these are read-only files, which multiple programs can use at the same time.

Hold and Wait: we must guarantee that, whenever a thread requests a resource, it does not hold any other resources. One protocol says that all resources needed by a thread must be stated before execution, which is impractical due to dynamic allocation. Another says that a thread can only request a resource when it holds none.

No Preemption: if a thread is waiting on a resource, the resources it already holds are released and the thread is placed into waiting. It will only restart once it has both the resources it held and the ones it needs.

*Can be used on resources whose state can be saved and restored easily. It cannot be applied to semaphores and mutexes, which are the main causes of deadlocks.

*The solutions above are generally impractical, with the next one typically considered the best.

Circular Wait: resources are assigned an ordering (priority levels) to ensure no deadlock occurs. Resources are released in the reverse order of their acquisition, such that if P1 acquired A, then B, then C, it should release C, then B, and finally A.

DEADLOCK AVOIDANCE

The simplest and most useful model requires that each thread declare the maximum number of resources of each type that it may need.

Allocation State: The resource-allocation state is defined by the number of available and allocated resources and the maximum demands of the threads.

Deadlock Avoidance Algorithms

Safe State: if the system can allocate resources to each thread (up to its maximum) in some order and still avoid a deadlock. More formally, a system is in a safe state only if there exists a safe sequence. If no sequence exists, then the state is unsafe.

An unsafe state may lead to a deadlock. A safe state is not a deadlocked state. A deadlocked state is an unsafe state.
*The idea is simply to ensure that the system will always remain in a safe state. Initially, the system is in a safe state. Whenever a thread requests a resource that is currently available, the system must decide whether the resource can be allocated immediately or the thread must wait.

Resource Allocation Graph Algorithm: similar to the graph above, but a new concept is introduced, claim edges, which are arrows that show interest in a resource in advance (represented by a dashed line). A graph with no cycle is in the safe state; once a cycle forms, the graph is in the unsafe state.

Banker's Algorithm: The resource-allocation-graph algorithm is not applicable to a resource allocation system with multiple instances of each resource type. The Banker's algorithm handles that case, though it is unfortunately not as efficient as the graph algorithm.

The name was chosen because the algorithm could be used in a banking system to ensure that the bank never allocated its available cash in such a way that it could no longer satisfy the needs of all its customers.

When a new thread enters the system, it must declare the maximum number of instances of each resource type that it may need. This number may not exceed the total number of resources in the system. When a user requests a set of resources, the system must determine whether the allocation of these resources will leave the system in a safe state. If it will, the resources are allocated; otherwise, the thread must wait until some other thread releases enough resources.

Available: A vector of length m indicates the number of available resources of each type. If Available[j] equals k, then k instances of resource type Rj are available.
Max: An n × m matrix defines the maximum demand of each thread. If Max[i][j] equals k, then thread Ti may request at most k instances of resource type Rj.

Allocation: An n × m matrix defines the number of resources of each type currently allocated to each thread. If Allocation[i][j] equals k, then thread Ti is currently allocated k instances of resource type Rj.

Need: An n × m matrix indicates the remaining resource need of each thread. If Need[i][j] equals k, then thread Ti may need k more instances of resource type Rj to complete its task. Note that Need[i][j] equals Max[i][j] − Allocation[i][j]. It specifies the additional resources that thread Ti may still request to complete its task.

Example: Resources A, B, and C have 10, 5, and 7 instances respectively, along with this table:
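(The table itself did not survive the export; the standard instance consistent with the walkthrough below and with the final safe sequence <1,3,4,0,2> is:)

           Allocation    Max        Available
           A  B  C       A  B  C    A  B  C
      T0   0  1  0       7  5  3    3  3  2
      T1   2  0  0       3  2  2
      T2   3  0  2       9  0  2
      T3   2  1  1       2  2  2
      T4   0  0  2       4  3  3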

Since Need = Max − Allocation:
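(Need table, likewise reconstructed from the values above:)

           Need
           A  B  C
      T0   7  4  3
      T1   1  2  2
      T2   6  0  0
      T3   0  1  1
      T4   4  3  1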

Compare Available with Need and see which thread can proceed with the available resources, i.e. Available ≥ Need:

T0 cannot go yet, as Available < Need.
T1 can go, as Available ≥ Need.

T1 shall go first. Available will then be updated to Available = Available + Allocation, since the thread is done and thus all of its allocated resources are returned.

The process repeats until all threads have run.
Final safe sequence: <1,3,4,0,2>

-----------------------------------------------
Lesson 4: Memory Management
-----------------------------------------------

-----------------------------------------------
Main Memory
-----------------------------------------------

Memory: consists of a large array of bytes, each with its own address.

*Main memory and the registers built into each processing core are the only general-purpose storage that the CPU can access directly.

*Data (especially external data, outside the CPU) must be brought to the CPU for it to be processed.

Stalling: registers built into each CPU core are generally accessible within one cycle of the CPU clock, but a main memory access may take multiple cycles; the processor stalls while it waits for the data it needs.

Cache: solution for the aforementioned problem; a fast memory between the CPU and main memory, typically on the CPU chip.

*Separate per-process memory space protects the processes from each other and is fundamental to having multiple processes loaded in memory for concurrent execution.

Base register: holds the smallest legal physical memory address

Limit register: specifies the size of the range

For example, if the base register holds 300040 and the limit register is 120900, then the program can legally access all addresses from 300040 through 420939 (inclusive).

*If a user process accesses a memory address that is not theirs, an error occurs.

*Context switches involve saving the current process's state from the registers into main memory and loading the next process's state from main memory into the registers.
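The check the hardware performs, as a short sketch using the example's numbers:

    unsigned base  = 300040;
    unsigned limit = 120900;

    int is_legal(unsigned addr)
    {
        /* legal iff base <= addr < base + limit,
         * i.e. 300040 <= addr <= 420939 here */
        return addr >= base && addr < base + limit;
    }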
Address Binding

Addresses in the source program are generally symbolic (such as the variable count). A compiler typically binds these symbolic addresses to relocatable addresses (such as "14 bytes from the beginning of this module").

The linker (or loader) in turn binds the relocatable addresses to absolute addresses such as 740169. Each binding is a mapping from one address space to another. In short, there is a translation from user to hardware.

The binding process may occur at any of these stages:

Compile time: if the memory address is known at compile time, absolute code (code ready to be run by the CPU) can be generated. So, if the location is already known, the compiler will just comply.

Load time: if it is not known at compile time where the process will reside in memory, then the compiler must generate relocatable code. In this case, final binding is delayed until load time.

Execution time: when a process can be moved from one memory segment to another during its execution, the binding of memory addresses must be delayed until runtime.

Logical address: address generated by the CPU

Physical address: address seen by the memory unit and loaded into the memory-address register of the memory

Addressing under execution-time binding differs from the others such that instead of a logical address, a virtual address is used. Logical and virtual address are similar terms.

Logical address space: set of all logical addresses generated by a program

Physical address space: set of all physical addresses corresponding to the logical addresses

Memory management unit (MMU): a hardware device that maps virtual addresses to physical addresses during runtime

Relocation register: another term for base register

*The user program never accesses the real physical addresses. It may create a pointer, store it in memory, manipulate it, and compare it. The user program deals with logical addresses.

*The memory-mapping hardware converts logical addresses to physical ones.

Dynamic Loading: a routine is not loaded until it is called or needed.

Dynamically linked libraries (DLLs): system libraries that are linked to user programs when the programs are run. These libraries can be shared among multiple processes so that only one instance of the DLL is in main memory. They are also called shared libraries.
Static linking: the linker combines all necessary program modules into one single executable program, so there is no runtime dependency.

Dynamic linking: linking to libraries is postponed until execution time.

Contiguous Memory Allocation: each process is contained in a single section of memory that is contiguous to the section containing the next process.

*Memory is usually divided into two partitions: one for the OS and one for the user processes.

Memory Protection: relocation and limit registers are used to check that the correct data is being loaded.

Memory Allocation

Variable partition scheme: the operating system keeps a table indicating which parts of memory are available and which are occupied; processes are assigned to variably sized partitions in memory.

Hole: a portion of memory that is unused or unallocated

If the system does not have enough space for a process, the process can be rejected with an error message. Alternatively, it can be placed in a waiting queue.

When a process terminates, it releases its block of memory, which is then placed back in the set of holes. If the new hole is adjacent to other holes, these adjacent holes are merged to form one larger hole.

*The memory blocks available comprise a set of holes of various sizes scattered throughout memory.

Dynamic memory allocation problem: concerns how to satisfy a request of size n from a list of free holes. The solutions are as follows:

First fit: allocate the first hole big enough. Searching can start either at the beginning of the set of holes or at the location where the previous first-fit search ended. Searching stops as soon as a free hole large enough is found.

Best fit: allocate the smallest hole that is big enough. The entire list must be searched. This produces the smallest leftover hole.

Worst fit: allocate the largest hole. The entire list is searched, unless it is sorted
by size. Produces the largest leftover hole, which leaves room for extra memory.

External Fragmentation: exists when there is enough total memory space to satisfy a request but the available spaces are not contiguous: storage is fragmented into a large number of small holes. The best-fit and first-fit strategies suffer from this.

50-percent rule: one third of memory becomes unusable, as given N allocated blocks, another 0.5N blocks are lost to fragmentation.

Compaction: a solution to external fragmentation; shuffling memory contents so as to place all free memory together in one large block. This is not always possible, especially if relocation is static and is done at assembly or load time.

Internal Fragmentation: extra space within a partition

Paging: most common memory management technique; a scheme that permits a process' physical address space to be noncontiguous. This method removes the need for compaction and eliminates external fragmentation.

Frames: fixed-size physical memory blocks

Pages: logical memory blocks of the same size as frames

*When a process is executed, its pages are loaded into any available memory frames from their source (a file system or a backing store). The backing store is divided into fixed-size blocks that are the same size as memory frames or clusters of multiple frames.

Every address generated by the CPU is divided into two parts:

Page number: used as an index into a per-process page table

Page table: contains the base address of each frame in physical memory

Page offset: exact location in the frame being referenced.
The page number is translated through the page table, but the offset stays the same throughout.

Steps taken by the MMU to translate logical addresses to physical addresses:

1. Extract the page number p and use it as an index into the page table.

2. Extract the corresponding frame number f from the page table.

3. Replace the page number p in the logical address with the frame number f.

Here, p is an index into the page table and d is the displacement within the page.

*The page size, like the frame size, is defined by the hardware. The page size is a power of 2, varying between 4 KB and 1 GB per page. If the size of the logical address space is 2^m and the page size is 2^n bytes, then the high-order m-n bits of a logical address designate the page number, and the n low-order bits designate the page offset.
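*As a sketch of that bit split in C (the 12-bit page size and table name are illustrative assumptions):

#include <stdint.h>

#define PAGE_BITS 12                      /* n = 12, so 4 KB pages */
#define PAGE_SIZE (1u << PAGE_BITS)

/* Split a logical address into page number p and offset d,
 * then form the physical address from the frame number f.  */
uint32_t translate(uint32_t logical, const uint32_t *page_table) {
    uint32_t p = logical >> PAGE_BITS;        /* high m-n bits */
    uint32_t d = logical & (PAGE_SIZE - 1);   /* low n bits    */
    uint32_t f = page_table[p];               /* frame number  */
    return f * PAGE_SIZE + d;                 /* f * 2^n + d   */
}

With the 4-byte pages of the example below, the same computation reduces to 4f + d.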
So, a logical address generally looks like:

Logical address 0 contains the data "a" and is located on page 0, offset 0. Page 0 points to frame number 5. Each slot in physical memory holds 4 bytes (the page size in this example is 4 bytes), so physical address = 4f + d. Thus, logical address 0 is equal to physical address 4(5) + 0 = 20.

Logical address 6 contains the data "g" and is located on page 1, offset 2. Page 1 points to frame number 6. Thus, logical address 6 is equal to physical address 4(6) + 2 = 26.

*Today, page sizes are typically 4-8 KB. Windows 10 supports 4 KB to 2 MB pages.

*Frequently, on a 32-bit CPU, each page table entry is 4 bytes long, but that size can vary as well. A 32-bit entry can point to one of 2^32 physical page frames. If the frame size is 4 KB (2^12 bytes), then a system with 4-byte entries can address 2^44 bytes (or 16 TB) of physical memory.
*When a process arrives in the system to be executed, its size, expressed in pages, is examined. Each page needs one frame.

Frame table: contains information such as which frames are available, how many total frames there are, and so on. It has one entry for each physical page frame, indicating whether the frame is free or allocated and, if it is allocated, to which page of which process.

*An important aspect is the clear separation between the programmer's view of memory (one single contiguous space) and the actual physical memory (user memory is in fact scattered).

*A user process is unable to access memory it does not own or that lies outside of its page table, since a page table includes only those pages that the process owns.

Valid-invalid bit: kept within each page table entry and used to protect each frame by restricting access to a page. When the bit is valid, the associated page is a legal page in the process's logical address space. When it is invalid, the page is not in the process's logical address space. In the example, addresses for pages 6 and 7 cannot be generated by the process, as they could belong to another process.

Shared pages: reentrant code is shared among processes in pages to save space.

Reentrant code: code that can be shared. It is non-self-modifying code, which never changes during execution. Since this code never changes, instead of having a copy of the library for each process, we can share a single copy of the code.
The OS must ensure that read-only access to the shared code is enforced.

Structuring the Page Table

1) Hierarchical Paging: dividing the page table into smaller pieces.

Two-level paging algorithm: the page table is itself paged, so the page number is split into two parts: one indexing the outer page table and a displacement within the inner page table it points to. Also called a forward-mapped page table.
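*A sketch of the two-level address split in C (the 10/10/12 field widths are an illustrative assumption for a 32-bit address with 4 KB pages):

#include <stdint.h>

/* Illustrative 32-bit split: | p1:10 | p2:10 | d:12 | */
void split_two_level(uint32_t vaddr,
                     uint32_t *p1, uint32_t *p2, uint32_t *d) {
    *d  = vaddr & 0xFFF;          /* low 12 bits: page offset        */
    *p2 = (vaddr >> 12) & 0x3FF;  /* next 10 bits: inner-table index */
    *p1 = (vaddr >> 22) & 0x3FF;  /* top 10 bits: outer-table index  */
}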
For 64-bit architectures, even a three-level scheme would not be enough; you would have to keep paging the outer page table, adding more levels, and every extra level costs another memory access per translation. This is too much work, so hierarchical paging on 64-bit architectures is considered inappropriate.

2) Hashed Page Tables: used in handling address spaces larger than 32 bits, with the hash value being the virtual page number.

Each entry in the hash table contains a linked list of elements that hash to the same location. Each element contains three fields: the virtual page number, the mapped page frame value, and a pointer to the next element in the linked list.

*The virtual page number (VPN) in the virtual address is hashed into the hash table. The VPN is compared with the first field of the first element in the linked list. If a match is found, the corresponding page frame is used to form the desired physical address. If no match is found, the following entries in the linked list are searched for a matching virtual page number.

Collision: different keys may produce the same index.
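*A minimal hashed-page-table lookup in C (bucket count, names, and the modulo hash are illustrative assumptions):

#include <stdint.h>
#include <stddef.h>

#define TABLE_SIZE 1024          /* illustrative number of buckets */

struct hpt_entry {
    uint64_t vpn;                /* field 1: virtual page number   */
    uint64_t frame;              /* field 2: mapped page frame     */
    struct hpt_entry *next;      /* field 3: next element in chain */
};

struct hpt_entry *buckets[TABLE_SIZE];

/* Walk the chain for this VPN; return the frame, or -1 if absent. */
int64_t hpt_lookup(uint64_t vpn) {
    struct hpt_entry *e = buckets[vpn % TABLE_SIZE]; /* hash the VPN */
    for (; e != NULL; e = e->next)
        if (e->vpn == vpn)       /* compare with field 1 */
            return (int64_t)e->frame;
    return -1;                   /* not mapped: page fault */
}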
3) Inverted Page Tables: have one entry for each real page (frame) of memory. Each entry consists of the virtual address of the page stored in that real memory location, with information about the process that owns the page. Only a single page table is in the system, with each entry containing an address-space identifier.

Storing the address-space identifier ensures that a logical page for a particular process is mapped to the corresponding physical page frame.

Each inverted page-table entry is a pair <process-id, page-number>, where the process-id assumes the role of the address-space identifier. Instead of a frame number f taken from a per-process table, the index i of the entry that matches the pair is used along with the offset d to form the physical address. If no match is found, then an illegal access has been attempted.
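*A sketch of that inverted-table search in C (sizes and names are illustrative; a free frame could be marked with pid = -1):

#include <stdint.h>

struct ipt_entry { int pid; int page; };  /* <process-id, page-number> */

#define NFRAMES 1024                      /* illustrative memory size  */
struct ipt_entry ipt[NFRAMES];            /* one entry per real frame  */

/* Search for <pid, p>; the matching index i IS the frame number,
 * combined with offset d. Returns -1 on an illegal access.        */
int64_t ipt_translate(int pid, int p, uint32_t d, uint32_t page_size) {
    for (int i = 0; i < NFRAMES; i++)
        if (ipt[i].pid == pid && ipt[i].page == p)
            return (int64_t)i * page_size + d;
    return -1;                            /* no match: illegal access */
}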
Swapping: a process, or a portion of a process, can be swapped temporarily out of memory to a backing store and then brought back into memory for continued execution.

Swapping makes it possible for the total physical address space of all processes to exceed the real physical memory of the system, thus increasing the degree of multiprogramming in a system.

Standard swapping involves moving entire processes between main memory and a backing store.

Paging: in this context, refers to swapping with pages rather than whole processes.

Page out: moving a page from memory to the backing store.

Page in: moving a page from the backing store back into memory.

In mobile systems, swapping is generally frowned upon due to the limited storage provided. Instead, Apple's iOS asks an application to voluntarily relinquish its allocated memory. Modified data is never removed.

*Swapping pages is more efficient than swapping entire processes.

Segmentation: similar to paging, segmentation divides memory into variable-sized parts, but it does so with two key details: the base and the limit. Within a process, each segment is defined by a starting point (base) and a size (limit). This approach offers more flexibility in memory management compared to fixed-size pages, allowing different parts of a program or data to be allocated to segments of different sizes.
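*A minimal segment-translation sketch in C (structure and names are illustrative): an offset within the segment is checked against the limit before being added to the base.

#include <stdint.h>

struct segment { uint32_t base, limit; }; /* per-segment base and size */

/* Translate <segment s, offset d> to a physical address.
 * Returns -1 (a trap, in a real system) if d exceeds the limit. */
int64_t seg_translate(const struct segment *table, int s, uint32_t d) {
    if (d >= table[s].limit)
        return -1;                        /* addressing error: trap */
    return (int64_t)table[s].base + d;
}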
-----------------------------------------------------------

Virtual Memory

-----------------------------------------------------------

Virtual memory: involves the separation of logical memory as perceived by developers from physical memory. Virtual memory makes the task of programming much easier, because the programmer no longer needs to worry about the amount of physical memory available and can concentrate instead on the problem to be solved.

Virtual address space: refers to the logical (or virtual) view of how a process is stored in memory. Typically, this view is that a process begins at a certain logical address (say, address 0) and exists in contiguous memory.

Sparse address space: a virtual address space that has holes in it.

*Page sharing is possible through virtual memory, such as for libraries used by several processes.

Demand paging: loading pages only as needed; when you run a program, you do not need all of its pages at once. Pages are loaded only when they are demanded during program execution; pages that are never accessed are thus never loaded into physical memory.

Valid: the associated page is both legal and in memory.

Invalid: the page either is not in the logical address space, or it is legal but currently resides in secondary storage rather than memory.

*The page-table entry for a page that is not currently in memory is simply marked invalid.
Page fault: occurs when a process accesses a page marked invalid, i.e., the OS has not yet loaded the desired page into memory. Handling it proceeds as follows:

1. We check an internal table (usually kept with the process control block) for this process to determine whether the reference was a valid or an invalid memory access.

2. If the reference was invalid, we terminate the process. If it was valid but we have not yet brought in that page, we now page it in.

3. We find a free frame (by taking one from the free-frame list, for example).

4. We schedule a secondary storage operation to read the desired page into the newly allocated frame.

5. When the storage read is complete, we modify the internal table kept with the process and the page table to indicate that the page is now in memory.

6. We restart the instruction that was interrupted by the trap. The process can now access the page as though it had always been in memory.

Page replacement: if no frame is free, we find one that is not currently being used and free it. This occurs during a page fault when no free frame is available to hold the page coming in from logical memory.

1. Find the location of the desired page on secondary storage.

2. Find a free frame:
   a. If there is a free frame, use it.
   b. If there is no free frame, use a page-replacement algorithm to select a victim frame.
   c. Write the victim frame to secondary storage (if necessary); change the page and frame tables accordingly.

3. Read the desired page into the newly freed frame; change the page and frame tables.

4. Continue the process from where the page fault occurred.

Modify bit: states whether a page has been modified; an unmodified victim does not need to be written back to secondary storage.

*Page replacement is basic to demand paging. It completes the separation between logical memory and physical memory.
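*A self-contained toy of demand paging in C (all names and sizes are illustrative; eviction is omitted, so it covers only the free-frame case):

#include <stdio.h>

#define NPAGES 8
#define NFRAMES 4
#define INVALID -1

int page_table[NPAGES];    /* frame number, or INVALID        */
int next_free = 0;         /* next unused frame (no eviction) */

int access_page(int page) {            /* returns the frame   */
    if (page_table[page] != INVALID)
        return page_table[page];       /* valid: no fault     */
    printf("page fault on page %d\n", page);
    int f = next_free++;               /* step 3: free frame  */
    /* steps 4-5: read the page from the backing store into
     * frame f, then mark the page-table entry valid.         */
    page_table[page] = f;
    return f;                          /* step 6: retry access */
}

int main(void) {
    for (int i = 0; i < NPAGES; i++) page_table[i] = INVALID;
    int refs[] = {0, 1, 0, 3};         /* only 3 distinct pages */
    for (int i = 0; i < 4; i++)
        printf("page %d -> frame %d\n", refs[i], access_page(refs[i]));
    return 0;
}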
FIFO PAGE REPLACEMENT: associates with each page the time when that page was brought into memory; the oldest page is replaced. The first page that goes in is the first that goes out.

Belady's anomaly: for some page-replacement algorithms, the page-fault rate may increase as the number of allocated frames increases.

OPTIMAL PAGE REPLACEMENT: replace the page that will not be used for the longest period of time.

In the example, 7 is not used until the 18th reference in the string, so 7 is replaced. Later, 1 is replaced because it is needed much later than 2 and 0.

*The optimal page-replacement algorithm is difficult to implement, because it requires future knowledge of the reference string.

LRU PAGE REPLACEMENT: the least recently used (LRU) algorithm associates with each page the time of that page's last use. When a page must be replaced, LRU chooses the page that has not been used for the longest period of time.

*LRU is the optimal page-replacement algorithm looking backward in time, rather than forward.

LRU is considered a stack algorithm, one that never exhibits Belady's anomaly.

LRU APPROXIMATION PAGE REPLACEMENT: since not many systems provide hardware support for true LRU, the following are considered the closest approximations to it.

Reference bit: set whenever a page is referenced, whether by a read or a write to any byte in the page.

Additional Reference Bits Algorithm: reference bits are recorded at regular intervals (usually 8 bits per page, such that 00000000 means the page has not been used for eight time periods). The page with the lowest number is the LRU page, and it can be replaced.

Second Chance Page-Replacement Algorithm: a FIFO replacement algorithm. When a page has been selected, however, we inspect its reference bit. If the value is 0, we proceed to replace this page; but if the reference bit is set to 1, we give the page a second chance (its bit is cleared) and move on to select the next FIFO page.
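*A self-contained FIFO page-fault counter in C (not from the lecture); run on the classic reference string, it reproduces Belady's anomaly: 3 frames give 9 faults, 4 frames give 10.

#include <stdio.h>

/* Count page faults for a reference string under FIFO replacement. */
int fifo_faults(const int *ref, int n, int nframes) {
    int frames[16];                    /* assumes nframes <= 16 */
    int next = 0, faults = 0;
    for (int i = 0; i < nframes; i++) frames[i] = -1;   /* empty */
    for (int i = 0; i < n; i++) {
        int hit = 0;
        for (int j = 0; j < nframes; j++)
            if (frames[j] == ref[i]) { hit = 1; break; }
        if (!hit) {                    /* fault: evict oldest page */
            frames[next] = ref[i];
            next = (next + 1) % nframes;
            faults++;
        }
    }
    return faults;
}

int main(void) {
    int ref[] = {1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5};
    int n = sizeof ref / sizeof ref[0];
    printf("3 frames: %d faults\n", fifo_faults(ref, n, 3)); /* 9  */
    printf("4 frames: %d faults\n", fifo_faults(ref, n, 4)); /* 10 */
    return 0;
}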
Allocation of Frames

How do we allocate the fixed amount of free memory among the various processes? If we have 93 free frames and two processes, how many frames does each process get?

Minimum Number of Frames: defined by the computer architecture.

ALLOCATION ALGORITHMS:

Equal allocation: split m frames among n processes to give everyone an equal m/n frames. If 93 frames are shared among 5 processes, each will get 18, with the extra 3 used as a free-frame buffer pool.

Proportional allocation: the memory allocated to each process depends on its size. With proportional allocation, we would split 62 frames between two processes, one of 10 pages and one of 127 pages, by allocating 4 frames and 57 frames, respectively, since

a_i = (s_i / S) * m

where s_i is the size of process p_i, S is the total size of all processes, and m is the number of available frames: 10/137 * 62 ≈ 4 and 127/137 * 62 ≈ 57.

*This is like a weighted computation.
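*The same computation in C (sizes from the example above; integer division truncates, matching the 4 and 57 result):

#include <stdio.h>

/* Proportional allocation: a_i = (s_i / S) * m. */
int main(void) {
    int sizes[] = {10, 127};           /* process sizes in pages */
    int m = 62, S = 0;                 /* m = available frames   */
    for (int i = 0; i < 2; i++) S += sizes[i];
    for (int i = 0; i < 2; i++)
        printf("process %d gets %d frames\n", i, sizes[i] * m / S);
    return 0;                          /* prints 4 and 57        */
}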
Non-Uniform Memory Access (NUMA): on these systems, a given CPU can access some sections of main memory faster than it can access others; each CPU can access its own local memory faster than the memory attached to other CPUs.

Thrashing: a process is thrashing if it is spending more time paging than executing. As you might expect, thrashing results in severe performance problems.

Causes of thrashing: as the degree of multiprogramming increases, CPU utilization also increases, although more slowly, until a maximum is reached. Pushing multiprogramming past that point makes processes steal frames from one another, and utilization drops sharply.

Local replacement algorithm: a process selects a replacement victim only from its own frames, so stealing frames from other processes is not permitted.

Locality model: describes how many frames a process is actually using; execution moves from locality to locality (a set of pages actively used together), and pages are brought in depending on the usage of these pages.