
Operating Systems

I/O Buffering
Buffering

Buffering is a technique that smooths out peaks in I/O demand. However, no amount of buffering will allow an I/O device to keep pace with a process indefinitely when the average demand of the process is greater than what the I/O device can service (Stallings, 2018). To improve write performance, data may be merged in a buffer before being sent to the next level; this increases the I/O size and the efficiency of the operation (Gregg, 2021).
The following are the major reasons why buffering is performed (Silberschatz, Galvin, & Gagne, 2018):

1. To cope with a speed mismatch between the producer and the consumer of a data stream;
2. To provide adaptation between devices that have different data-transfer sizes; and
3. To support copy semantics for application I/O.

Copy semantics guarantees that the version of the data written to the disk is the same as the version at the time of the application system call, independent of any subsequent changes in the application's buffer.
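The copy-semantics guarantee can be sketched in a few lines of Python. The `kernel_write` function and the in-memory `device` list below are hypothetical stand-ins for a real kernel write path; the point is only that the data is copied at the moment of the "system call":

```python
# Sketch of copy semantics (illustrative names, not a real kernel API).
# The data is snapshotted into a system buffer at call time, so later
# changes to the application's buffer do not affect what is written.

def kernel_write(device_log, app_buffer):
    snapshot = bytes(app_buffer)   # copy at call time: this is copy semantics
    device_log.append(snapshot)    # later flushed to the device unchanged

device = []
buf = bytearray(b"version-1")
kernel_write(device, buf)
buf[8:9] = b"2"                    # application mutates its buffer afterwards
assert device[0] == b"version-1"   # the written version is the call-time version
```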
Single Buffer

This is the simplest type of buffering support an operating system can provide. When a user process issues an I/O request, the OS assigns a buffer in the system portion of main memory to the operation.
• For block-oriented devices: input transfers are made to the system buffer. Upon completing the transfer, the process moves the block into user space and immediately requests another block. This is termed reading ahead (anticipated input), done in anticipation that the block will eventually be used.

• For stream-oriented devices: single buffering can be used in a line-at-a-time operation, which is appropriate for scroll-mode terminals, or in a byte-at-a-time operation, which is used on forms-mode terminals where each keystroke is significant.
Double Buffer

This involves the assignment of two (2) system buffers to an operation. A process transfers data to (or from) one (1) buffer while the OS empties (or fills) the other. This technique is also known as double buffering or buffer swapping. The details of the actual process may vary depending on the manner of data handling (block-oriented or stream-oriented) of the device.
Circular Buffer

Using a double buffer may be inadequate if the process performs rapid bursts of I/O. In this case, using more than two (2) buffers can alleviate the inadequacy. Such a collection of buffers is called a circular buffer, where each individual buffer is treated as one (1) unit.
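As a sketch of the idea (the class and its names are illustrative, not an OS implementation): a fixed pool of unit buffers is shared between a producer (the device side) and a consumer (the process side), with each unit cycling between the free and full lists.

```python
from collections import deque

# Minimal circular (ring) buffer sketch: a fixed number of unit buffers
# shared between a producer (the device) and a consumer (the process).
class CircularBuffer:
    def __init__(self, units):
        self.free = deque(range(units))   # indices of empty unit buffers
        self.full = deque()               # indices holding data, oldest first
        self.slots = [None] * units

    def produce(self, data):              # device fills the next free unit
        if not self.free:
            return False                  # every unit is full: producer waits
        i = self.free.popleft()
        self.slots[i] = data
        self.full.append(i)
        return True

    def consume(self):                    # process empties the oldest full unit
        if not self.full:
            return None                   # every unit is empty: consumer waits
        i = self.full.popleft()
        data, self.slots[i] = self.slots[i], None
        self.free.append(i)
        return data
```

With `units=2` this degenerates to double buffering, which is why the circular buffer is the natural generalization when two buffers are not enough.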

Disk Scheduling and Cache

When the disk drive is operating, the disk is rotating at a constant speed. In order to read or write, the head must be positioned at the desired track and at the beginning of the desired sector on the track. Track selection is performed by moving the head in a moving-head system or by electronically selecting one head in a fixed-head system. The disk performance parameters below are involved in actual disk I/O operations (Stallings, 2018):
• Seek time – This is the time required to move the disk arm to the required track. It is composed of two (2) components: the startup time and the time needed to traverse the tracks (a non-linear function). A typical average seek time on a contemporary hard disk is under 10 milliseconds.

• Rotational delay – This is the time required for the addressed area of the disk to rotate into a position where it is accessible by the read/write head. This is also known as rotational latency.

• Transfer time – This depends on the rotation speed of the disk and is equal to the number of bytes to be transferred (b) divided by the product of the rotation speed (r, in revolutions per second) and the number of bytes on a track (N). Formula: T = b/(rN)
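A quick worked example of T = b/(rN), using illustrative drive figures (the 7,500 rpm speed, track size, and transfer size below are assumptions, not values from the text):

```python
# Worked example of the transfer time formula T = b / (r * N).
r = 7500 / 60          # rotation speed in revolutions per second (= 125)
N = 500_000            # bytes per track (assumed)
b = 1_000_000          # bytes to be transferred (assumed)
T = b / (r * N)        # transfer time in seconds
print(f"{T * 1000:.1f} ms")   # prints "16.0 ms"
```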
Disk Scheduling Policies

The most common reason for differences in performance can be traced to the seek time. If sector access requests involve random track selection, then the performance of the disk I/O system will be poor. Hence, the average time spent moving the disk arm to the required track must be reduced. For this purpose, various disk scheduling policies, sometimes referred to as disk scheduling algorithms, have been developed.

The process selection of the disk scheduling policies below is based on the attributes of the queue or the requestor (Stallings, 2018):

First-In First-Out (FIFO)
This is the simplest form of scheduling policy, which processes items from the queue in sequential order. Since every request is acknowledged, this technique generally encompasses fairness between processes. If there are only a few processes that require disk access and many of the requests are for clustered file sectors, then good performance can be expected. However, if there are many processes competing for the disk, this technique will perform like random scheduling, resulting in poor performance. Thus, selecting a more sophisticated scheduling policy is highly suggested.

Last-In First-Out (LIFO)
In this scheduling policy, giving the device to the most recent user should result in little or no arm movement for moving through a sequential file. This locality can actually improve throughput and may reduce queue lengths. However, if the disk is kept busy because of a large workload, there is a high possibility of starvation.

Priority
In transaction-processing systems, control of the scheduling is outside the control of the disk management software, which is not intended to optimize disk utilization but to meet other objectives within the OS. In most cases, batch and interactive jobs are given higher priority than jobs that require longer computations. This allows short jobs to be flushed out of the system, resulting in good response time. However, longer jobs may have to wait excessively long times. This type of scheduling policy tends to perform poorly for database systems.
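FIFO's behavior is easy to quantify as total arm movement. A minimal sketch, where the starting track and the request queue are assumed example figures:

```python
# Total head movement under FIFO: requests are served strictly in
# arrival order, regardless of where the arm currently is.
def fifo_head_movement(start, requests):
    pos, moved = start, 0
    for track in requests:
        moved += abs(track - pos)   # arm travels directly to the next request
        pos = track
    return moved

# Example: head starts at track 100; requests are listed in arrival order.
assert fifo_head_movement(100, [55, 58, 39, 18, 90, 160, 150, 38, 184]) == 498
```

The back-and-forth jumps (18 up to 90, 160 back down to 38) are exactly the random-scheduling behavior the text warns about under heavy competing load.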
The process selection of the following disk scheduling policies is in accordance with the requested item:

Shortest Service Time First (SSTF)
This scheduling policy selects the disk I/O request that requires the least movement of the disk arm from its current position; hence, it selects the requests that hold minimal seek time. On the other hand, this does not guarantee a minimal average seek time. However, this policy still provides better performance than FIFO.

SCAN
This is also known as the elevator algorithm because it operates much like an elevator. With this policy, the arm is required to move in one (1) direction only, satisfying all outstanding requests en route, until it reaches the last track in that direction or until there are no more requests in that direction. Then, the service direction is reversed and the scan proceeds in the opposite direction, picking up all requests in order.

Circular SCAN (C-SCAN)
This policy restricts scanning to one (1) direction only. When the last track has been visited in one direction, the arm is returned to the opposite end of the disk and the scan begins again in the same direction. This reduces the maximum delay experienced by new requests.
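SSTF and SCAN can be compared on the same workload. A sketch, using the same assumed starting track and request queue as in the FIFO example above (until-last-request SCAN, no new arrivals during the sweep):

```python
# SSTF: always serve the pending request nearest the current arm position.
def sstf(start, requests):
    pending, pos, moved = list(requests), start, 0
    while pending:
        nxt = min(pending, key=lambda t: abs(t - pos))  # least arm movement
        moved += abs(nxt - pos)
        pos = nxt
        pending.remove(nxt)
    return moved

# SCAN: sweep in one direction serving requests en route, then reverse.
def scan(start, requests, direction=+1):
    up = sorted(t for t in requests if t >= start)
    down = sorted((t for t in requests if t < start), reverse=True)
    first, second = (up, down) if direction > 0 else (down, up)
    pos, moved = start, 0
    for t in first + second:        # one sweep, then the reverse sweep
        moved += abs(t - pos)
        pos = t
    return moved

reqs = [55, 58, 39, 18, 90, 160, 150, 38, 184]
assert sstf(100, reqs) == 248       # vs. 498 for FIFO on the same queue
assert scan(100, reqs) == 250
```

Note that SSTF's 248 beats FIFO yet is still not guaranteed minimal on every workload, matching the caveat in the text.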
N-step-SCAN
This scheduling policy segments the disk request queue into sub-queues of length N. Sub-queues are processed one (1) at a time, using SCAN. While a queue is being processed, new requests are added to the other queue. If fewer than N requests are available at the end of a scan, then all of them are processed in the next scan.

FSCAN
This policy utilizes two (2) sub-queues. Initially, when the scan begins, all requests are placed in one of the sub-queues, while the other sub-queue is empty. During the scan, all new requests are placed into the empty sub-queue. Thus, the service of new requests is deferred until all of the initial or old requests have been processed.
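The two-queue mechanism of FSCAN can be sketched as follows (class and method names are illustrative; the sweep itself is reduced to a single sorted pass):

```python
from collections import deque

# FSCAN sketch: requests present when a scan starts are frozen into the
# active queue; anything arriving during the scan waits in the other queue.
class FScan:
    def __init__(self):
        self.active, self.waiting = deque(), deque()

    def request(self, track):
        self.waiting.append(track)    # new requests always join the idle queue

    def run_scan(self):
        # Freeze the snapshot: swap queues, leaving an empty one for arrivals.
        self.active, self.waiting = self.waiting, deque()
        served = sorted(self.active)  # one SCAN pass over the frozen queue
        self.active.clear()
        return served
```

Because arrivals never join the queue being swept, no request can keep the arm parked on its own track indefinitely, which is how FSCAN avoids the starvation risk noted for SSTF-like behavior.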
Redundant Array of Independent Disks (RAID)

RAID is a standardized scheme for multiple-disk database design that is composed of seven (7) levels, from zero (0) to six (6). With the utilization of multiple disks, a wide variety of ways in which data can be organized is made possible. In addition, redundancy can be added to improve the reliability of data. Note that the levels do not imply a hierarchical relationship but encompass different design architectures that share three (3) common characteristics:

1. RAID is a set of physical disk drives viewed by the OS as a single logical drive.
2. Data are distributed across the physical drives of an array in a scheme known as striping.
3. Redundant disk capacity is used to store parity information, which guarantees data recoverability in case of a disk failure.
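Characteristic 2, striping, amounts to a simple address mapping. A sketch of round-robin striping as in RAID 0 (the function name and the 4-disk array are assumptions for illustration):

```python
# Striping sketch: map a logical block number to (disk, block-on-disk)
# across an array of n_disks, round-robin as in RAID 0.
def stripe(logical_block, n_disks):
    disk = logical_block % n_disks     # strips are dealt out round-robin
    offset = logical_block // n_disks  # position of the strip on that disk
    return disk, offset

# With 4 disks, consecutive logical blocks land on consecutive disks,
# so a large sequential transfer is spread over all spindles in parallel.
assert [stripe(b, 4) for b in range(6)] == [
    (0, 0), (1, 0), (2, 0), (3, 0), (0, 1), (1, 1)
]
```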
Caches

Caches are used by operating systems to improve file system read performance, while their storage is often used as buffers to improve write performance. Memory allocation performance can also be improved through caching, since the results of commonly performed operations may be stored in a local cache for future use (Gregg, 2021). The term cache memory is used to indicate a memory that is smaller and faster to access than the main memory. This reduces the average memory access time by exploiting the principle of locality. The same principle can be applied to disk memory: a disk cache is a buffer in the main memory for disk sectors (Stallings, 2018).
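A disk cache exploiting locality can be sketched with a least-recently-used policy (one common choice among several; the class name and the `read_sector` callback are illustrative stand-ins for the actual disk access):

```python
from collections import OrderedDict

# Sketch of a disk cache: an LRU-managed buffer of disk sectors kept in
# main memory, consulted before going to the (much slower) disk.
class DiskCache:
    def __init__(self, capacity, read_sector):
        self.capacity = capacity
        self.read_sector = read_sector    # stand-in for the real disk access
        self.sectors = OrderedDict()      # sector number -> data, LRU order
        self.hits = self.misses = 0

    def read(self, sector):
        if sector in self.sectors:
            self.hits += 1
            self.sectors.move_to_end(sector)      # hit: most recently used
        else:
            self.misses += 1                      # miss: fetch from disk
            self.sectors[sector] = self.read_sector(sector)
            if len(self.sectors) > self.capacity:
                self.sectors.popitem(last=False)  # evict least recently used
        return self.sectors[sector]
```

Under temporal locality (the same sectors re-read soon after), most reads become memory hits, which is exactly the average-access-time reduction described above.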