Department of Computer Science
Federal University of Petroleum Resources, Effurun, Delta State

CSC312: OPERATING SYSTEM II
Compiled by Orhionkpaiyo B. C. [December 2019], updated January 2023

1. COURSE GOAL
The goal of this course is to extend the students' fundamental knowledge of operating systems. It intends to acquaint students with the mechanisms involved in process and memory management in contemporary operating systems.

2. COURSE OBJECTIVES
At the end of this course, students should be able to:
i. Describe the various ways of organizing memory hardware.
ii. Explain various techniques of allocating memory to processes.
iii. Discuss in detail how paging works in contemporary computer systems.
iv. Discuss both software and hardware solutions of the critical-section problem.
v. Examine some classical process-synchronization problems.
vi. Examine the tools that are used to solve process-synchronization problems.
vii. Describe various CPU-scheduling algorithms.
viii. Discuss evaluation criteria for selecting a CPU-scheduling algorithm for a particular system.
ix. Examine the scheduling algorithms of some operating systems.
x. Discuss the placement algorithms of some memory-management techniques.
xi. Explain the causes and types
of fragmentation and their possible solutions.
xii. Differentiate between a logical address and a physical address.
xiii. Explain the term race condition and state its causes.
xiv. State the critical-section problem and illustrate the different methods for solving it.
xv. Enumerate the conditions that must hold to achieve a good solution to the critical-section problem.
xvi. Differentiate between semaphores and mutex locks and illustrate how each solves the critical-section problem.
xvii. State the adverse effect of busy waiting in multiprogramming systems.
xviii. Explain the cause of deadlock and illustrate the concept using two processes.
xix. List the classical problems that are commonly used to illustrate the power of synchronization primitives.
xx. Illustrate classical synchronization problems using appropriate synchronization primitives.

3. COURSE DESCRIPTION
Concurrency: switching and interrupts; concurrent execution; states and state diagrams; structures; dispatching and context switching; the mutual exclusion problem and some solutions; deadlock; models and mechanisms (semaphores, monitors, etc.); producer-consumer problems and synchronization; multiprocessor issues; scheduling and dispatching. Memory management: overlays, swapping and partitions; paging and segmentation; placement and replacement policies; working sets and thrashing; caching.

4. COURSE SCHEDULE
WEEK | TOPICS | SUB-TOPICS
1 | Course orientation and the syllabus | Review of course syllabus
2-4 | Memory management | Multiple partitioning; swapping; paging; segmentation; virtual memory
| CPU scheduling | Types of CPU scheduling; criteria for selecting a CPU-scheduling algorithm; scheduling algorithms:
First-Come, First-Served scheduling; Shortest-Job-First scheduling; priority scheduling; Round-Robin scheduling; multilevel queue scheduling; multilevel feedback queue scheduling
| Mutual exclusion & synchronization | Principles of concurrency; mutual exclusion: hardware support; semaphores; monitors; message passing; reader/writer problem
11-13 | Deadlock & starvation | Principles of deadlock; deadlock prevention; deadlock avoidance; dining philosophers problem
14 | Test & revision |

5. CONTACTS

6. GRADING
Attendance - 5%
Assignments/Laboratory - 15%
Test - 12%
Exam - 10%

7. POLICIES
(i) Attendance - It is expected that every student will be in class for lectures. Attendance records will be kept and used to award marks for each student. Only students with attendance of 75% or above will be awarded the 5 marks. Signing for friends is an act of DISHONESTY and it is HIGHLY PROHIBITED.
(ii) Make-up test - There shall be no make-up test. The date for the test shall be announced to the class at least one week before the test.
(iii) Academic integrity - Any form of academic dishonesty, such as copying during a test/exam or making copies of other students' work, is prohibited.
(iv) Submission of assignments/lab work - Late submission of an assignment or lab work will be graded as Score - x, where x is the number of days of late submission.

8. TEXTBOOKS
1. Operating System Concepts, Abraham Silberschatz, Peter B. Galvin, Greg Gagne, Ninth Edition, 2012, Wiley.
2. Operating Systems: Internals & Design Principles.

1. MEMORY MANAGEMENT
Main memory is central to the operation of a modern computer system. The CPU loads instructions only from memory, so any program must be stored there in order to run. Main memory is a repository of quickly accessible data shared by the CPU and I/O devices.
The operating system is responsible for the following activities in connection with memory management:
- keeping track of which parts of memory are currently being used and who is using them;
- deciding which processes (or parts of processes) and data to move into and out of memory;
- allocating and deallocating memory space as needed.

Thus, memory management is the task carried out by the operating system and hardware to accommodate multiple processes in main memory. If only a few processes can be kept in main memory, then much of the time all processes will be waiting for I/O and the CPU will be idle.
- The purpose of memory management is to ensure fair, secure, orderly, and efficient use of memory.
- To improve both the utilization of the CPU and the speed of the computer's response to its users, general-purpose computers must keep several programs in memory, creating a need for memory management.
- Thus, memory needs to be allocated efficiently in order to keep as many processes in memory as possible.
- Many different memory-management schemes are used. These schemes reflect various approaches, and the effectiveness of any given algorithm depends on the situation.
- In most schemes, the kernel occupies some fixed portion of main memory and the rest is shared by multiple processes.

General-purpose computers run most of their programs from rewritable memory, called main memory (also called random-access memory, or RAM). Programs and data do not reside in main memory permanently; this arrangement is usually not possible for the following two reasons:
1. Main memory is usually too small to store all needed programs and data permanently.
2. Main memory is a volatile storage device that loses its contents when power is turned off or otherwise lost.
Thus, most computer systems provide secondary storage as an extension of main memory, thereby providing a virtual memory system.
- Virtual memory allows the computer to use part of a permanent storage device (such as a hard disk) as extra memory.

1.1 Memory Management Requirements
It is helpful to keep in mind the requirements that memory management is intended to satisfy. These requirements include the following:
i. Relocation
ii. Protection
iii. Sharing
iv. Logical organization
v. Physical organization

+ Relocation
The available memory has to be shared among the various processes present in a multiprogramming system, so it is impossible to determine in advance which other programs will reside in main memory during a given program's execution. If the active processes can be swapped in and out of main memory, the OS has a bigger and better pool of processes that are ready to execute. Whenever a program is swapped out to disk, it will not always be possible to swap it back into its previous memory location, because that location might still be occupied by another process; the process may have to be relocated to a different section of memory. After a program is loaded into main memory, the OS and the processor must therefore be able to translate its logical addresses into physical addresses.
- The address generated by the CPU is said to be a logical address. An address generated by the MMU is called a physical address.
+ Protection
Every process has to be protected against unwanted interference when other processes try to write into its memory, whether accidentally or intentionally. Processes should not be able to reference memory locations belonging to another process without permission, and this must be checked at run time. The memory-protection requirement has to be satisfied by the processor (hardware) rather than the operating system (software), since the OS can hardly control a process while that process occupies the processor. The operating system can protect memory with the help of the base and limit registers.

+ Sharing
Some protection mechanisms allow various processes to access the same section of main memory. This lets all the processes access the very same copy of a program instead of each having its own separate copy, which has an advantage. For instance, numerous processes may utilize the very same system file; it is natural to load a single copy of the file into main memory and let it be shared by those processes.
- It is memory management's task to allow shared areas of memory with controlled access without compromising protection. Various mechanisms are used here to support sharing on top of the relocation machinery.

+ Logical Organization
Main memory is organized linearly, as a one-dimensional address space comprising a sequence of words or bytes. Programs, however, are written in modules. Modules can be written and compiled independently.
- Different degrees of protection can be given to modules (read-only, execute-only).
- Modules can be shared among processes.

+ Physical Organization
Computer memory is structured in two levels, referred to as main memory and secondary memory. Secondary memory is mainly used for long-term data storage, while main memory holds the programs currently in use.
- Moving information between these two levels of memory is a major concern of memory management (the OS).
- It is highly inefficient to leave this responsibility to the application programmer.
- Memory must be fairly allocated for high processor utilization and for a systematic flow of information between main and secondary memory.

1.2 Basics of Memory Management
In order to understand the concept of memory management, we will discuss some issues that are pertinent to managing memory:
- basic hardware,
- binding symbolic memory addresses to actual physical addresses,
- the distinction between logical and physical addresses, and
- dynamic linking and shared libraries.

1.2.1 Basic Hardware
> Memory and Registers:
Main memory and the registers built into the processor itself are the only general-purpose storage that the CPU can access directly. All forms of memory provide an array of bytes, and each byte has its own address. Interaction is achieved through a sequence of load or store instructions to specific memory addresses.
- The load instruction moves a byte or word from main memory to an internal register within the CPU, whereas
- the store instruction moves the content of a register to main memory.
There are machine instructions that take memory addresses as arguments, but none that take disk addresses.
- Therefore, any instructions in execution, and any data being used by the instructions, must be in one of these direct-access storage devices.
- If the data are not in memory, they must be moved there before the CPU can operate on them.

> Cache:
A cache sits between main memory and the CPU registers. Registers that are built into the CPU are generally accessible within one cycle of the CPU clock, whereas completing a main-memory access may take many cycles of the CPU clock.
- In such cases, the processor normally needs to stall, since it does not have the data required to complete the instruction that it is executing.
- This situation is intolerable because of the frequency of memory accesses.
- The remedy is to add fast memory (a cache) between the CPU and main memory, typically on the CPU chip for fast access.

> Base and Limit Registers:
A pair of base and limit registers defines the logical address space.
- The base register holds the smallest legal physical memory address.
- The limit register specifies the size of the range.
For example, if the base register holds 300040 and the limit register holds 120900, then the program can legally access all addresses from 300040 through 420939 (inclusive).
The CPU must check every memory access generated in user mode to be sure it is between base and base + limit for that user, thus protecting the memory space.
- Any attempt by a program executing in user mode to access operating-system memory or other users' memory results in a trap to the operating system, which treats the attempt as a fatal error (see diagram below).
The base and limit registers can be loaded only by the operating system, which uses a special privileged instruction.
- Since privileged instructions can be executed only in kernel mode, and since only the operating system executes in kernel mode, only the operating system can load the base and limit registers.
- This scheme prevents a user program from (accidentally or deliberately) modifying the code or data structures of either the operating system or other users.

1.2.2 Address Binding
Usually, a program resides on a disk as a binary executable file. To be executed, the program must be brought into memory and placed within a process.
- The processes on the disk that are waiting to be brought into memory for execution form the input queue (or job queue).
The normal single-tasking procedure is to select one of the processes in the input queue and to load that process into memory.
- As the process is executed, it accesses instructions and data from memory.
- Eventually, the process terminates, and its memory space is declared available.

Addresses in the source program are generally symbolic (such as the variable count).
- A compiler typically binds these symbolic addresses to relocatable addresses.
- The linkage editor or loader in turn binds the relocatable addresses to absolute addresses.
Each binding is a mapping from one address space to another. Classically, the binding of instructions and data to memory addresses can be done at any step along the way:
- Compile time: If the memory location is known a priori, absolute code can be generated; the code must be recompiled if the starting location changes.
  - For example, if you know that a user process will reside starting at location R, then the generated compiler code will start at that location and extend up from there. If, at some later time, the starting location changes, then it will be necessary to recompile this code.
- Load time: The compiler must generate relocatable code if the memory location is not known at compile time.
- Execution time: Binding is delayed until run time if the process can be moved during its execution from one memory segment to another. Hardware support is needed for address maps (e.g., base and limit registers).

1.2.3 Logical Versus Physical Address Space
An address generated by the CPU is commonly referred to as a logical address (also referred to as a virtual address), whereas an address seen by the memory unit (that is, the one loaded into the memory-address register of the memory) is commonly referred to as a physical address.
- Logical and physical addresses are the same in the compile-time and load-time address-binding schemes; logical (virtual) and physical addresses differ in the execution-time address-binding scheme.
- The set of all logical addresses generated by a program is a logical address space.
- The set of all physical addresses corresponding to these logical addresses is
a physical address space.
A program issues addresses in a logical/virtual address space, which must be translated to the physical address space.
- Think of the program as having a contiguous logical/virtual address space that starts at 0,
- and a contiguous physical address space that starts somewhere else.
Thus, in the execution-time address-binding scheme, the logical and physical address spaces differ.
- The run-time mapping from virtual to physical addresses is done by a hardware device in the CPU called the memory-management unit (MMU).
The MMU has two special registers that are accessed by the CPU's control unit:
- data to be sent to main memory or retrieved from memory is stored in the Memory Data Register (MDR);
- the desired logical memory address is stored in the Memory Address Register (MAR).
In the MMU scheme, the value in the relocation register (i.e., the base register) is added to every address generated by a user process at the time it is sent to memory.
- For example, if the base is at 14000, then an attempt by the user to address location 0 is dynamically relocated to location 14000; an access to location 346 is mapped to location 14346.
- We now have two different types of addresses:
  - logical addresses (in the range 0 to max), and
  - physical addresses (in the range R + 0 to R + max for a base value R).

1.2.4 Dynamic Loading
With dynamic loading, a routine is not loaded until it is called.
- All routines are kept on disk in a relocatable load format.
- The main program is loaded into memory and is executed.
When a routine needs to call another routine, the calling routine first checks to see whether the other routine has been loaded.
- If it has not, the relocatable linking loader is called to load the desired routine into memory and to update the program's address tables to reflect this change.
Then control is passed to the newly loaded routine.
Dynamic loading does not require special support from the operating system.
- It is the responsibility of the users to design their programs to take advantage of such a method.
- Operating systems may help the programmer, however, by providing library routines to implement dynamic loading.
The advantage of dynamic loading is that a routine is loaded only when it is needed. This method is particularly useful when large amounts of code are needed to handle infrequently occurring cases, such as error routines.
- In this case, although the total program size may be large, the portion that is used (and hence loaded) may be much smaller.

1.2.5 Dynamic Linking and Shared Libraries
Dynamically linked libraries are system libraries that are linked to user programs when the programs are run.
- A dynamic link library (DLL) is a collection of small programs that can be loaded when needed by larger programs and used at the same time.
- Such a small program lets the larger program communicate with a specific device, such as a printer or scanner.
- It is often packaged as a DLL program, usually referred to as a DLL file.
One of the main tasks of a linker is to make the code of library functions (e.g., printf(), scanf(), sqrt(), etc.) available to the user program.
Some operating systems support only static linking, in which system libraries are treated like any other object module and are combined by the loader into the binary program image (e.g., .lib files in Windows).
Dynamic linking, in contrast, is similar to dynamic loading; here, though, linking, rather than loading, is postponed until execution time.
- This feature is usually used with system libraries, such as language subroutine libraries.
- With dynamic linking, a stub is included in the image for each library-routine reference.
- The stub is a small piece of code that indicates how to locate the appropriate memory-resident library routine or how to load the library if the routine is not already present.
Unlike dynamic loading, dynamic linking and shared libraries generally require help from the operating system.
- If the processes in memory are protected from one another, then the operating system is the only entity that can check whether the needed routine is in another process's memory space, or that can allow multiple processes to access the same memory addresses.

1.3 Methods of Memory Management
Memory-management methods include:
i. Swapping
ii. Partitioning
iii. Segmentation
iv. Paging, and
v. Virtual memory

1.3.1 Swapping
A process needs to be in memory to be executed. What if there is not enough memory for all processes? A process can be swapped temporarily out of memory to a backing store, and then brought back into memory for continued execution.
- The backing store is a fast disk large enough to accommodate copies of all memory images for all users; it must provide direct access to these memory images.
- Swapping makes it possible for the total physical address space of all processes to exceed the real physical memory of the system, thus increasing the degree of multiprogramming in a system.
The system maintains a ready queue of all processes whose memory images are on the backing store or in memory and are ready to run. Whenever the CPU scheduler decides to execute a process, it calls the dispatcher.
- The dispatcher checks to see whether the next process in the queue is in memory.
- If it is not, and if there is no free memory region, the dispatcher swaps out a process currently in memory and swaps in the desired process.
- It then reloads registers and transfers control to the selected process.
The context-switch time in such a swapping system is fairly high.
- To get an idea of the context-switch time, let us assume that the user process is 100 MB in size and the backing store is a standard hard disk with a transfer rate of 50 MB per second.
- The actual transfer of the 100-MB process to or from main memory then takes 100 MB / 50 MB per second = 2 seconds, i.e. a swap time of 2000 milliseconds (2000 ms).
- Since we must swap both out and in, the total swap time is about 4000 milliseconds.
The total transfer time is directly proportional to the amount of memory swapped.
- If we have a computer system with 4 GB of main memory and a resident operating system taking 1 GB, the maximum size of the user process is 3 GB.
To reduce the cost of memory swap time:
- swap only the memory that is really being used, which requires knowing how much memory is actually in use;
- provide system calls that inform the OS of memory use, e.g. request_memory() and
release_memory().
Standard swapping is not used in modern operating systems, but a modified version is common:
- swap only when free memory is extremely low.

> Swapping on Mobile Systems
Although most operating systems for PCs and servers support some modified version of swapping, mobile systems typically do not support swapping in any form.
- Mobile devices generally use flash memory rather than more spacious hard disks as their persistent storage.
- The resulting space constraint is one reason why mobile operating-system designers avoid swapping.
- Other reasons include:
  - the limited number of writes that flash memory can tolerate before it becomes unreliable, and
  - the poor throughput between main memory and flash memory in these devices.
Instead, other methods are used to free memory when it is low:
- iOS asks apps to voluntarily relinquish allocated memory.
  - Read-only data are thrown out and reloaded from flash if needed.
  - Failure to free memory can result in termination.
- Android terminates apps if free memory is low, but it first writes the application state to flash for a fast restart.

1.3.2 Partitioning
> Single Partitioning (Contiguous Allocation)
Main memory must accommodate both the operating system and the various user processes. The memory is usually divided into two partitions:
- one for the resident operating system, and
- one for the user processes.
The operating system may be placed in high memory or in low memory.
- The position of the interrupt vector usually affects this decision.
© Itis desirable to have several user processes residing in the memory at the same time In contiguous memory allocation, each process is contained in a single contiguous section of memory, © Relocation registers are used to protect user processes from each other, and from changing operating-system code and data * Base register contains value of smallest physical address (for ‘example, relocation = 100040). * Limit register contains range of logical addresses - each logical address must be less than the limit register (for example, limit = 74600). : The MMU maps the logical address dynamically by adding the value in the relocation register. This mapped address is sent to memory as shown below Hardware support for relocation and limit registers } Muttipte Partitioning In a uni-ptogramming system, main memocy is divided into two parts: one part for the operating system (resident monitor, kernel) and one part for the program currently being executed, In a multisprogramming system, the “user™ part of memory must be farther subdivided to accommodate multiple processes. oO ‘ocpledbyOrsenipipe BG [December 2019) Updned acary 093, Page 21 syne (65312: Opera «sag These include: | fioning- ‘There ae variations of achieving ths mune PBN i. Fixed partitioning, ii, Variable partitioning ‘& Fixed Partitioning Physical memory is broken up ito fixed pation © pltitions may have different sizes. But PA . P © Each partition may contain exactly one PY snber of partitions 4 from the input queue andj terminates, the partir, sioning never changes smultiprogramming is bound by them © When a partition is free, a process is selestes loaded into the free partition. When the Process becomes available for another process: © This method was originally » used by the IBM 08/360 ope! 
system (called MFT) but is no longer in use.

There are variations of fixed partitioning:
i. Equal-size partitions
- Any process whose size is less than or equal to the partition size can be loaded into an available partition.
- The operating system can swap out a process if all partitions are full and no process is in the Ready or Running state.
Disadvantages of equal-size partitioning:
- A program may be too big to fit in a partition.
- Main memory utilization is inefficient: any program, regardless of size, occupies an entire partition.
- Internal fragmentation: space is wasted because the block of data loaded is smaller than the partition.

ii. Unequal-size partitioning
Assign each process the smallest partition into which it will fit.
Advantages:
- Processes are always assigned in such a way as to minimize the wasted memory within a partition (internal fragmentation).
- Relatively simple, requiring minimal OS software and overhead.
Disadvantages:
- The number of active processes is limited by the number of partitions, which is specified at system generation time.
- Small jobs cannot utilize partition space efficiently; in most cases this is an inefficient technique.
[Figure: memory assignment for fixed partitioning]

> Placement Algorithms for Fixed Partitions
- Equal-size partitions: because all partitions are of equal size, it does not matter which one is used.
- Unequal-size partitions: assign each process to the smallest partition within which it will fit, with a queue for each partition size, so that processes are assigned in a way that minimizes wasted memory within a partition.

> Variable/Dynamic Partitioning
Physical memory is broken up into partitions dynamically.
- Partitions are of variable length and number.
- A process is allocated exactly as much memory as it requires.
- When a process arrives, it is allocated memory from a hole large enough to accommodate it.
- A hole is a block of available memory; holes of various sizes are scattered throughout memory.
- The operating system keeps a table indicating which parts of memory are available and which are occupied.
- This technique was used by IBM's mainframe operating system.

Disadvantages of dynamic partitioning:
- Memory becomes more and more fragmented with time.
- Memory utilization declines.
[Figure: effect of dynamic partitioning]

> Placement Algorithms for Dynamic Partitioning
First fit: Allocate the first hole that is big enough.
- Searching can start either at the beginning of the set of holes or at the location where the previous first-fit search ended.
- We can stop searching as soon as we find a free hole that is large enough.
- The idea is to minimize the time spent analyzing the available memory space.
Best fit: Allocate the smallest hole that is big enough.
- One must search the entire list, unless the list is ordered by size.
- This strategy produces the smallest leftover hole.
- The idea is to minimize the wastage of free memory space.
Worst fit: Allocate the largest hole.
- Again, one must search the entire list, unless it is sorted by size.
- This strategy produces the largest leftover hole, which may be more useful than the smaller leftover hole from a best-fit approach.
- The idea is to increase the possibility that another process can use the leftover space.
The example below illustrates the allocation of a 12-KB block to a process using the three algorithms.

> Fragmentation
The allocation and de-allocation of memory creates a condition called fragmentation.
This means that there are lots of small fragments (or holes) of free memory that cannot be used by any other process, which reduces the amount of usable memory.
- External fragmentation: gaps between allocated contiguous memory regions; enough total memory space exists to satisfy a request, but it is not contiguous.
  - This is the dynamic-partitioning problem.
- Internal fragmentation: allocated memory may be slightly larger than the requested memory; this size difference is memory internal to a partition that is not being used.
  - This is the fixed-partitioning problem.

> How to solve the fragmentation problem
Reduce external fragmentation by compaction:
- Shuffle the memory contents to place all free memory together in one large block:
  - swap a program out,
  - re-load it adjacent to another, and
  - adjust its base register.
- Compaction is possible only if relocation is dynamic and is done at execution time.

Exercise
Given free memory partitions of 100K, 500K, 200K, 300K, and 600K (in order):
i. How would each of the first-fit, best-fit, and worst-fit algorithms place processes of 212K, 417K, 112K, and 426K (in order)?
ii. Which algorithm makes the most efficient use of memory?

1.3.3 Paging
Paging is a technique of memory management in which memory is divided into blocks of the same size. The basic method for implementing paging involves breaking:
- physical memory into fixed-sized blocks called frames, and
- logical memory into blocks of the same size called pages.
When a process is to be executed, its corresponding pages are loaded into any available memory frames.
- A frame is a fixed-length block of main memory.
- The operating system keeps track of all free frames.
- The operating system needs n free frames to run a program of size n pages.
The hardware support for paging is illustrated in the diagram below. Every address generated by the CPU is divided into two parts:
 o Page number (p) - the page number is used as an index into a page table, which contains the base address of each page in physical memory.
 o Page offset (d) - the page offset is combined with the base address to define the physical memory address.

[Figure: Paging hardware]

The paging model of memory is shown in the following diagram.

[Figure: Paging model of logical and physical memory]

The page size (like the frame size) is defined by the hardware. The size of a page is a power of 2, varying between 512 bytes and 1 GB per page, depending on the computer architecture.
 o The selection of a power of 2 as a page size makes the translation of a logical address into a page number and page offset particularly easy.
 o If the size of the logical address space is 2^m, and the page size is 2^n bytes, then the high-order m-n bits of a logical address designate the page number, and the n low-order bits designate the page offset.

Thus, the logical address is as follows:

   | page number p (m-n bits) | page offset d (n bits) |

where p is an index into the page table and d is the displacement within the page.

As a concrete example, consider the memory in the diagram below. Here, in the logical address, n = 2 and m = 4. Using a page size of 4 bytes and a physical memory of 32 bytes (8 frames), we show how the programmer's view of memory can be mapped into physical memory.
 o Logical address 0 is page 0, offset 0. Indexing into the page table, we find that page 0 is in frame 5. Thus, logical address 0 maps to physical address 20 [= (5 x 4) + 0].
 o Logical address 3 (page 0, offset 3) maps to physical address 23 [= (5 x 4) + 3].
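The split-and-map step of this translation can be sketched in a few lines of Python. The page table below uses the mapping of the concrete 32-byte example (pages 0-3 in frames 5, 6, 1, and 2; 4-byte pages); the function name is my own:

```python
PAGE_SIZE = 4                 # bytes per page; a power of two
page_table = [5, 6, 1, 2]     # page number p -> frame number

def translate(logical):
    """Split a logical address into (p, d), then map p through the page table."""
    p = logical // PAGE_SIZE      # high-order bits: page number
    d = logical % PAGE_SIZE       # low-order bits: page offset
    return page_table[p] * PAGE_SIZE + d

print(translate(0))    # page 0 -> frame 5 -> physical address 20
print(translate(13))   # page 3, offset 1 -> frame 2 -> physical address 9
```

Because the page size is a power of two, the division and modulo reduce to a shift and a mask in hardware, which is exactly why power-of-two page sizes are chosen.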
 o Logical address 4 is page 1, offset 0; according to the page table, page 1 is mapped to frame 6. Thus, logical address 4 maps to physical address 24 [= (6 x 4) + 0].
 o Logical address 13 (page 3, offset 1) maps to physical address 9 [= (2 x 4) + 1].

[Figure: Paging example for a 32-byte memory with 4-byte pages]

Advantages of Paging
 o Easy to allocate: keep a free list of available frames and grab the first one.
 o Easy to swap, since everything is the same size, which is usually the same size as the disk blocks to and from which pages are swapped.
 o External fragmentation is avoided by using the paging technique: any free frame can be allocated to a process that needs it.

Disadvantages of Paging
 o Efficiency of access: even small page tables are generally too large to load into fast memory in the relocation box. Instead, page tables are kept in main memory, and the relocation box holds only the page table's base address. It thus takes one overhead memory reference for every real memory reference.
 o Internal fragmentation: the page size does not match up with the information size. The larger the page, the worse this is; but there is no external fragmentation.

13.4 Segmentation
Segmentation supports the user view of memory: the logical address space becomes a collection of (typically disjoint) segments.
 o A segment is a variable-length block of data that resides in secondary memory.
 o Segments have a name (or a number) and a length.
 o Addresses specify a segment and an offset within that segment.
 o To access memory, the user program specifies segment + offset.
   - Logical address = segment name (number) + offset

Segmentation permits the physical address space of a process to be non-contiguous.
Typical segments include:
 o global variables
 o the procedure call stack
 o code for each function
 o local variables for each function
 o large data structures

[Figure: Segmentation hardware]

In the segmentation hardware above, each entry in the segment table has a segment base and a segment limit.
 o The segment base contains the starting physical address where the segment resides in memory, and the segment limit specifies the length of the segment.
 o A logical address consists of two parts: a segment number and an offset into that segment.
   - The segment number is used as an index into the segment table.
   - The offset of the logical address must be between 0 and the segment limit. If it is not, we trap to the operating system (logical addressing attempt beyond the end of the segment).
 o When an offset is legal, it is added to the segment base to produce the address in physical memory of the desired byte.
 o The segment table is thus essentially an array of base-limit register pairs.

Example to illustrate Segmentation: We have five segments numbered from 0 through 4. The segments are stored in physical memory as shown in the diagram below. The segment table has a separate entry for each segment, giving the beginning address of the segment in physical memory (the base) and the length of that segment (the limit). For example:
 o Segment 2 is 400 bytes long and begins at location 4300. Thus, a reference to byte 53 of segment 2 is mapped onto location 4300 + 53 = 4353.
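The base-limit lookup just described can be sketched in Python. Segments 2 and 3 below match the example in the text (bases 4300 and 3200); the remaining table entries, and the limit of segment 3, are illustrative values I have assumed for the sketch:

```python
# (base, limit) pairs indexed by segment number; segments 2 and 3 follow the
# worked example in the text, the other entries are assumed for illustration
SEGMENT_TABLE = [(1400, 1000), (6300, 400), (4300, 400), (3200, 1100), (4700, 1000)]

def translate(segment, offset):
    """Map (segment, offset) to a physical address, trapping on a bad offset."""
    base, limit = SEGMENT_TABLE[segment]
    if not 0 <= offset < limit:
        # corresponds to the hardware trap for an out-of-range offset
        raise MemoryError(f"trap: offset {offset} outside segment {segment}")
    return base + offset

print(translate(2, 53))   # 4300 + 53 = 4353, as in the example
```

A reference such as byte 1222 of the 1,000-byte segment 0 fails the limit check and raises the trap instead of returning an address.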
 o A reference to segment 3, byte 852, is mapped to 3200 (the base of segment 3) + 852 = 4052.
 o A reference to byte 1222 of segment 0 would result in a trap to the operating system, as this segment is only 1,000 bytes long.

[Figure: Example of segmentation]

Advantages
 o Segments can be shared and accessed independently.
 o A segment has a defined length, so an access outside the segment can always be detected; it raises a distinct kind of error (a segmentation fault).

Disadvantages
 o It is not easy to place a segment into memory, since segments have different sizes.
 o Memory fragmentation.
 o Address translation requires extra work (one comparison, one addition).
 o Moving a segment has a big overhead.

Exercises
1. Consider the following segment table:

   Segment   Base   Length
   0         219    600
   1         2300   14
   2         90     100
   3         1327   580
   4         1952   96

   What are the physical addresses for the following logical addresses?
   a. 0,430   b. 1,10   c. 2,500   d. 3,400   e. 4,112

2. Assuming a 1-KB page size, what are the page numbers and offsets for the following address references (provided as decimal numbers)?
   a. 3085   b. 42095   c. 215201   d. 650000   e. 2000001

3. Although Android does not support swapping on its boot disk, it is possible to set up a swap space using a separate SD non-volatile memory card. Why would Android disallow swapping on its boot disk yet allow it on a secondary disk?

13.5 Virtual Memory
Despite the fact that most modern computers have large amounts of main memory, it is still possible to run out of memory.
This can happen when many processes are running at the same time, or when a process requires more memory than is currently available (unused). One solution to this problem is virtual memory.
 o Virtual memory is a technique that allows the execution of processes that are not completely in memory. This means that the operating system's memory manager will use a portion of another storage device to act as if it were extra RAM.
 o In most cases, this virtual memory space will be part of a permanent storage device such as a hard disk.
   - However, some newer operating systems (such as Windows Vista and Windows 7) also allow a removable media device like a USB flash drive to be used as virtual memory (called Windows ReadyBoost).
 o The memory manager manages this resource so that it gives the illusion that the computer has more RAM than is actually installed.

Advantages
 o One major advantage of this scheme is that programs can be larger than physical memory.
 o Further, virtual memory abstracts main memory into an extremely large, uniform array of storage, separating logical memory as viewed by the user from physical memory.
 o This technique frees programmers from the concerns of memory-storage limitations.

Disadvantages
 o Virtual memory is not easy to implement.
 o It may substantially decrease performance if it is used carelessly.

> Implementation of Virtual Memory
One common technique for implementing virtual memory is demand paging.
 o With demand-paged virtual memory, pages are loaded only when they are demanded during program execution.
 o Pages that are never accessed are thus never loaded into physical memory.

A demand-paging system is similar to a paging system with swapping (see diagram below), where processes reside in secondary memory (usually a disk). When a process is to be executed, it is swapped into memory. Rather than swapping the entire process into memory, though, a pager (popularly called a lazy swapper) is used.
 o A pager is concerned with swapping in only the individual pages of a process that will be needed.
 o It never swaps a page into memory unless that page will be needed.

[Figure: Transfer of a paged memory to contiguous disk space]

When a process is to be swapped in, the pager guesses which pages will be used before the process is swapped out again. Instead of swapping in a whole process, the pager brings only those pages into memory.
 o Thus, it avoids reading into memory pages that will not be used anyway, decreasing the swap time and the amount of physical memory needed.

With this scheme, some form of hardware support is needed to distinguish between the pages that are in memory and the pages that are on the disk. The valid-invalid bit scheme is used for this purpose:
 o When this bit is set to "valid," the associated page is both legal and in memory.
 o If the bit is set to "invalid," the page either is not valid (that is, not in the logical address space of the process) or is valid but is currently on the disk.

The page-table entry for a page that is brought into memory is set as usual, but the page-table entry for a page that is not currently in memory is either simply marked invalid or contains the address of the page on disk. This situation is depicted in the diagram below.

[Figure: Page table when some pages are not in main memory]

But what happens if the process tries to access a page that was not brought into memory?
 o Access to a page marked invalid causes a page fault. The paging hardware, in translating the address through the page table, will notice that the invalid bit is set, causing a trap to the operating system.
 o This trap is the result of the operating system's failure to bring the desired page into memory.

The procedure for handling this page fault is straightforward (see diagram below):
 1. We check an internal table (usually kept with the process control block) for this process to determine whether the reference was a valid or an invalid memory access.
 2. If the reference was invalid, we terminate the process. If it was valid but we have not yet brought in that page, we now page it in.
 3. We find a free frame (by taking one from the free-frame list, for example).
 4. We schedule a disk operation to read the desired page into the newly allocated frame.
 5. When the disk read is complete, we modify the internal table kept with the process and the page table to indicate that the page is now in memory.
 6. We restart the instruction that was interrupted by the trap. The process can now access the page as though it had always been in memory.

[Figure: Steps in handling a page fault]

> Page replacement
Suppose that while a user process is executing, a page fault occurs. The operating system determines where the desired page is residing on the disk but then finds that there are no free frames on the free-frame list; all memory is in use.

The operating system has several options at this point:
 o It could terminate the user process.
 o It could instead swap out a process, freeing all its frames and reducing the level of multiprogramming.
 o It could find a frame that is not currently being used and free it.
   - This is the idea of page replacement.

Page replacement takes the following approach:
 o If no frame is free, we find one that is not currently being used and free it.
 o We can free a frame by writing its contents to swap space and changing the page table (and all other tables) to indicate that the page is no longer in memory.
 o We can then use the freed frame to hold the page for which the process faulted.

With page replacement, the page-fault handling steps are modified as follows:
 1. Find the location of the desired page on the disk.
 2. Find a free frame:
    a. If there is a free frame, use it.
    b. If there is no free frame, use a page-replacement algorithm to select a victim frame.
    c. Write the victim frame to the disk; change the page and frame tables accordingly.
 3. Read the desired page into the newly freed frame; change the page and frame tables.
 4. Continue the user process from where the page fault occurred.

Notice that, if no frames are free, two page transfers (one out and one in) are required. This situation effectively doubles the page-fault service time and increases the effective access time accordingly.

[Figure: Page replacement]

> Page replacement Algorithms
There are many different page-replacement algorithms; every operating system probably has its own replacement scheme.
 o How do we select a particular replacement algorithm?
 o In general, we want the one with the lowest page-fault rate.

We evaluate an algorithm by running it on a particular string of memory references and computing the number of page faults.
 o The string of memory references is called a reference string.

To determine the number of page faults for a particular reference string and page-replacement algorithm, we also need to know the number of page frames available.
 o As the number of frames available increases, the number of page faults decreases.
 o Adding physical memory increases the number of frames.

We next illustrate several page-replacement algorithms. In doing so, we use the reference string

   7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1

for a memory with three frames.
> FIFO Page Replacement
The simplest page-replacement algorithm is a first-in, first-out (FIFO) algorithm. A FIFO replacement algorithm associates with each page the time when that page was brought into memory.
 o When a page must be replaced, the oldest page is chosen.

Notice that it is not strictly necessary to record the time when a page is brought in.
 o We can create a FIFO queue to hold all pages in memory.
 o We replace the page at the head of the queue. When a page is brought into memory, we insert it at the tail of the queue.

Illustration: For our example reference string, our three frames are initially empty.
 o The first three references (7, 0, 1) cause page faults and are brought into these empty frames.
 o The next reference (2) replaces page 7, because page 7 was brought in first.
 o Since 0 is the next reference and 0 is already in memory, we have no fault for this reference.
 o The first reference to 3 results in replacement of page 0, since it is now first in line. Because of this replacement, the next reference, to 0, will fault.
 o Page 1 is then replaced by page 0.

This process continues as shown in the diagram below. Every time a fault occurs, we show which pages are in our three frames. There are fifteen faults altogether.

[Figure: FIFO page-replacement algorithm]

> Least Recently Used (LRU) Algorithm
LRU replacement associates with each page the time of that page's last use. When a page must be replaced, LRU chooses the page that has not been used for the longest period of time. The result of applying LRU replacement to our example reference string is shown in the diagram below. The LRU algorithm produces twelve faults.
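Both fault counts can be verified with a short simulation. The sketch below (names are my own) uses an ordered dictionary as the FIFO queue; for LRU, re-ordering a page on every hit turns the same structure into a recency list:

```python
from collections import OrderedDict

def count_faults(refs, nframes, policy):
    """Count page faults for 'fifo' or 'lru' over a reference string."""
    frames = OrderedDict()       # page -> None; oldest / least recent first
    faults = 0
    for page in refs:
        if page in frames:
            if policy == "lru":
                frames.move_to_end(page)     # a hit refreshes recency
            continue                         # FIFO ignores hits entirely
        faults += 1
        if len(frames) == nframes:
            frames.popitem(last=False)       # evict the head of the queue
        frames[page] = None                  # newest page joins the tail
    return faults

refs = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1]
print(count_faults(refs, 3, "fifo"))   # 15, matching the FIFO diagram
print(count_faults(refs, 3, "lru"))    # 12, matching the LRU count
```

The only difference between the two policies in this sketch is the `move_to_end` call on a hit, which is exactly the conceptual difference between "oldest arrival" and "least recently used."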
[Figure: LRU page-replacement algorithm]

Other replacement algorithms include:
 i. Second Chance
 ii. Enhanced Second Chance
 iii. Least Frequently Used (LFU)
 iv. Most Frequently Used (MFU)
 v. Stack algorithms

These algorithms are left as research topics for students.

> Hit Ratios: Determining Which Replacement Policy to Use
Using each of the replacement policies to swap pages between main memory and the page file will result in different hit ratios. A hit ratio is the number of times that a page is actually found in main memory, as opposed to the number of page faults generated (requiring the memory manager to retrieve the page from virtual memory).
 o To calculate the hit ratio, we divide the number of non-page-faults by the total number of page requests (the number of times that data has been sent between the CPU and main memory).
 o Page faults occur when a page is requested by the CPU that is not currently in the page table (in other words, not currently in RAM).

Calculating Hit Ratios
To understand how to calculate hit ratios, we will examine an example that uses a RAM space of three (3) frames and the following processing sequence:

   1, 2, 1, 3, 1, 4, 1, 5, 2, 3, 2, 4, 1, 5

In these examples, Y = a hit (the page is found in RAM), and N = a page fault (the page must be retrieved from virtual memory).

FIFO Algorithm

   Reference | 1  2  1  3  1  4  1  5  2  3  2  4  1  5
   Hit?      | N  N  Y  N  Y  N  N  N  N  N  Y  N  N  N

With three frames, FIFO produces 3 hits and 11 faults, for a hit ratio of 3/14 (about 21%).

LRU Algorithm

   Reference | 1  2  1  3  1  4  1  5  2  3  2  4  1  5
   Hit?      | N  N  Y  N  Y  N  Y  N  N  N  Y  N  N  N

With three frames, LRU produces 4 hits and 10 faults, for a hit ratio of 4/14 (about 29%).

> Thrashing
Thrashing is a condition where a process is busy swapping pages in and out due to insufficient frames in physical memory.
 o A process is thrashing if it is spending more time paging than executing user instructions.
 o A process is swapped out just before it is needed.

Thrashing results in severe performance problems:
 o Low CPU utilization.
 o High disk utilization.
 o Low utilization of other I/O devices.

> Causes of Thrashing
Thrashing is typically caused by too many processes running at the same time. All the processes compete for the limited amount of physical memory installed on the computer.
 o If the computer is thrashing, applications may stop responding (or run very slowly).

> How to Prevent Thrashing
 o The current best practice in implementing a computer facility is to include enough physical memory, whenever possible, to avoid thrashing and swapping.
   - From smartphones through mainframes, providing enough memory to keep all working sets in memory concurrently, except under extreme conditions, gives the best user experience.
 o We can limit the effects of thrashing by using a local replacement algorithm (or priority replacement algorithm).
   - With local replacement, if one process starts thrashing, it cannot steal frames from another process and cause the latter to thrash as well. However, the problem is not entirely solved.
 o To prevent thrashing, we must provide a process with as many frames as it needs.
   - One technique for knowing the number of frames a process needs is the working-set model. The working set is the set of pages that the process is actually using: if a page is in active use, it will be in the working set; if it is no longer being used, it will drop from the working set.

Exercises
1. Under what circumstances do page faults occur? Describe the action taken by the operating system when a page fault occurs.
2. Consider the following page reference string:

   1, 2, 3, 4, 2, 1, 5, 6, 2, 1, 2, 3, 7, 6, 3, 2, 1, 2, 3, 6

   How many page faults would occur for the following replacement algorithms, assuming one, two, three, four, five, six, and seven frames?
   Remember that all frames are initially empty, so your first unique pages will all cost one fault each.
   i. LRU replacement
   ii. FIFO replacement

2. CPU SCHEDULING
A process is executed until it must wait, typically for the completion of some I/O request.
 o In a simple computer system, the CPU would then just sit idle.
 o All this waiting time is wasted; no useful work is accomplished.

CPU scheduling is the basis of multi-programmed operating systems. By switching the CPU among processes, the operating system can make the computer more productive.
 o The objective of multiprogramming is to have some process running at all times, to maximize CPU utilization.
 o Several processes are kept in memory at one time. Every time one process has to wait (perhaps for an I/O operation), another process can take over use of the CPU.

The CPU is, of course, one of the primary computer resources. Thus, its scheduling is central to operating-system design.
 o CPU scheduling deals with the problem of deciding which of the processes in the ready queue is to be allocated the CPU.

2.1 Basic Concepts
> CPU-I/O Burst Cycle
The success of CPU scheduling depends on the CPU-I/O burst cycle: process execution consists of a cycle of CPU execution and I/O wait.
 o Processes alternate between these two states. Process execution begins with a CPU burst, which is followed by an I/O burst, then another CPU burst, another I/O burst, and so on (see diagram below).
 o CPU burst - the process is bound to the CPU.
 o I/O burst - the process is bound to I/O.
 o CPU burst time - the amount of time the process uses the processor before it is no longer ready.
[Figure: Alternating sequence of CPU and I/O bursts]

> CPU Scheduler
Whenever the CPU becomes idle, the operating system must select one of the processes in the ready queue to be executed.
 o The selection process is carried out by the short-term scheduler, or CPU scheduler.
 o The scheduler selects a process from the processes in memory that are ready to execute and allocates the CPU to that process.
 o Conceptually, all the processes in the ready queue are lined up waiting for a chance to run on the CPU.

> Preemptive and Non-preemptive Scheduling
CPU-scheduling decisions may take place when a process:
 1. Switches from running to waiting state (I/O request).
 2. Switches from running to ready state (when an interrupt occurs).
= Dispatcher modulé gives control of the CPU to ths process selected by the short-term scheduler; this function involves: © Switching context ‘switching to user mode (© jumping to the proper location in the user program to restart that ‘program + Dispatch latency- time it takes for the dispatcher to stop one process and start another running. crn ee \Compiledby Orhlonipayo 8. [Detembes 2019] Updated January 2023 Page 50 {303125 Operating Syremtt 2.2 Scheduling Criteria & Optimization Different CPU-scheduling algorithms have different properties and characteristics and may favor one class of processes aver another, The characteristics used. for comparison can make a substantial difference in the determination of the bets algorithm, ‘The criteria include:- © CPU utilization — keep the CPU as busy as possible (40 % for a lightly loaded system to 90 % for heavily used system) © Maximize CPU utilization ‘Throughput — # of processes that complete their execution per time unit (1 process / hour for long processes, 10 process / second for short transactions) © Maximize throughput ‘© Tumaround time = amount of time it takes to exocute a particular process (from time of submission to time of completion) © Minimize turnaround time ‘© Waiting time~ amount of time a process has been waiting in the ready queue © Minimize waiting time 6 Response time ~ amount of time it takes. from when a request was submitted until the first response is produced, not output (for time-sharing environment) ©. Minimize response time Coupled yy orioegaiye Befoeenber 2019) Upaelfascar 2023 Page St CSC31% Operating System IT 2.3 Scheduling Algorithms ‘There are many different CPU-seheduling algorithms. These & Fitst-Come, First Served Scheduling h. Shoriest-Job-First Scheduling i, Priority Scheduling J. Round-Robin Scheduling k. 
Multi-level Queue Scheduling 1 Multilevel feedback Queue Scheduling jude: » First-Come, First-Served Scheduling The sisnplest CPU-scheduing algorithm is the first-come, first-served (PCRS) scheduling algorithm, © With this scheme, the process that requests the first. CPU first is allocated the CPU ‘© The implementation of the FCFS policy is easily managed with a FIFO queue, © The average waiting time under the FCFS policy is often quite long. Consider the following set of processes that arrive at time 0, with the length of the CPU burst given in milliseconds: Process Burst Time Pl 24 P2 3 P3 3 If the processes arrive in the order P,, P2, Ps, and are served in FORS order, we get ‘the result shown in the following Gantt chart Complled by Orblonkpabo B.C [Desenber2039) Updated january 2023 Page 52 €8C312: Operating System tt the waiting time is 0 mnitts, econ for is nd 27 milliseconds for process p Poa on Thus, the average waiting times (0+ 24+27)3 = 17 mil = 17 milliseconds, ifthe processes arrive i ; in the order P;, P,, P,, ip the following Gantt chart: ER ae L 8 however, the results will be as-shown Woiting time for P)= 6, P,=0,P,=3 Average waiting time is mow (6+ 0+3)/3= 3 milliseconds. ‘This reduction is substantial, > Shortest-Job-Fiest Scheduling Associate with cach process the length of its next CPU burst, Use these lengths to schedule the process with the shortest time, ‘Two schemes: © mon-preemptive — once CPU given to the provess it cannot be pre-empted until completes its CPU burst. © preemptive — if a mew process arrives with CPU burst length less than remaining time of current executing process, preempt. This scheme is known as the Shortest-Remaining-Time-First (SRTF), SIF is optimal - gives minimum average waiting time fora given set of processes ‘As an example of SUF scheduling, consider the following st of processes, with the Jength of the CPU burst given in milliseconds: Process Burst Time ry f P, 8 Py ' ipsseepnuary 203 Pages (ample by Orrvonepa 8. 
Using non-preemptive SJF scheduling, we would schedule these processes according to the following Gantt chart:

   | P4 (0-3) | P1 (3-9) | P3 (9-16) | P2 (16-24) |

Waiting times: P1 = 3, P2 = 16, P3 = 9, P4 = 0.
Average waiting time = (3 + 16 + 9 + 0)/4 = 7 ms.
By comparison, if we were using the FCFS scheduling scheme, the average waiting time would be 10.25 ms.

Consider another example using both preemptive and non-preemptive SJF:

   Process   Arrival Time   Burst Time
   P1        0              8
   P2        1              4
   P3        2              9
   P4        3              5

Preemptive SJF (SRTF):

   | P1 (0-1) | P2 (1-5) | P4 (5-10) | P1 (10-17) | P3 (17-26) |

Waiting times: P1 = 10 - 1 = 9, P2 = 1 - 1 = 0, P3 = 17 - 2 = 15, P4 = 5 - 3 = 2.
AWT = (9 + 0 + 15 + 2)/4 = 6.5 ms

Non-preemptive SJF:

   | P1 (0-8) | P2 (8-12) | P4 (12-17) | P3 (17-26) |

Waiting times: P1 = 0, P2 = 8 - 1 = 7, P4 = 12 - 3 = 9, P3 = 17 - 2 = 15.
AWT = (0 + 7 + 9 + 15)/4 = 7.75 ms

> Priority Scheduling
The SJF algorithm is a special case of the general priority-scheduling algorithm. A priority is associated with each process, and the CPU is allocated to the process with the highest priority.
 o Equal-priority processes are scheduled in FCFS order.
 o An SJF algorithm is simply a priority algorithm where the priority (p) is the inverse of the (predicted) next CPU burst: the larger the CPU burst, the lower the priority, and vice versa.

As an example, consider the following set of processes, assumed to have arrived at time 0 in the order P1, P2, ..., P5, with the length of the CPU burst given in milliseconds (a smaller priority number means a higher priority):

   Process   Burst Time   Priority
   P1        10           3
   P2        1            1
   P3        2            4
   P4        1            5
   P5        5            2

Using priority scheduling, we would schedule these processes according to the following Gantt chart:

   | P2 (0-1) | P5 (1-6) | P1 (6-16) | P3 (16-18) | P4 (18-19) |

AWT = (6 + 0 + 16 + 18 + 1)/5 = 8.2 ms

Priority scheduling can be either preemptive or non-preemptive. When a process arrives at the ready queue, its priority is compared with the priority of the currently running process.
 o A preemptive priority scheduling algorithm will preempt the CPU if the priority of the newly arrived process is higher than the priority of the currently running process.
 o A non-preemptive priority scheduling algorithm will simply put the new process at the head of the ready queue.
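For non-preemptive schedules in which every process arrives at time 0, the waiting-time arithmetic in the FCFS, SJF, and priority examples reduces to "each process waits until its start time." A small Python sketch (names are my own) that reproduces those averages:

```python
def avg_waiting_time(bursts, order):
    """Average waiting time for a non-preemptive schedule.

    bursts: CPU-burst length of process i; order: the indices in the order
    the scheduler runs them. All processes are assumed to arrive at time 0,
    so each one waits exactly until the clock reaches its start time.
    """
    clock, total_wait = 0, 0
    for i in order:
        total_wait += clock          # process i starts (stops waiting) now
        clock += bursts[i]
    return total_wait / len(bursts)

fcfs_bursts = [24, 3, 3]
print(avg_waiting_time(fcfs_bursts, [0, 1, 2]))          # FCFS: 17.0 ms

sjf_order = sorted(range(3), key=lambda i: fcfs_bursts[i])
print(avg_waiting_time(fcfs_bursts, sjf_order))          # SJF:  3.0 ms

prio_bursts, prios = [10, 1, 2, 1, 5], [3, 1, 4, 5, 2]
prio_order = sorted(range(5), key=lambda i: prios[i])    # smaller number = higher priority
print(avg_waiting_time(prio_bursts, prio_order))         # priority: 8.2 ms
```

SJF and priority scheduling differ from FCFS here only in the sort key used to build `order`, which mirrors the observation above that SJF is just priority scheduling with the predicted burst length as the priority.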
A major problem with priority scheduling algorithms is indefinite blocking, or starvation.
 o A process that is ready to run but waiting for the CPU can be considered blocked.
 o A priority scheduling algorithm can leave some low-priority processes waiting indefinitely.

> Round-Robin Scheduling
The round-robin (RR) scheduling algorithm is designed especially for time-sharing systems. It is similar to FCFS scheduling, but preemption is added to switch between processes. A small unit of time, called a time quantum (or time slice), is defined.
 o The ready queue is treated as a circular queue.
 o The CPU scheduler goes around the ready queue, allocating the CPU to each process for a time interval of up to 1 time quantum.

The average waiting time under the RR policy is often long. Consider the following set of processes that arrive at time 0, with the length of the CPU burst given in milliseconds:

   Process   Burst Time
   P1        24
   P2        3
   P3        3
© Foreground (interactive) © Background (batch) o The foreground (interactive) queue may have absolute priority over the background (batch) queue ‘These two types of processes have different response-time requirements and so may have different scheduling needs, Each queue has ts own scheduling algorithm —————— ‘Compiled by Orbionkpatyo B.C (December 2019) Updated january 2023 Page 57 seni Opes 186° © foreground queue - RR © background queue - FCFS genertlly based 00 soy ‘The processes are permanently assigned '@ °"* gue property of the process, such as © memory size, ‘©. process priority, oe ‘©. process type Scheduling must be done between the queues © Fixed priory scheduling, (ie, serve lt background). = Danger of starvation. of CPU time which it cay © Time slice ~ each queue gets a certain arout in BR, 20% wove tw foreground iO RR, 2% from foreground then from schedule amongst its processes; i.¢- background in FCFS Example ofa multilevel queue scheduling algorithm with five aue¥s 1. System processes 2. Interactive processes 3. Interactive editing processes 4. Batch processes 5, Student processes Highest priority po t Interactive Processes: i——+ — Interactive editing Processes + — Batch Processes Le ‘Multilevel queue scheduling a ‘Compiled by Orhionipaiyo 8.C.(December 2038) Updated January 2023 PageS8 Lowest priority se C12; Operating Sy sem IT ach queue has rn) . B Absolute priority over owersprioity queve No process in the bs i : batch queue, for example, conld run unless the queues for system cess icractivg es proc = USUN€ processes, and interactive editing processes were all empty: © fan interactive editing * process entered the ready queue while a bateh process fas runni ‘annie, the batch process would be preempted. 
> Multilevel Feedback Queue Scheduling
Multilevel feedback queue scheduling allows a process to move between queues. The idea is to separate processes with different CPU-burst characteristics.
o If a process uses too much CPU time, it will be moved to a lower-priority queue. This scheme leaves I/O-bound and interactive processes in the higher-priority queues.
o Similarly, a process that waits too long in a lower-priority queue may be moved to a higher-priority queue. This form of aging prevents starvation.
For example, consider a multilevel feedback queue scheduler with three queues, numbered from 0 to 2:

    Q0 - RR with time quantum 8 milliseconds
    Q1 - RR with time quantum 16 milliseconds
    Q2 - FCFS

[Figure: Multilevel feedback queues.]

Scheduling:
o A new job enters queue Q0. When it gains the CPU, the job receives 8 milliseconds. If it exhausts the 8 milliseconds and does not complete, the job is moved to queue Q1.
o At Q1 the job receives 16 additional milliseconds. If it still does not complete, it is preempted and moved to queue Q2.
The scheduler first executes all processes in queue 0. Only when queue 0 is empty will it execute processes in queue 1.
o Similarly, processes in queue 2 will be executed only if queues 0 and 1 are empty.
o A process that arrives for queue 1 will preempt a process in queue 2.
o A process that arrives for queue 0 will, in turn, preempt a process in queue 1.
A multilevel feedback queue scheduler is defined by the following parameters:
o The number of queues
o The scheduling algorithm for each queue
o The method used to determine when to upgrade a process to a higher-priority queue
o The method used to determine when to demote a process to a lower-priority queue
o The method used
to determine which queue a process will enter when that process needs service.

Exercises
1. Explain the difference between preemptive and non-preemptive scheduling.
2. Suppose that the following processes arrive for execution at the times indicated. Each process will run for the amount of time listed. In answering the questions, use non-preemptive scheduling, and base all decisions on the information you have at the time the decision must be made.

    Process    Arrival Time    Burst Time
    P1         0.0             8
    P2         0.4             4
    P3         1.0             1

a. What is the average turnaround time for these processes with the FCFS scheduling algorithm?
b. What is the average turnaround time for these processes with the SJF scheduling algorithm?
3. Why is it important for the scheduler to distinguish I/O-bound programs from CPU-bound programs?
4. Consider the following set of processes, with the length of the CPU burst given in milliseconds:

    Process    Burst Time    Priority
    P1         2             2
    P2         1             1
    P3         8             4
    P4         4             2
    P5         5             3

The processes are assumed to have arrived in the order P1, P2, P3, P4, P5, all at time 0.
a. Draw four Gantt charts that illustrate the execution of these processes using the following scheduling algorithms: FCFS, SJF, nonpreemptive priority (a larger priority number implies a higher priority), and RR (quantum = 2).
b. What is the turnaround time of each process for each of the scheduling algorithms in part a?
c. What is the waiting time of each process for each of these scheduling algorithms?
d. Which of the algorithms results in the minimum average waiting time (over all processes)?

3. Mutual Exclusion & Synchronization
3.1 Race Condition, Critical Section, and Mutual Exclusion
Processes frequently need to communicate with other processes.
A cooperating process is one that can affect or be affected by other processes executing in the system. Cooperating processes can either directly share a logical address space (that is, both code and data) or be allowed to share data only through files or messages. However, sometimes a process has to access shared memory or files, or carry out other critical activities.

The wait() semaphore operation can be defined as

    wait(semaphore *S) {
        S->value--;
        if (S->value < 0) {
            add this process to S->list;
            block();
        }
    }

and the signal() semaphore operation can be defined as

    signal(semaphore *S) {
        S->value++;
        if (S->value <= 0) {
            remove a process P from S->list;
            wakeup(P);
        }
    }

The block() operation suspends the process that invokes it. The wakeup(P) operation resumes the execution of a blocked process P. These two operations are provided by the operating system as basic system calls.

3.2.3.2 Problems of Implementing Semaphores with Blocking
1. Deadlocks
The implementation of a semaphore with a waiting queue may result in a situation where two or more processes are waiting indefinitely for an event that can be caused only by one of the waiting processes. The event in question is the execution of a signal() operation.
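The wait()/signal() pseudocode above can be mirrored in Python, with a condition variable standing in for the kernel's block() and wakeup() calls. This is an illustrative sketch (the class and names are my own, not from the notes), not how an operating system actually implements semaphores:

```python
import threading
from collections import deque

class Semaphore:
    """Blocking semaphore mirroring the wait()/signal() pseudocode.
    A condition variable plays the role of block()/wakeup()."""
    def __init__(self, value=1):
        self.value = value
        self.waiters = deque()            # the S->list of blocked "processes"
        self.lock = threading.Lock()
        self.cond = threading.Condition(self.lock)

    def wait(self):
        with self.lock:
            self.value -= 1
            if self.value < 0:
                me = threading.current_thread()
                self.waiters.append(me)   # add this process to S->list
                while me in self.waiters: # block() until a signal removes us
                    self.cond.wait()

    def signal(self):
        with self.lock:
            self.value += 1
            if self.value <= 0:
                self.waiters.popleft()    # remove a process P from S->list
                self.cond.notify_all()    # wakeup(P)

# Usage: protect a shared counter with a binary semaphore (mutex).
s = Semaphore(1)
counter = 0

def worker():
    global counter
    for _ in range(10000):
        s.wait()
        counter += 1                      # critical section
        s.signal()

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)                            # 40000: no updates were lost
```

Note that because signal() removes the waiter at the head of S->list, wakeups are FIFO, which avoids the starvation that a last-in-first-out waiting list could cause.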
