MEMORY MANAGEMENT

Address Binding: Usually, a program resides on a disk as a binary executable file.

To be executed, the program must be brought into memory and placed within a process. Depending on the memory management in use, the process may be moved between disk and memory during its execution. The processes on the disk that are waiting to be brought into memory for execution form the input queue. The normal procedure is to select one of the processes in the input queue and to load that process into memory. As the process is executed, it accesses instructions and data from memory. Eventually, the process terminates, and its memory space is declared available.

Most systems allow a user process to reside in any part of the physical memory. Addresses may be represented in different ways during these steps. Addresses in the source program are generally symbolic (such as count). A compiler will typically bind these symbolic addresses to relocatable addresses (such as "14 bytes from the beginning of this module"). The linkage editor or loader will in turn bind the relocatable addresses to absolute addresses (such as 74014). Each binding is a mapping from one address space to another. The binding of instructions and data to memory addresses can be done at any of three stages:

• Compile time: If you know at compile time where the process will reside in memory, then absolute code can be generated. For example, if you know that a user process will reside starting at location R, then the generated compiler code will start at that location and extend up from there. If, at some later time, the starting location changes, then it will be necessary to recompile this code. The MS-DOS .COM-format programs are bound at compile time.

• Load time: If it is not known at compile time where the process will reside in memory, then the compiler must generate relocatable code. In this case, final binding is delayed until load time. If the starting address changes, we need only reload the user code to incorporate this changed value.

• Execution time: If the process can be moved during its execution from one memory segment to another, then binding must be delayed until run time. Special hardware must be available for this scheme to work. Most general-purpose operating systems use this method.
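The compile-time and load-time stages can be sketched as follows. The module symbols and base addresses are illustrative, with the offset "14 bytes from the beginning of this module" and the absolute address 74014 taken from the text's example:

```python
# Compile time: symbolic addresses are bound to relocatable offsets
# from the beginning of the module (values here are illustrative).
relocatable = {"count": 14, "total": 18}

def load(module, base):
    """Load time: bind relocatable offsets to absolute addresses by
    adding the module's starting location in physical memory."""
    return {symbol: base + offset for symbol, offset in module.items()}

absolute = load(relocatable, 74000)   # assumed starting location
print(absolute["count"])              # 74014, the absolute address from the text

# With load-time binding, a changed starting location only requires
# reloading at the new base; absolute (compile-time) code would have
# to be regenerated instead.
print(load(relocatable, 90000)["count"])   # 90014
```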

Logical versus Physical Address Space: An address generated by the CPU is commonly referred to as a logical address, whereas an address seen by the memory unit (that is, the one loaded into the memory-address register of the memory) is commonly referred to as a physical address. The compile-time and load-time address-binding methods generate identical logical and physical addresses. However, the execution-time address-binding scheme results in differing logical and physical addresses. In this case, we usually refer to the logical address as a virtual address; we use logical address and virtual address interchangeably in this text. The set of all logical addresses generated by a program is a logical address space; the set of all physical addresses corresponding to these logical addresses is a physical address space. Thus, in the execution-time address-binding scheme, the logical and physical address spaces differ. The run-time mapping from virtual to physical addresses is done by a hardware device called the memory management unit (MMU).
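A minimal sketch of the run-time mapping the MMU performs, assuming the simple scheme in which a single relocation value is added to every logical address (the base value 14000 follows the text's own example):

```python
# Value of the relocation (base) register: the smallest physical
# address available to this process. 14000 is the text's example.
RELOCATION_REGISTER = 14000

def mmu_translate(logical_address):
    """Map a logical (virtual) address to a physical address at run
    time by adding the relocation register, as the MMU hardware does."""
    return RELOCATION_REGISTER + logical_address

print(mmu_translate(0))    # 14000
print(mmu_translate(346))  # 14346
```

The user program works only with logical addresses such as 0 and 346; the corresponding physical addresses are produced by the hardware at the moment the address is sent to memory.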

The base register is now called a relocation register. The value in the relocation register is added to every address generated by a user process at the time the address is sent to memory (see Figure 8.4). For example, if the base is at 14000, then an attempt by the user to address location 0 is dynamically relocated to location 14000; an access to location 346 is mapped to location 14346. The user program never sees the real physical addresses. The program can create a pointer to location 346, store it in memory, manipulate it, and compare it with other addresses, all as the number 346. Only when it is used as a memory address (in an indirect load or store, perhaps) is it relocated relative to the relocation register. The user program deals with logical addresses; the memory-mapping hardware converts logical addresses into physical addresses. The concept of a logical address space that is bound to a separate physical address space is central to proper memory management.

Swapping: A process must be in memory to be executed. A process, however, can be swapped temporarily out of memory to a backing store and then brought back into memory for continued execution. For example, assume a multiprogramming environment with a round-robin CPU-scheduling algorithm. When a quantum expires, the memory manager will start to swap out the process that just finished and to swap another process into the memory space that has been freed. In the meantime, the CPU scheduler will allocate a time slice to some other process in memory. When each process finishes its quantum, it will be swapped with another process. Ideally, the memory manager can swap processes fast enough that some processes will be in memory, ready to execute, when the CPU scheduler wants to reschedule the CPU. In addition, the quantum must be large enough to allow reasonable amounts of computing to be done between swaps.

A variant of this swapping policy is used for priority-based scheduling algorithms. If a higher-priority process arrives and wants service, the memory manager can swap out the lower-priority process and then load and execute the higher-priority process. When the higher-priority process finishes, the lower-priority process can be swapped back in and continued. This variant of swapping is sometimes called roll out, roll in.

Normally, a process that is swapped out will be swapped back into the same memory space it occupied previously. This restriction is dictated by the method of address binding. If binding is done at assembly or load time, then the process cannot be easily moved to a different location. If execution-time binding is being used, however, then a process can be swapped into a different memory space, because the physical addresses are computed during execution time.

Swapping is constrained by other factors as well. If we want to swap a process, we must be sure that it is completely idle. Of particular concern is any pending I/O. A process may be waiting for an I/O operation when we want to swap that process to free up memory. However, if the I/O is asynchronously accessing the user memory for I/O buffers, then the process cannot be swapped. Assume that the I/O operation is queued because the device is busy. If we were to swap out process P1 and swap in process P2, the I/O operation might then attempt to use memory that now belongs to process P2. There are two main solutions to this problem: never swap a process with pending I/O, or execute I/O operations only into operating-system buffers. Transfers between operating-system buffers and process memory then occur only when the process is swapped in.

Contiguous memory allocation: The main memory must accommodate both the operating system and the various user processes. We therefore need to allocate main memory in the most efficient way possible. The memory is usually divided into two partitions: one for the resident operating system and one for the user processes.
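The operating-system-buffer solution to the pending-I/O problem can be sketched as follows. The process and buffer structures here are hypothetical simplifications, not a real kernel interface:

```python
# Sketch: the device transfers only into an operating-system buffer,
# so a completing I/O cannot scribble on memory that now belongs to a
# different process; data reaches the process only when it is resident.

class Process:
    def __init__(self, name):
        self.name = name
        self.in_memory = True
        self.data = None          # the process's own memory (simplified)

def device_completes_io(os_buffer, payload):
    # The device writes into a kernel-owned buffer, regardless of
    # which process currently occupies the old physical addresses.
    os_buffer.append(payload)

def swap_in(process, os_buffer):
    # The transfer into process memory happens only at swap-in time.
    process.in_memory = True
    if os_buffer:
        process.data = os_buffer.pop(0)

p1 = Process("P1")
p1.in_memory = False                     # P1 was swapped out with I/O pending
kernel_buffer = []
device_completes_io(kernel_buffer, "disk block")   # safe: lands in the OS buffer
swap_in(p1, kernel_buffer)
print(p1.data)                           # disk block
```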

We can place the operating system in either low memory or high memory. The major factor affecting this decision is the location of the interrupt vector. Since the interrupt vector is often in low memory, programmers usually place the operating system in low memory as well. In contiguous memory allocation, each process is contained in a single contiguous section of memory.

Memory Mapping and Protection: We can provide memory mapping and protection by using a relocation register together with a limit register. The relocation register contains the value of the smallest physical address; the limit register contains the range of logical addresses (for example, relocation = 100040 and limit = 74600). With relocation and limit registers, each logical address must be less than the limit register; the MMU maps the logical address dynamically by adding the value in the relocation register. This mapped address is sent to memory.

Memory allocation: Initially, all memory is available for user processes and is considered as one large block of available memory, a hole. When a process arrives and needs memory, the system searches the set of holes for a hole that is large enough for this process. If the hole is too large, it is split into two parts: one part is allocated to the arriving process, and the other is returned to the set of holes. When a process terminates, it releases its block of memory, which is then placed back in the set of holes. In general, a set of holes of various sizes is scattered throughout memory, and the set of holes is searched to determine which hole is best to allocate. This procedure is a particular instance of the dynamic storage allocation problem, which is how to satisfy a request of size n from a list of free holes. The first-fit, best-fit, and worst-fit strategies are the ones most commonly used to select a free hole from the set of available holes:

• First fit: Allocate the first hole that is big enough. Searching can start either at the beginning of the set of holes or at the location where the previous first-fit search ended. We can stop searching as soon as we find a free hole that is large enough.

• Best fit: Allocate the smallest hole that is big enough. We must search the entire list, unless the list is ordered by size. This strategy produces the smallest leftover hole.

• Worst fit: Allocate the largest hole. Again, we must search the entire list, unless it is sorted by size. This strategy produces the largest leftover hole, which may be more useful than the smaller leftover hole from a best-fit approach.
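The three hole-selection strategies can be sketched as functions over a list of hole sizes. A real allocator would also track hole addresses and split the chosen hole as described above; the sizes and the request below are illustrative:

```python
def first_fit(holes, n):
    """Index of the first hole large enough for a request of size n."""
    for i, size in enumerate(holes):
        if size >= n:
            return i          # stop as soon as a large-enough hole is found
    return None

def best_fit(holes, n):
    """Smallest hole that is big enough; must scan the entire list."""
    candidates = [i for i, size in enumerate(holes) if size >= n]
    return min(candidates, key=lambda i: holes[i], default=None)

def worst_fit(holes, n):
    """Largest hole; also requires scanning the entire list."""
    candidates = [i for i, size in enumerate(holes) if size >= n]
    return max(candidates, key=lambda i: holes[i], default=None)

holes = [100, 500, 200, 300, 600]     # free-hole sizes (illustrative)
print(first_fit(holes, 212))   # 1: the 500-byte hole is found first
print(best_fit(holes, 212))    # 3: the 300-byte hole leaves the smallest remainder
print(worst_fit(holes, 212))   # 4: the 600-byte hole leaves the largest remainder
```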
With this approach. one-third of memory may be unusable! This property is known as the 50-percent rule. One solution to the problem of external fragmentation is compaction. In the worst case. Statistical analysis of first fit. Worst fit: Allocate the largest hole. The goal is to shuffle the memory contents so as to place all free memory together in one large block. Another solution to external fragmentation problem is to permit the logical address space of a process to be non-contiguous. unless it is sorted by size. reveals that. the free memory space is broken into little pieces. This strategy produces the smallest leftover hole. but first fit is generally faster. The difference between these two numbers is internal fragmentation-memory that is internal to a partition. Again. Simulations have shown that both first fit and best fit are better than worst fit in terms of decreasing time and storage utilization. even with some optimization. . That is. The general approach to avoiding this problem is to break the physical memory into fixed-sized blocks and allocate memory in units based on block size. external fragmentation may be a minor or a major problem. As processes are loaded and removed from memory. thus allowing a process to be allocated physical memory wherever the latter is available. Fragmentation: Both the first-fit and best-fit strategies for memory allocation suffer from external fragmentation. Physical memory is broken into fixed sized blocks called frames. Depending on the total amount of memory storage and the average process size. This strategy produces the largest leftover hole. If all these small pieces of memory were in one big free block instead. for instance. its pages are loaded into available memory frames from the backing store. The page table contains the base address of each page in physical memory. If relocation is static and is done at assembly or load time. Paging can used to achieve this solution. 
compaction is possible only if relocation is dynamic and is done at execution time. storage is fragmented into a large number of small holes. External fragmentation exists when there is enough total memory space to satisfy a request but the available spaces are not contiguous. another 0. Logical memory is broken into blocks of same size called pages. the memory allocated to a process may be slightly larger than the requested memory. When a process is to be executed. Neither first fit nor best fit is clearly better than the other in terms of storage utilization.5 N blocks will be lost to fragmentation. Every address generated by the CPU is divided into two parts: a page number(p) and a page offset(d). Compaction is not always possible. This fragmentation problem can be severe. we might be able to run several more processes. given N allocated blocks.• list is ordered by size. which may be more useful than the smaller leftover hole from a best-fit approach. we must search the entire list. PAGING: Paging is a memory management scheme that permits the physical address space of a process to be non-contiguous. The page number is used as an index into the page table. we could have a block of free (or wasted) memory between every two processes. compaction cannot be done. This base address is combined with the page offset to define the physical memory address that is sent to the memory unit. however.
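The translation just described can be sketched as follows, assuming for illustration a 4096-byte page size and a made-up page table:

```python
PAGE_SIZE = 4096                       # assumed page size (a power of two)

# Hypothetical page table: page number -> frame number.
page_table = {0: 5, 1: 6, 2: 1}

def translate(logical_address):
    """Split a logical address into page number and offset, then
    combine the frame's base address with the unchanged offset."""
    p = logical_address // PAGE_SIZE   # page number: index into the page table
    d = logical_address % PAGE_SIZE    # page offset: passed through unchanged
    frame = page_table[p]
    return frame * PAGE_SIZE + d       # physical address sent to the memory unit

print(translate(4100))   # page 1, offset 4 -> frame 6 -> physical address 24580
```

With a power-of-two page size, the division and remainder above are just a split of the address bits, which is why hardware can do this mapping on every memory reference.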