ONLINE © JAIN Self-Learning Material
Program: MCA  Specialization: All  Semester:  Course Name: Computer Organization & Architecture  Course Code: 21VMCOC105  Unit Name: Introduction to Cache & Virtual Memory
Proprietary content. All rights reserved. Unauthorized use or distribution prohibited. This file is meant for personal use by nandinibj.hp@gmail.com only. Sharing or publishing the contents in part or full is liable for legal action.

UNIT 5
Table of Contents
Overview
Objectives
Learning Outcomes
Pre-Unit Preparatory Material
1.1 Introduction
1.2 Mapping Function
1.3 Replacement Algorithms
1.4 Performance Considerations
1.5 Virtual Memory
1.6 Virtual Memory Organization
1.7 Address Translation
1.8 Conclusion
Glossary
References

UNIT 5: INTRODUCTION TO CACHE & VIRTUAL MEMORY

Introduction, Mapping Functions, Replacement Algorithms, Performance Considerations. Virtual Memory: Introduction, Virtual Memory Organization, Address Translation.

Learning Outcome: Establish insight into the concepts and operation of the cache and virtual memory mechanisms, instructions, and instruction sequencing.

• Overview
This unit studies the computer memory system. No data can be stored or retrieved in a machine without memory. Under a multitasking operating system, processes share both the CPU time and the main storage space. Cache memory is a small, high-speed memory, usually built from static RAM (SRAM), that holds the most frequently accessed main memory blocks. Caches act as high-speed buffers between the processor and the main memory, catching those parts of the main storage contents currently in use.

• Objectives
• Present a computer memory system overview.
• Describe the basic concepts of cache memory and virtual memory.
• Explain the reasons for using multiple levels of cache.
• Distinguish among direct mapping, set-associative mapping, and associative mapping.
• Define and compare virtual memory organizations.

Learning Outcomes
✓ Demonstrate principles relating to the nature of modern processors, memories, and I/Os in computer architecture.
✓ Evaluate the output of commercially available computers.
✓ Build logic for assembly language programming.

• Pre-Unit Preparatory Material
1. M. Morris Mano, "Computer System Architecture."
2. "Fundamentals of Computer Organization and Architecture," John Wiley & Sons.

1.1 Introduction
The idea of cache memory can be traced to Wilkes in 1965. Wilkes distinguished two major forms of memory at the time: the conventional and the slave. A slave memory, in Wilkes's terms, is a second high-speed memory, which is now the same as cache memory (the term cache denotes a secure hiding place for storing things). The idea is to keep the information most frequently used by the CPU in the cache (a small, fast memory near the CPU), the first level of the memory hierarchy. The net result is that at any given time the cache duplicates an active portion of the main memory. Thus, when the processor issues a memory reference, the request is first searched for in the cache. If the requested item resides in the cache at that time, we call it a cache hit. The basic case of eight main memory modules is shown in Figure 5.1; in this case, the block size is taken to be 8 bytes. As an introduction to the basic theory of cache memory, we want to evaluate the effect of temporal and spatial locality on the performance of the memory hierarchy. To simplify this evaluation, we restrict our considerations to the basic case of only two levels of hierarchy, i.e., cache and main storage.
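As a rough illustration of this two-level evaluation, the average memory access time can be estimated from the cache hit ratio, the cache access time, and the main memory access time. The sketch below is illustrative only; the access times and hit ratio chosen are hypothetical figures, not values taken from this unit.

```python
def average_access_time(hit_ratio, cache_time_ns, memory_time_ns):
    """Average memory access time for a two-level hierarchy.

    On a hit we pay only the cache access time; on a miss we pay
    the cache lookup plus the main memory access (the miss penalty).
    """
    miss_ratio = 1.0 - hit_ratio
    return cache_time_ns + miss_ratio * memory_time_ns

# Hypothetical figures: 5 ns cache, 100 ns main memory, 95% hit ratio.
amat = average_access_time(0.95, 5.0, 100.0)
print(f"AMAT = {amat:.1f} ns")  # prints "AMAT = 10.0 ns"
```

The formula makes the point of the whole unit concrete: the higher the hit ratio, the closer the hierarchy's effective speed is to that of the cache alone.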
Figure 5.1. Memory interleaving using eight modules

1.2 Mapping Function
We present the cache-mapping function without loss of generality by considering the interface between two successive stages of the memory hierarchy, a primary level and a secondary level. When attention is focused on the interface between the cache and the main memory, the cache is the primary level and the main memory is the secondary level. The same principles apply to the interface between any two adjacent memory levels. In the discussion below, we focus on the cache and main storage interface.

The processor initiates a request for a memory element by emitting the address of the requested element. The address issued by the processor may correspond to an element that is currently present in the cache (a cache hit). Otherwise, the address may correspond to an element that currently resides in the main memory. In that case, address translation must be carried out in order to compute the location of the requested element. This is one of the tasks of the memory management unit (MMU). Figure 5.2 provides a diagram of the address-mapping operation.

Figure 5.2. Address mapping operation (the MMU translates the system address issued by the processor into a block address at the primary level or the secondary level)

The system address in this figure is the address of the requested element as issued by the processor. The MMU applies an address-translation function to this address.
If address translation shows that the issued address corresponds to an element currently in the cache, the element is made accessible to the processor. Otherwise, the block containing the element is fetched from the main memory and placed in the cache, and the element is then made available to the processor.

There are three main organization techniques for cache memory, described below. These techniques differ mainly in two aspects:
1. The criterion used to place an incoming main memory block in the cache.
2. The criterion used to replace a cache block with an incoming block (when the cache is full).

1) Direct Mapping
This is the simplest of the three techniques. An incoming main memory block is placed in one specific location among the cache blocks. The assignment is made on the basis of a fixed relationship between the incoming main memory block number i, the cache block number j, and the number of cache blocks N:
j = i mod N

2) Fully Associative Mapping
This technique allows an incoming main memory block to be placed in any available cache block. The address issued by the processor need only have two fields, a Tag field and a Word field. The first identifies the block while it resides in the cache; the second identifies the element within the block that the processor requests. The MMU interprets the processor's address by dividing it into these two fields, as shown in Figure 5.3. The length, in bits, of each field in Figure 5.3 is:
1. Word field = log2 B, where B is the block size in words
2. Tag field = log2 M, where M is the number of blocks in main memory
3. Number of bits in the main memory address = log2 (B × M)
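A minimal sketch of these two placement rules, assuming power-of-two block and cache sizes so the fields can be extracted with shifts and masks (the sizes and the sample address below are hypothetical):

```python
def direct_mapped_slot(block_number, num_cache_blocks):
    """Direct mapping: main memory block i always maps to slot j = i mod N."""
    return block_number % num_cache_blocks

def split_address(address, block_size_words):
    """Fully associative mapping: the address is just Tag | Word.

    The Word field is log2(B) bits wide; everything above it is the tag.
    """
    word_bits = block_size_words.bit_length() - 1  # log2(B) for power-of-two B
    word = address & ((1 << word_bits) - 1)
    tag = address >> word_bits
    return tag, word

# Hypothetical sizes: 8-word blocks, 128-block cache.
print(direct_mapped_slot(block_number=300, num_cache_blocks=128))  # 300 mod 128 = 44
print(split_address(address=0b1011_010, block_size_words=8))       # tag 0b1011, word 0b010
```

Note the consequence of direct mapping visible here: blocks 300, 428, 556, ... all compete for the same slot 44, which is why its expected cache utilization is low even though it needs no associative search.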
Figure 5.3. Associative-mapped address fields (Main Memory Address = Tag field followed by Word field)

3) Set-Associative Mapping
In the set-associative mapping technique, the cache is divided into a number of sets, each containing several blocks. A given main memory block maps to a unique cache set on the basis of the equation s = i mod S, where S is the number of sets in the cache and i is the main memory block number. The processor's address is divided into three fields: Tag, Set, and Word. The Set field identifies the cache set that should contain the block. The Tag field identifies the target block within that set. The Word field identifies the element (word) within the block that the processor requests. The MMU interprets the processor's address by dividing it into these three fields, as shown in Figure 5.4. The length, in bits, of each field in Figure 5.4 is:
1. Word field = log2 B, where B is the block size in words
2. Set field = log2 S, where S is the number of cache sets
3. Tag field = log2 (M/S), where M is the number of blocks in main memory
Here S = N/Bs, where N is the number of cache blocks and Bs is the number of blocks per set.
4. Number of bits in the main memory address = log2 (B × M)

Figure 5.4. Set-associative-mapped address fields (Main Memory Address = Tag field, Set field, Word field)

1.3 Replacement Algorithms
When the cache is full and an incoming block must be placed, one of the resident cache blocks has to be selected for replacement. There are several alternative techniques: replacing a block chosen at random (random selection), replacing the block that has been in the cache longest (first-in first-out, FIFO), and replacing the block that has been used least (least recently used, LRU).

TABLE 5.1. Cache Mapping Techniques: Qualitative Comparison

Mapping technique   Simplicity   Associative tag search   Expected cache utilization   Replacement technique
Direct              Yes          None                     Low                          Not needed
Associative         No           Involved                 High                         Yes
Set-associative     Moderate     Moderate                 Moderate                     Yes

The FIFO technique replaces the block that has been resident in the cache longest. The oldest block in the cache is replaced regardless of the recent pattern of access to it. This strategy requires keeping track of the lifetime of each cache block, so it is not as simple as random selection. Intuitively, the FIFO technique is reasonable to apply in simple systems in which the reference pattern is not a concern.

In the LRU replacement technique, the least recently used cache block is selected for replacement. The LRU technique is the most effective of the three replacement techniques, but it requires a cache controller circuit that tracks references to all blocks during their cache residency. Several possible implementations can do this; one of them uses counters. In this scheme, a counter is associated with each cache block. On a cache hit, the counter of the referenced block is set to 0, and all other counters whose values were less than the referenced counter's original value are incremented by 1.
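This counter-based bookkeeping can be sketched in Python, covering both the hit rule above and the usual miss rule, under which the block with the highest counter is the replacement victim. The four-slot cache and the block name used below are illustrative only.

```python
class LRUCounters:
    """Counter-based LRU over a fixed set of cache slots.

    Each slot carries a counter; the slot with the highest counter
    is the least recently used and is the replacement victim.
    """

    def __init__(self, num_slots):
        # Start with slot k holding counter k, so all counters are distinct.
        self.counters = list(range(num_slots))
        self.blocks = [None] * num_slots  # block stored in each slot

    def hit(self, slot):
        """Reset the hit slot to 0; bump counters that were below its old value."""
        old = self.counters[slot]
        for k in range(len(self.counters)):
            if self.counters[k] < old:
                self.counters[k] += 1
        self.counters[slot] = 0

    def miss(self, new_block):
        """Replace the slot with the highest counter; bump all the others."""
        victim = self.counters.index(max(self.counters))
        for k in range(len(self.counters)):
            self.counters[k] += 1
        self.counters[victim] = 0
        self.blocks[victim] = new_block
        return victim

# Four slots: slot 3 starts out as least recently used (counter 3).
lru = LRUCounters(4)
lru.hit(2)                 # slot 2 becomes most recently used
print(lru.miss("blk A"))   # prints 3: the victim is slot 3
```

Both operations preserve the invariant that the counters are a permutation of 0..N-1, which is what makes the highest counter an unambiguous victim.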
On a cache miss, the block whose counter holds the highest value is selected for replacement, its counter is set to 0, and all other counters are incremented by 1.

1.4 Performance Considerations
1) Cache increases the speed of data processing.
2) It stores program instructions and data that are used repeatedly in the operation of programs, or information that the CPU is likely to need next.
3) The processor can access this information more quickly from the cache than from the main memory.
4) Fast access to these instructions increases the average speed of software.
5) A cache hit occurs when the device successfully retrieves data from the cache.
6) A cache miss occurs when the system looks for data in the cache, fails to locate it, and must seek it elsewhere. Often the hit/miss ratio can be improved by changing the cache block size, i.e., the size of the stored data units.
7) Improved efficiency and performance monitoring capability do more than improve user comfort.
8) Depending on the situation, even a few milliseconds of lag can contribute to huge expenses.

1.5 Virtual Memory
In some large computer systems, virtual memory is a technique that allows the user to build programs as if vast spaces of memory were available, equal to the total amount of auxiliary memory. Every address referenced by the CPU is mapped from a so-called virtual address to a physical address in the main memory. Virtual memory gives programmers the illusion of a rather big memory, even though the machine has a very limited main memory. A virtual memory system includes a mechanism to translate software addresses into the right main memory locations. This is achieved dynamically, during the operation of the software in the CPU.
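The dynamic virtual-to-physical translation just described can be sketched as a simple table lookup. This is an illustrative model only; the page size and the table contents below are hypothetical, and a real MMU performs this in hardware.

```python
PAGE_SIZE = 4096  # hypothetical 4 KB pages, so the offset is 12 bits

# Hypothetical mapping table: virtual page number -> physical frame number.
# A missing entry means the page is not resident in main memory (a page fault).
page_table = {0: 5, 1: 9, 2: 3}

def translate(virtual_address):
    """Map a virtual address to a physical address via the page table."""
    page_number = virtual_address // PAGE_SIZE
    offset = virtual_address % PAGE_SIZE
    if page_number not in page_table:
        raise LookupError(f"page fault on virtual page {page_number}")
    frame = page_table[page_number]
    return frame * PAGE_SIZE + offset

# Virtual address 4100 = page 1, offset 4 -> frame 9, physical 9*4096 + 4.
print(translate(4100))  # prints 36868
```

Note how the offset passes through unchanged; only the page number is translated, which is what keeps the mapping table small.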
The hardware performs the translation, or mapping, automatically through a mapping table. The principles in virtual memory design are the same as those used in cache memory: the fast main memory retains the active segments, while inactive segments stay on the hard disk.

1.6 Virtual Memory Organization
Virtual memory gives a computer programmer an address space many times larger than the currently accessible main memory address space. The data and instructions in this space are placed using virtual addresses, which may in a sense be viewed as artificial. Both the main memory and the auxiliary memory (usually disk memory) store data and instructions. Their actual placement is carried out under the control of the virtual memory system, which maps the virtual addresses of the records to real locations. This framework fetches into main memory the data and instructions requested by the currently running software automatically (i.e., without any action by programmers). Figure 5.5 shows the overall arrangement of virtual memory.

Figure 5.5. Virtual Memory General Scheme (the processor issues virtual addresses; an address translator, under operating system control, produces physical addresses for the main memory, with exchange to auxiliary memory on disk)

1.7 Address Translation
For virtual memory, the address space is divided into fragments of predetermined sizes, each with its own identifier. Depending on the form of virtual memory applied, these fragments are called pages or segments. A virtual memory address is composed of the fragment number and the number of the word or byte within the fragment. For current systems of virtual memory, we distinguish:
1. paged (virtual) memory
2. segmented (virtual) memory
3. segmented (virtual) memory with paging

1) Paged virtual memory
The virtual address is split into two parts: the page number and the displacement (offset) of the word or byte within the page. The number of word (byte) positions on each page is fixed and is a power of 2. For a given virtual memory space, the main memory contains a page table. In this table, each page is described by a page descriptor. The page descriptor holds the physical base address at which the page is to be found; this is either a main memory address or an auxiliary store address.

2) Segmented virtual memory
Segments have varying lengths, and their address spaces have their own identifiers. A segment contains a sequential run of data or commands. Segments may also have defined ownership and usage rights for particular users. Segmentation is a way to maximize usable space, but it is also a way of arranging organized structures with specified access rights and segment protection on a machine used by many users.

3) Segmented paged virtual memory
In the third form of virtual memory, the virtual memory is split into segments, each comprising several pages; the segments are set by a compiler or programmer. A virtual address consists of a segment number, a page number within the segment, and a word or byte offset within the page.

1.8 Conclusion
This unit defined the basic cache and virtual memory structures. Three cache-mapping approaches were assessed and their characteristics compared, namely the direct, associative, and set-associative mappings. The random, FIFO, and LRU replacement techniques were also introduced, and the impact of all three techniques on the cache hit ratio was evaluated. Our virtual memory discussion began with the question of address translation.
Three techniques for address translation were discussed and compared.

• Glossary
1. Instruction Sequencing - The sequence of the instructions in a program.
2. Bus Structure - The communication pathway that transfers data between components within a computer or between computers.
3. ASCII - American Standard Code for Information Interchange.
4. Memory - The part of a computer that enables short-term access to data.

• References
1. https://www.edutechlearners.com/download/Notes/CAO.pdf
2. http://www.mhhe.com/engcs/electrical/hamacher/5e/graphics/ch02_025-102.pdf
