Accessing an Item
To locate a word of memory for access using a data cache, for example, we must first determine the line number for that word and see whether the line is in the cache. To do this, the tags must be searched. To support a fast search, the cache is structured as an associative memory, with circuitry that allows an immediate determination of whether a particular tag is present, and a readout of its contents if so. If this test fails, the item is fetched in the normal way from memory, and its line is added to the cache (see below). Since a true associative search of a large cache may be infeasible, the cache is often organized as a set-associative memory, in which the cache is broken into a number of smaller caches called sets. Each set is an independent cache for a portion of the address space (or of the page/segment numbers). The sets in turn, each containing only a few registers, are organized as fully associative memories.
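The set-associative lookup described above can be sketched in Python. This is a minimal illustration, not a hardware description; the line size, set count, class, and method names are all assumptions introduced for the example:

```python
# Illustrative sketch of a set-associative cache lookup.
# LINE_SIZE and NUM_SETS are assumed values, not from the text.

LINE_SIZE = 64      # bytes per cache line (assumed)
NUM_SETS = 128      # number of independent sets (assumed)

class SetAssociativeCache:
    def __init__(self, ways=4):
        # Each set is a small fully associative cache, modeled here
        # as a dict mapping tag -> line data.
        self.sets = [dict() for _ in range(NUM_SETS)]
        self.ways = ways

    def lookup(self, address, memory):
        line_number = address // LINE_SIZE
        set_index = line_number % NUM_SETS   # selects which small cache to search
        tag = line_number // NUM_SETS        # identifies the line within that set
        cache_set = self.sets[set_index]
        if tag in cache_set:                 # associative search of the tags
            return cache_set[tag]            # hit: read out the line
        # Miss: fetch the line from memory in the normal way, then add it.
        base = line_number * LINE_SIZE
        line = memory[base:base + LINE_SIZE]
        if len(cache_set) >= self.ways:      # set is full: evict (FIFO here)
            cache_set.pop(next(iter(cache_set)))
        cache_set[tag] = line
        return line
```

Only the few tags within one set are searched associatively, which is what makes the scheme practical for large caches.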
JDM 3/20/02
Multilevel Caches
Memory today is very inexpensive and becoming increasingly large. Despite the principle of locality, a cache will not function effectively if its size is many orders of magnitude smaller than the memory it is buffering. A natural solution to this problem is to make caches larger as well, perhaps on the order of megabytes instead of kilobytes. Such a cache may be able to hold a sufficient range of information, but it is too large to be managed effectively and accessed quickly. We now need a cache for the cache. This trend leads to the use of multilevel caches: a small cache on the processor chip and a larger cache on a separate chip nearby. These are often called the Level 1 (L1) cache and the Level 2 (L2) cache, respectively.
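As a rough sketch of this arrangement (the function name and the dict-based caches are illustrative assumptions, not from the text), a read checks the small fast L1 first, falls back to the larger L2, and only then goes to memory, filling both levels on the way back:

```python
# Illustrative two-level cache read. Caches are modeled as dicts
# mapping line_number -> data; replacement policy is omitted.

def multilevel_read(line_number, l1, l2, memory_lines):
    if line_number in l1:          # fast on-chip L1 hit
        return l1[line_number]
    if line_number in l2:          # slower but larger L2 hit
        data = l2[line_number]
    else:                          # miss at both levels: go to memory
        data = memory_lines[line_number]
        l2[line_number] = data     # fill L2 on the way back
    l1[line_number] = data         # fill L1 as well
    return data
```

The point of the structure is that most accesses are satisfied at L1, and most of the remainder at L2, so the slow trip to memory is rare.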
Paging
    Method of identification:  Extract page and segment number from virtual address
    Method of access:          Fetch descriptor from paging cache or index into page table; use descriptor to access page
    If not found:              Issue page fault, update and retry
    How updated:               By operating system
    When updated:              On reference or before
    Write strategy:            Write-back
    Replacement strategy:      By operating system; may be complex
    Multiple levels:           Not feasible

Cache (data/instruction)
    If not found:              Fetch from main, update cache
    How updated:               By hardware
    When updated:              On reference
    Write strategy:            Write-through or write-back
    Replacement strategy:      By hardware; FIFO or LRU
    Multiple levels:           Often

Paging cache
    If not found:              Fetch from main, update cache
    How updated:               By hardware
    When updated:              On reference
    Write strategy:            Write-through or write-back
    Replacement strategy:      By hardware; FIFO or LRU
    Multiple levels:           Rarely