The document summarizes research on cache memory and performance issues. It explains that the purpose of cache memory is to let the CPU quickly store and access the instructions and data it uses most often, rather than reaching out to the slower main memory. The summary then outlines six basic cache optimizations: a larger block size, a larger cache size, and higher associativity can reduce miss rates, while multilevel caches, prioritizing reads over writes, and avoiding address translation during cache indexing can reduce miss penalties and hit times.
The purpose of cache memory is to store program instructions and data that are used repeatedly during program execution, or information that the CPU is likely to need next. The processor can access this information quickly from the cache rather than having to fetch it from the computer's main memory.
Summary: There are six basic cache optimizations:
Reducing the Miss Rate
1. Larger block size (reduces compulsory misses)
2. Larger cache size (reduces capacity misses)
3. Higher associativity (reduces conflict misses)
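To make the conflict-miss point concrete, here is a minimal sketch (all parameters and the access trace are assumed for illustration) of a toy set-associative cache with LRU replacement. Two addresses that map to the same set ping-pong and miss every time in a direct-mapped cache, but coexist once the cache is 2-way associative, even at the same total capacity:

```python
from collections import OrderedDict

def count_misses(addresses, num_sets, ways):
    """Simulate an LRU set-associative cache (1-word blocks) and count misses."""
    sets = [OrderedDict() for _ in range(num_sets)]
    misses = 0
    for addr in addresses:
        s = sets[addr % num_sets]
        if addr in s:
            s.move_to_end(addr)        # hit: refresh LRU order
        else:
            misses += 1                # miss: fetch the block
            if len(s) >= ways:
                s.popitem(last=False)  # evict the least recently used block
            s[addr] = True
    return misses

# Addresses 0 and 8 conflict in both configurations below, but only
# the direct-mapped cache must evict on every access.
trace = [0, 8, 0, 8, 0, 8]
direct  = count_misses(trace, num_sets=8, ways=1)  # misses on every access
two_way = count_misses(trace, num_sets=4, ways=2)  # only 2 compulsory misses
```

The same total capacity (8 blocks) is used in both runs; only the associativity changes, which is exactly the trade-off optimization 3 describes.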
Reducing the Miss Penalty
4. Multilevel Caches
5. Giving Reads Priority over Writes
• E.g., let a read complete before earlier writes still waiting in the write buffer
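The bullet above can be sketched as follows. This is a hedged illustration (the class and names are invented for this example, not from the source): on a read miss, the processor checks the write buffer for the requested address before going to memory, so the read is serviced immediately instead of stalling until earlier buffered writes drain:

```python
class WriteBuffer:
    """Toy write buffer: writes wait here before reaching memory."""

    def __init__(self):
        self.pending = {}            # address -> value awaiting write-back

    def write(self, addr, value):
        self.pending[addr] = value   # buffered; memory is updated later

    def read_check(self, addr):
        # If the read matches a pending write, return the buffered value,
        # letting the read complete ahead of the earlier writes.
        return self.pending.get(addr)

memory = {0x10: 1}                   # stale value still in main memory
wb = WriteBuffer()
wb.write(0x10, 42)                   # write sits in the buffer
value = wb.read_check(0x10)          # read priority: check the buffer first
if value is None:
    value = memory[0x10]             # otherwise fall through to memory
```

Checking the buffer on reads also avoids a correctness hazard: without it, the read would fetch the stale value from memory.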
Reducing the time to hit in the cache
6. Avoiding Address Translation during Cache Indexing
• E.g., overlap TLB and cache access, or use virtually addressed caches
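The payoff of optimization 4 (multilevel caches) can be shown with a worked average memory access time (AMAT) calculation. All the numbers below are assumed for illustration; the formula itself is the standard one, where the L1 miss penalty is replaced by the time to access L2 plus L2's own miss penalty:

```python
# AMAT = HitTime_L1 + MissRate_L1 * MissPenalty_L1
# With an L2 cache, MissPenalty_L1 becomes:
#   HitTime_L2 + MissRate_L2 * MissPenalty_L2

hit_l1, miss_l1 = 1, 0.05       # L1: 1-cycle hit, 5% local miss rate (assumed)
hit_l2, miss_l2 = 10, 0.20      # L2: 10-cycle hit, 20% local miss rate (assumed)
mem_penalty = 100               # cycles to reach main memory (assumed)

# Single-level cache: every L1 miss pays the full memory penalty.
amat_single = hit_l1 + miss_l1 * mem_penalty                      # ~6.0 cycles

# Multilevel cache: most L1 misses are caught by the faster L2.
amat_multi = hit_l1 + miss_l1 * (hit_l2 + miss_l2 * mem_penalty)  # ~2.5 cycles
```

With these assumed numbers, adding the L2 cuts the average access time from about 6 cycles to about 2.5, which is why multilevel caches are listed under reducing the miss penalty.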
Chapter 5, Computer Organization and Design, Fifth Edition: The Hardware/Software Interface (The Morgan Kaufmann Series in Computer Architecture and Design).