CACHE PERFORMANCE
When the processor needs to read or write a location in main memory, it first checks for a
corresponding entry in the cache.
If the processor finds that the memory location is in the cache, a cache hit has occurred and the data is read from the cache.
If the processor does not find the memory location in the cache, a cache miss has occurred. On a cache miss, the cache allocates a new entry and copies in the data from main memory; the request is then fulfilled from the contents of the cache.
Cache performance can be improved by using a larger cache block size and higher associativity, and by reducing the miss rate, the miss penalty, and the time to hit in the cache.
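These trade-offs are commonly weighed with the average memory access time (AMAT) model: AMAT = hit time + miss rate × miss penalty. The sketch below uses assumed timing numbers purely for illustration, not figures from the text.

```python
def amat(hit_time_ns: float, miss_rate: float, miss_penalty_ns: float) -> float:
    """Average memory access time: hit time + miss rate * miss penalty."""
    return hit_time_ns + miss_rate * miss_penalty_ns

# Assumed baseline: 1 ns hit time, 5% miss rate, 100 ns miss penalty.
print(amat(1.0, 0.05, 100.0))   # 6.0 ns

# Halving the miss rate (e.g., via higher associativity) helps more here
# than a slightly faster hit, because the miss-penalty term dominates.
print(amat(1.0, 0.025, 100.0))  # 3.5 ns
print(amat(0.8, 0.05, 100.0))   # 5.8 ns
```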
CACHE LINES
Cache memory is divided into equal-size partitions called cache lines.
While designing a computer's cache system, the size of the cache lines is an important parameter.
The cache line size affects many parameters of the caching system.
The following results discuss the effect of changing the cache block (or line) size in a caching
system.
Result-01: Effect of Changing Block Size on Spatial Locality-
The larger the block size, the better the spatial locality.
Explanation-
A larger block brings more neighboring words into the cache on each miss, so subsequent accesses to nearby addresses are more likely to hit.
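This effect is easy to observe. The following is a minimal sketch, assuming NumPy is available; the matrix size is an arbitrary illustrative choice. Summing a row-major matrix along rows (contiguous in memory) uses each fetched cache line fully, while walking down columns strides through memory and wastes most of each line.

```python
import time
import numpy as np

a = np.random.rand(4096, 4096)  # row-major (C-order) layout

t0 = time.perf_counter()
s1 = sum(a[i, :].sum() for i in range(a.shape[0]))  # walk rows: sequential
t1 = time.perf_counter()
s2 = sum(a[:, j].sum() for j in range(a.shape[1]))  # walk columns: strided
t2 = time.perf_counter()

print(f"row-wise:    {t1 - t0:.3f} s")
print(f"column-wise: {t2 - t1:.3f} s")  # typically noticeably slower
```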
Result-02: Effect of Changing Block Size on Cache Tag in Direct Mapped Cache-
In direct mapped cache, block size does not affect the cache tag in any way.
Explanation-
In direct mapped cache, the physical address is split into a tag, a line number, and a block offset. Halving the block size removes one bit from the block offset but doubles the number of lines, adding one bit to the line number; the tag width is therefore unchanged. The same reasoning applies in reverse when the block size is increased.
Result-03: Effect of Changing Block Size on Cache Tag in Fully Associative Cache-
In fully associative cache, on decreasing the block size, the cache tag becomes larger, and vice versa.
Explanation-
Decreasing the block size decreases the number of bits in the block offset.
Since a fully associative address contains only a tag and a block offset, the number of bits in the tag increases by the same amount.
Increasing the block size increases the number of bits in the block offset.
With the increase in the number of bits in the block offset, the number of bits in the tag decreases.
Result-04: Effect of Changing Block Size on Cache Tag in Set Associative Cache-
In set associative cache, block size does not affect the cache tag in any way.
Explanation-
In set associative cache, the physical address is split into a tag, a set number, and a block offset. As in the direct mapped case, decreasing the block size removes bits from the block offset but increases the number of sets, adding the same number of bits to the set number; the tag width is unchanged.
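The three results above can be checked numerically. The sketch below assumes a 32-bit address and a 64 KB cache; these parameters are illustrative choices, not values from the text.

```python
from math import log2

ADDR_BITS = 32           # physical address width (assumed)
CACHE_BYTES = 64 * 1024  # 64 KB cache (assumed)

def tag_bits(block_bytes: int, ways: int) -> int:
    """Tag width for a CACHE_BYTES cache with the given block size.
    ways = 1 -> direct mapped; ways = number of lines -> fully associative."""
    lines = CACHE_BYTES // block_bytes
    sets = lines // ways
    offset = int(log2(block_bytes))
    index = int(log2(sets))  # 0 bits when fully associative
    return ADDR_BITS - index - offset

for block in (16, 32, 64, 128):
    lines = CACHE_BYTES // block
    print(f"block={block:>3}B  direct={tag_bits(block, 1)}  "
          f"4-way={tag_bits(block, 4)}  fully={tag_bits(block, lines)}")
# Direct mapped and 4-way tags stay constant as block size changes;
# the fully associative tag shrinks as the block (offset) grows.
```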
Result-05: Effect of Changing Block Size on Miss Penalty-
On increasing the block size, the miss penalty increases, and vice versa.
Explanation-
When a cache miss occurs, the block containing the required word has to be brought in from main memory.
If the block size is small, the time taken to bring the block into the cache is less, so a smaller miss penalty is incurred.
If the block size is large, the time taken to bring the block into the cache is more, so a larger miss penalty is incurred.
Result-06: Effect of Changing Block Size on Cache Hit Time-
In fully associative cache, a smaller block size means a larger cache tag, and a larger cache tag results in a higher cache hit time.
Explanation-
Cache hit time is the time required to determine whether the required block is in the cache or not.
It involves comparing the tag of the generated address with the tags of the cache lines.
The smaller the cache tag, the less time the comparisons take; hence, a smaller cache tag ensures a lower cache hit time.
The larger the cache tag, the more time the comparisons take; thus, a larger cache tag results in a higher cache hit time.
In designing a computer’s cache system, the cache block or cache line size is an important
parameter. Which of the following statements is correct in this context?
Reasons-
In direct mapped cache and set associative cache, changing the block size has no effect on the cache tag.
In fully associative mapped cache, on decreasing the block size, the cache tag becomes larger.
Thus, a smaller block size does not imply a smaller cache tag in any cache organization.
"A smaller block size implies a larger cache tag" is true only for fully associative mapped cache.
A larger cache tag does not imply a lower cache hit time; rather, the cache hit time is increased.
To view the cache sizes on a Windows PC: 1. Right-click on the Start button and click on Task Manager. 2. On the Task Manager screen, click on the Performance tab, then click on CPU in the left pane. 3. In the right pane, you will see the L1, L2 and L3 cache sizes listed under the "Virtualization" section.
Data moves between the cache and main memory in fixed-size chunks; the size of these chunks is called the cache line size. Common cache line sizes are 32, 64, and 128 bytes. A cache can only hold a limited number of lines, determined by the cache size. For example, a 64 kilobyte cache with 64-byte lines has 1024 cache lines.
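A minimal sketch of that arithmetic: the number of lines is simply the cache capacity divided by the line size.

```python
def num_cache_lines(cache_bytes: int, line_bytes: int) -> int:
    """Lines a cache can hold: capacity divided by line size."""
    return cache_bytes // line_bytes

print(num_cache_lines(64 * 1024, 64))   # 1024, matching the example above
print(num_cache_lines(64 * 1024, 128))  # 512 with 128-byte lines
```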
cache block – The basic unit of cache storage; may contain multiple bytes/words of data.
tag – Because different regions of memory may be mapped into a block, the tag is used to differentiate between them.
valid bit – A bit of information that indicates whether the data in a block is valid (1) or not (0).
In the example, the cache block size is 32 bytes and byte-addressing is used; with four-byte words, this is 8 words per block. There are four hits out of 12 accesses, so the hit rate is 4/12 ≈ 33%.
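The figure can be checked with a minimal direct-mapped cache simulator. The access trace and line count below are assumptions chosen to reproduce the 4-hits-in-12 result, since the original access sequence is not reproduced in the text.

```python
BLOCK_BYTES = 32
NUM_LINES = 8  # assumed small direct-mapped cache for illustration

lines = [None] * NUM_LINES  # each entry holds the tag currently cached

def access(addr: int) -> bool:
    """Return True on a cache hit, False on a miss (and fill the line)."""
    block = addr // BLOCK_BYTES
    index = block % NUM_LINES
    tag = block // NUM_LINES
    if lines[index] == tag:
        return True
    lines[index] = tag  # miss: bring the block in, evicting the old tag
    return False

trace = [0, 4, 16, 132, 232, 160, 1024, 30, 140, 3100, 180, 2180]
hits = sum(access(a) for a in trace)
print(f"{hits} hits / {len(trace)} accesses = {hits / len(trace):.0%}")  # 33%
```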
A cache memory has a line size of eight 64-bit words and a capacity of 4K words.
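The original exercise's questions are not reproduced here, but the basic quantities follow directly from the stated parameters, as this sketch shows.

```python
from math import log2

WORDS_PER_LINE = 8
CAPACITY_WORDS = 4 * 1024  # 4K words
BYTES_PER_WORD = 8         # 64-bit words

print(CAPACITY_WORDS // WORDS_PER_LINE)             # 512 lines
print(int(log2(WORDS_PER_LINE)))                    # 3 word-offset bits
print(int(log2(WORDS_PER_LINE * BYTES_PER_WORD)))   # 6 byte-offset bits
```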
ELEMENTS OF CACHE DESIGN
The key elements of cache design are cache size, block size, mapping function, replacement algorithm, and write policy. These are explained below.
1. Cache Size:
Even moderately small caches can have a significant impact on performance.
2. Block Size:
Block size is the unit of data exchanged between the cache and main memory. As the block size increases from very small to larger sizes, the hit ratio will at first increase because of the principle of locality: data in the vicinity of a referenced word are likely to be referenced in the near future, so larger blocks bring more useful data into the cache.
a. The hit ratio will begin to decrease, however, as the block becomes even larger and the probability of using the newly fetched data becomes less than the probability of reusing the data that has to be moved out of the cache to make room for the new block.
3. Mapping Function:
When a new block of data is read into the cache, the mapping function determines which cache location the block will occupy. Two constraints affect the design of the mapping function. First, when one block is read in, another may have to be replaced.
a. We would like to do this in such a way as to minimize the probability that we will replace a block that will be needed in the near future. The more flexible the mapping function, the more scope we have to design a replacement algorithm that maximizes the hit ratio. Second, the more flexible the mapping function, the more complex is the circuitry required to search the cache to determine whether a given block is present.
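As a concrete illustration of these two constraints, here is a minimal sketch of a set associative lookup; the way count, set count, and FIFO replacement are assumed choices, not prescriptions from the text. A block may live in any of the ways of its set, so the lookup must compare the tag against every way, and a full set forces a replacement decision.

```python
WAYS = 4
NUM_SETS = 128

# Each set holds up to WAYS cached tags.
sets = [[] for _ in range(NUM_SETS)]

def lookup(block: int) -> bool:
    """Return True if the block is cached; insert it (FIFO evict) if not."""
    s = block % NUM_SETS       # the mapping function picks the set
    tag = block // NUM_SETS
    if tag in sets[s]:         # search all ways of the set
        return True
    if len(sets[s]) == WAYS:   # set full: some block must be replaced
        sets[s].pop(0)         # FIFO replacement (illustrative choice)
    sets[s].append(tag)
    return False
```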
4. Write Policy:
If the contents of a block in the cache are altered, it is necessary to write the block back to main memory before replacing it. The write policy dictates when the memory write operation takes place. At one extreme, the writing can occur every time the block is updated.
a. At the other extreme, the writing occurs only when the block is replaced. The latter policy minimizes memory write operations but leaves main memory in an obsolete state. This can interfere with multiple-processor operation and with direct memory access by I/O hardware modules.
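A minimal sketch contrasting the two extremes described above; the dictionary-based structure and dirty flag are assumptions for illustration. Write-through updates main memory on every store, while write-back defers the write until the dirty block is evicted.

```python
memory = {}  # stands in for main memory: address -> value
cache = {}   # address -> (value, dirty_flag)

def store(addr: int, value: int, write_back: bool) -> None:
    if write_back:
        cache[addr] = (value, True)   # mark dirty; memory is now stale
    else:
        cache[addr] = (value, False)  # write-through: memory kept current
        memory[addr] = value

def evict(addr: int) -> None:
    value, dirty = cache.pop(addr)
    if dirty:
        memory[addr] = value          # write-back happens only on eviction
```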
References
Reference Books:
Stallings, W., “Computer Organization and Architecture”, Eighth Edition, Pearson Education.
Web Resources:
https://www.gatevidyalay.com/cache-line-cache-line-size-cache-memory/
https://www.geeksforgeeks.org/cache-memory-in-computer-organization/
https://stackoverflow.com/questions/8107965/concept-of-block-size-in-a-cache