Interaction Policies with Main Memory

Reads dominate processor cache accesses: all instruction accesses are reads, and most instructions do not write to memory.
Cache Write Policies: Introduction

Consider what happens when the processor issues a read. One of two things will happen: the access hits in the L1 cache, or it misses. On a miss, the request travels down the hierarchy, but eventually the data makes its way from some other level of the hierarchy to both the processor that requested it and the L1 cache. The L1 cache then stores the new data, possibly replacing some old data in that cache block, on the hypothesis that temporal locality is king and the new data is more likely to be accessed soon than the old data was.
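The read path above can be sketched in a few lines of Python. This is a hypothetical one-block L1 cache with made-up addresses and names, purely for illustration:

```python
# Minimal sketch of the read path: a hit is served from L1; a miss
# fetches from the next level and replaces the old block.
memory = {0x100: "old data", 0x200: "new data"}   # the next level down
l1 = {"addr": 0x100, "data": "old data"}          # currently cached block

def load(addr):
    if l1["addr"] == addr:
        return l1["data"]                 # hit: serve directly from L1
    data = memory[addr]                   # miss: fetch from the next level
    l1["addr"], l1["data"] = addr, data   # replace old data (temporal locality)
    return data

print(load(0x200))  # miss: evicts the old block, returns 'new data'
print(load(0x200))  # hit this time
```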
Throughout this process, we make some sneaky implicit assumptions that are valid for reads but questionable for writes. We will label them Sneaky Assumptions 1 and 2:

Sneaky Assumption 1: The old data in the cache block can be safely overwritten, because it is just a copy of data that lives elsewhere in the hierarchy.

Sneaky Assumption 2: If the access is a miss, we absolutely need to go get that data from another level of the hierarchy before our program can proceed.
Why these assumptions are valid for reads: Bringing data into the L1 (or L2, or whatever) just means making a copy of the version in main memory.
If we lose this copy, we still have the data somewhere. If the request is a load, the processor has asked the memory subsystem for some data. In order to fulfill this request, the memory subsystem absolutely must go chase that data down, wherever it is, and bring it back to the processor.
Why these assumptions are questionable for writes: Once we modify data in our cache, ours is the only up-to-date copy, so we would want to be sure that the lower levels know about the changes we made to the data in our cache before just overwriting that block with other stuff. And on a write, the processor is supplying data rather than asking for it, so the memory subsystem has a lot more latitude in how to handle write misses than read misses.
Keeping Track of Modified Data

(More wild anthropomorphism ahead.) Suppose you, the L1 cache, get a write request from the processor. As requested, you modify the data in the appropriate L1 cache block. Now your version of the data at Address XXX is inconsistent with the version in subsequent levels of the memory hierarchy (L2, L3, main memory). Since you care about preserving correctness, you have only two real options:

Write-through: You and L2 are soulmates. Inconsistency with L2 is intolerable to you. To deal with this discomfort, you immediately tell L2 about this new version of the data.

Write-back: You have a more hands-off relationship with L2. Your discussions are on a need-to-know basis. You quietly keep track of the fact that you have modified this block.
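The difference between the two options shows up in how many times L2 hears from you. Here is a sketch, assuming a hypothetical dictionary-based cache (class and variable names are illustrative, not from any real hardware interface):

```python
# Compare write-through and write-back on store hits:
# write-through tells L2 on every store; write-back only marks the block dirty.
class L2Cache:
    def __init__(self):
        self.mem = {}
        self.writes = 0          # count how often L2 hears about new data

    def write(self, addr, value):
        self.writes += 1
        self.mem[addr] = value

class WriteThroughL1:
    def __init__(self, l2):
        self.l2 = l2
        self.data = {}

    def store(self, addr, value):
        self.data[addr] = value
        self.l2.write(addr, value)   # tell L2 immediately ("soulmates")

class WriteBackL1:
    def __init__(self, l2):
        self.l2 = l2
        self.data = {}
        self.dirty = set()

    def store(self, addr, value):
        self.data[addr] = value
        self.dirty.add(addr)         # quietly remember; L2 finds out later

l2a, l2b = L2Cache(), L2Cache()
wt, wb = WriteThroughL1(l2a), WriteBackL1(l2b)
for v in range(3):                   # three stores to the same address
    wt.store(0x10, v)
    wb.store(0x10, v)
print(l2a.writes)  # 3: every store went through to L2
print(l2b.writes)  # 0: L2 hasn't heard anything yet
```

Note that the write-back cache absorbed all three stores with zero L2 traffic, which is exactly why it needs the dirty-bit bookkeeping described below.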
Write-Through Implementation Details (naive version)

With write-through, every time you see a store instruction, you need to initiate a write to L2.
This is no fun and a serious drag on performance.

Write-Through Implementation Details (smarter version)

Instead of sitting around until the L2 write has fully completed, you add a little bit of extra storage to L1 called a write buffer.
If the write buffer does fill up, then L1 actually will have to stall and wait for some writes to go through.
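The buffer-and-stall behavior can be sketched as a small FIFO. The buffer size and names here are arbitrary assumptions for illustration:

```python
from collections import deque

# Sketch of a write buffer: stores go into a small FIFO, and L1 only
# stalls when the buffer is full (in real hardware, entries drain to L2
# in the background; here we drain one entry only when forced to).
BUFFER_SIZE = 4
write_buffer = deque()
stalls = 0

def drain_one():
    write_buffer.popleft()       # L2 accepts one buffered write

def store(addr, value):
    global stalls
    if len(write_buffer) == BUFFER_SIZE:
        stalls += 1              # buffer full: must wait for a slot
        drain_one()
    write_buffer.append((addr, value))

for a in range(6):               # six back-to-back stores
    store(a, a)
print(stalls)  # 2: the first four fit in the buffer; the last two waited
```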
Write-Back Implementation Details

With write-back, we don't inform L2 of the modification right away. Instead, we just set a bit of L1 metadata (the dirty bit -- technical term!) to record that this block has been modified. So everything is fun and games as long as our accesses are hits. Whenever we have a miss to a dirty block and bring in new data, we actually have to make two accesses to L2 (and possibly lower levels): One to let it know about the modified data in the dirty block.
Another to fetch the actual missed data. What this means is that some fraction of our misses -- the ones that overwrite dirty data -- now have this outrageous double miss penalty.

Allocate on Write

A cache can handle a write miss without caching the written data at all, simply passing the write down to the next level. An allocate-on-write strategy would instead load the newly written data into the cache.
If that data is needed again soon, it will be available in the cache.
Write allocate (also called fetch on write): data at the missed-write location is loaded into the cache, followed by a write-hit operation.
In this approach, write misses are similar to read misses. Under a no-write-allocate policy, when reads occur to recently written data, they must wait for the data to be fetched back from a lower level in the memory hierarchy.
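That read-after-write cost is the essential difference between the two policies. A sketch, assuming a hypothetical fully-associative cache that just tracks which addresses are resident (all names illustrative):

```python
# Compare miss counts for write-allocate vs no-write-allocate when a
# write miss is followed by a read of the same address.
def run(write_allocate):
    cache = set()
    misses = 0

    def write(addr):
        nonlocal misses
        if addr not in cache:
            misses += 1
            if write_allocate:
                cache.add(addr)   # fetch the block, then treat as a write hit
            # else: write goes straight to the next level; cache untouched

    def read(addr):
        nonlocal misses
        if addr not in cache:
            misses += 1
            cache.add(addr)       # reads always bring the block in

    write(0x40)                   # write miss
    read(0x40)                    # does this hit or miss?
    return misses

print(run(write_allocate=True))   # 1: the read hits in the cache
print(run(write_allocate=False))  # 2: the read must fetch the data back
```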