List ten advanced optimizations for cache performance.
List all ten.

By: scribed



Answers

First Optimization: Small and Simple First-Level Caches to Reduce Hit Time and Power
The pressure of both a fast clock cycle and power limitations encourages limited size for first-level caches. Similarly, use of lower levels of associativity can reduce both hit time and power, although such trade-offs are more complex than those involving size.
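As a rough illustration (not from the text), the sketch below contrasts the lookup work of a direct-mapped cache with a 4-way set-associative cache of the same capacity; the sizes (32 KiB, 64-byte blocks) are assumptions. The direct-mapped hit path needs only one tag comparison, which is part of why small, simple first-level caches keep hit time and energy per access low.

```c
/* Sketch only: valid bits, data arrays, and replacement are omitted. */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define BLOCK_BITS 6                     /* 64-byte blocks (assumed)     */
#define DM_SETS    512                   /* 32 KiB direct-mapped         */
#define SA_WAYS    4
#define SA_SETS    (DM_SETS / SA_WAYS)   /* same capacity, 4-way         */

static uint64_t dm_tags[DM_SETS];
static uint64_t sa_tags[SA_SETS][SA_WAYS];

/* Direct-mapped: one index, one tag comparison on the hit path. */
static bool dm_hit(uint64_t addr)
{
    uint64_t block = addr >> BLOCK_BITS;
    return dm_tags[block % DM_SETS] == block / DM_SETS;
}

/* 4-way set-associative: four tag comparisons (parallel comparators in
 * hardware) plus a multiplexor to pick the matching way, so the hit path
 * is longer and costs more energy per access. */
static bool sa_hit(uint64_t addr)
{
    uint64_t block = addr >> BLOCK_BITS;
    uint64_t set = block % SA_SETS, tag = block / SA_SETS;
    for (int w = 0; w < SA_WAYS; w++)
        if (sa_tags[set][w] == tag)
            return true;
    return false;
}

int main(void)
{
    uint64_t addr = 0xABCD40;
    uint64_t block = addr >> BLOCK_BITS;

    dm_tags[block % DM_SETS] = block / DM_SETS;      /* install the block */
    sa_tags[block % SA_SETS][2] = block / SA_SETS;

    printf("direct-mapped hit: %d (1 tag compare)\n", dm_hit(addr));
    printf("4-way hit:         %d (%d tag compares)\n", sa_hit(addr), SA_WAYS);
    return 0;
}
```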
Second Optimization: Way Prediction to Reduce Hit Time
Another approach reduces conflict misses and yet maintains the hit speed of a direct-mapped cache. In way prediction, extra bits are kept in the cache to predict the way, or block within the set, of the next cache access. This prediction means the multiplexor is set early to select the desired block, and only a single tag comparison is performed that clock cycle, in parallel with reading the cache data. A miss results in checking the other blocks for matches in the next clock cycle.
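A minimal sketch of the idea, with hypothetical sizes and one prediction field per set: only the predicted way's tag is compared on the fast path, and the other ways are checked (and the predictor retrained) only when that comparison fails.

```c
/* Way-prediction sketch for a 4-way cache; valid bits, data arrays,
 * and replacement are omitted, and all sizes are illustrative. */
#include <stdint.h>
#include <stdio.h>

#define WAYS 4
#define SETS 128
#define BLOCK_BITS 6

static uint64_t tags[SETS][WAYS];
static uint8_t  predicted_way[SETS];   /* extra prediction bits per set */

/* Returns the number of "cycles" spent: 1 if the predicted way hits,
 * 2 if another way has to be checked, 2 + miss_penalty on a real miss. */
static int cache_access(uint64_t addr, int miss_penalty)
{
    uint64_t block = addr >> BLOCK_BITS;
    uint64_t set = block % SETS, tag = block / SETS;

    /* Fast path: only the predicted way's tag is compared this cycle. */
    if (tags[set][predicted_way[set]] == tag)
        return 1;

    /* Slow path: check the remaining ways in the next clock cycle. */
    for (int w = 0; w < WAYS; w++) {
        if (tags[set][w] == tag) {
            predicted_way[set] = (uint8_t)w;   /* retrain the predictor */
            return 2;
        }
    }
    return 2 + miss_penalty;                   /* genuine miss */
}

int main(void)
{
    tags[3][2] = 42;                           /* pretend this block is cached */
    uint64_t addr = (42ULL * SETS + 3) << BLOCK_BITS;

    printf("first access:  %d cycles\n", cache_access(addr, 100)); /* wrong way predicted */
    printf("second access: %d cycles\n", cache_access(addr, 100)); /* predicted way hits  */
    return 0;
}
```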
Third Optimization: Pipelined Cache Access to Increase Cache Bandwidth
This optimization is simply to pipeline cache access so that the effective latency of a first-level cache hit can be multiple clock cycles, giving fast clock cycle time and high bandwidth but slow hits. For example, the pipeline for the instruction cache access for Intel Pentium processors in the mid-1990s took 1 clock cycle, for the Pentium Pro through Pentium III in the mid-1990s through 2000 it took 2 clocks, and for the Pentium 4, which became available in 2000, and the current Intel Core i7 it takes 4 clocks. This change increases the number of pipeline stages, leading to a greater penalty on mispredicted branches and more clock cycles between issuing the load and using the data (see Chapter 3), but it does make it easier to incorporate high degrees of associativity.
Fourth Optimization: Nonblocking Caches to Increase Cache Bandwidth
For pipelined computers that allow out-of-order execution (discussed in Chapter 3), the processor need not stall on a data cache miss. For example, the processor could continue fetching instructions from the instruction cache while waiting for the data cache to return the missing data. A nonblocking cache or lockup-free cache escalates the potential benefits of such a scheme by allowing the data cache to continue to supply cache hits during a miss. This “hit under miss” optimization reduces the effective miss penalty by being helpful during a miss instead of ignoring the requests of the processor. A subtle and complex option is that the cache may further lower the effective miss penalty if it can overlap multiple misses: a “hit under multiple miss” or “miss under miss” optimization. The second option is beneficial only if the memory system can service multiple misses; most high-performance processors (such as the Intel Core i7) usually support both, while lower end processors, such as the ARM A8, provide only limited nonblocking support in L2.
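A small software-level illustration of why nonblocking matters (functions and sizes are made up for the example): in the first loop every load depends on the previous one, so misses are serialized no matter what the cache does; in the second the addresses are all known up front, so an out-of-order core backed by a nonblocking cache can keep several misses in flight at once.

```c
#include <stddef.h>
#include <stdio.h>

struct node { struct node *next; long value; };

/* Dependent loads: the address of the next node is unknown until the
 * current load returns, so misses cannot overlap. */
static long chase_sum(const struct node *p)
{
    long s = 0;
    while (p) {
        s += p->value;
        p = p->next;
    }
    return s;
}

/* Independent loads: a nonblocking cache can service hits and start new
 * misses while earlier misses are outstanding ("miss under miss"). */
static long indexed_sum(const long *a, const size_t *idx, size_t n)
{
    long s = 0;
    for (size_t i = 0; i < n; i++)
        s += a[idx[i]];
    return s;
}

int main(void)
{
    enum { N = 8 };
    struct node nodes[N];
    long a[N];
    size_t idx[N];

    for (size_t i = 0; i < N; i++) {
        nodes[i].value = (long)i;
        nodes[i].next = (i + 1 < N) ? &nodes[i + 1] : NULL;
        a[i] = (long)i;
        idx[i] = N - 1 - i;
    }
    printf("dependent sum = %ld, independent sum = %ld\n",
           chase_sum(nodes), indexed_sum(a, idx, N));
    return 0;
}
```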
Fifth Optimization: Multibanked Caches to Increase Cache Bandwidth
Rather than treat the cache as a single monolithic block, we can divide it into independent banks that can support simultaneous accesses. Banks were originally used to improve performance of main memory and are now used inside modern DRAM chips as well as with caches. The Arm Cortex-A8 supports one to four banks in its L2 cache; the Intel Core i7 has four banks in L1 (to support up to 2 memory accesses per clock), and the L2 has eight banks.
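A toy sketch of sequential interleaving (four banks and 64-byte blocks are assumptions, not figures from the text): the bank is chosen from the block address, so consecutive blocks land in different banks and can be accessed in parallel.

```c
#include <stdint.h>
#include <stdio.h>

#define NUM_BANKS  4
#define BLOCK_BITS 6          /* 64-byte blocks (assumed) */

/* Sequential interleaving: bank = block address modulo number of banks. */
static unsigned bank_of(uint64_t addr)
{
    return (unsigned)((addr >> BLOCK_BITS) % NUM_BANKS);
}

int main(void)
{
    /* Consecutive blocks map to different banks, so a streaming access
     * pattern keeps all banks busy at once. */
    for (uint64_t addr = 0; addr < 8 * 64; addr += 64)
        printf("block address 0x%03llx -> bank %u\n",
               (unsigned long long)addr, bank_of(addr));
    return 0;
}
```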
Sixth Optimization: Critical Word First and Early Restart to Reduce Miss Penalty
This technique is based on the observation that the processor normally needs just one word of the block at a time. The strategy is impatience: don't wait for the full block to be loaded before sending the requested word and restarting the processor. Here are the two specific strategies (a rough timing comparison follows them):
Critical word first - Request the missed word first from memory and send it to the processor as soon as it arrives; let the processor continue execution while the rest of the words in the block are filled in.
Early restart - Fetch the words in normal order, but as soon as the requested word of the block arrives, send it to the processor and let the processor continue execution.
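A back-of-the-envelope comparison with made-up numbers (first word after 30 cycles, one further word per cycle, 8-word blocks) of when the processor receives the word it actually asked for:

```c
#include <stdio.h>

#define FIRST_WORD_LAT  30   /* cycles until the first word arrives (assumed) */
#define WORDS_PER_BLOCK 8    /* one additional word per cycle after that      */

int main(void)
{
    int wanted = 5;          /* processor wants word 5 of the block */

    /* Naive: wait for the whole block before restarting the processor. */
    int whole_block = FIRST_WORD_LAT + (WORDS_PER_BLOCK - 1);

    /* Early restart: words arrive in order 0..7; restart when word 5 shows up. */
    int early_restart = FIRST_WORD_LAT + wanted;

    /* Critical word first: word 5 is requested first, so it arrives first. */
    int critical_first = FIRST_WORD_LAT;

    printf("whole block: %d cycles, early restart: %d, critical word first: %d\n",
           whole_block, early_restart, critical_first);
    return 0;
}
```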
Seventh Optimization: Merging Write Buffer to Reduce Miss Penalty
Write-through caches rely on write buffers, as all stores must be sent to the next lower level of the hierarchy. Even write-back caches use a simple buffer when a block is replaced. If the write buffer is empty, the data and the full address are written in the buffer, and the write is finished from the processor’s perspective; the processor continues working while the write buffer prepares to write the word to memory. If the buffer contains other modified blocks, the addresses can be checked to see if the address of the new data matches the address of a valid write buffer entry. If so, the new data are combined with that entry. Write merging is the name of this optimization. The Intel Core i7, among many others, uses write merging.
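A minimal sketch of a merging write buffer with four single-block entries and per-word valid bits (all sizes are illustrative): a new store first looks for a valid entry covering the same block and merges into it; otherwise it allocates a fresh entry.

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define ENTRIES    4
#define WORDS      8                  /* 8 x 8-byte words per 64-byte block */
#define BLOCK_MASK (~(uint64_t)63)

struct wb_entry {
    bool     valid;
    uint64_t block_addr;
    uint64_t data[WORDS];
    uint8_t  word_valid;              /* bitmap of words written so far */
};

static struct wb_entry wb[ENTRIES];

/* Returns true if the store was merged into an existing entry. */
static bool buffer_write(uint64_t addr, uint64_t value)
{
    uint64_t block = addr & BLOCK_MASK;
    unsigned word  = (unsigned)((addr >> 3) & (WORDS - 1));

    for (int i = 0; i < ENTRIES; i++) {
        if (wb[i].valid && wb[i].block_addr == block) {   /* write merging */
            wb[i].data[word] = value;
            wb[i].word_valid |= (uint8_t)(1u << word);
            return true;
        }
    }
    for (int i = 0; i < ENTRIES; i++) {                   /* allocate entry */
        if (!wb[i].valid) {
            wb[i].valid = true;
            wb[i].block_addr = block;
            wb[i].data[word] = value;
            wb[i].word_valid = (uint8_t)(1u << word);
            return false;
        }
    }
    /* A real buffer would stall here until an entry drains to memory. */
    return false;
}

int main(void)
{
    printf("merged? %d\n", buffer_write(0x1000, 1));  /* new entry        */
    printf("merged? %d\n", buffer_write(0x1008, 2));  /* same block: merge */
    printf("merged? %d\n", buffer_write(0x2000, 3));  /* different block  */
    return 0;
}
```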
Eighth Optimization: Compiler Optimizations to Reduce Miss Rate
Thus far, our techniques have required changing the hardware. This next technique reduces miss rates without any hardware changes.
This magical reduction comes from optimized software - the hardware designer's favorite solution! The increasing performance gap between processors and main memory has inspired compiler writers to scrutinize the memory hierarchy to see if compile-time optimizations can improve performance. Once again, research is split between improvements in instruction misses and improvements in data misses. Optimizations such as loop interchange and blocking are found in many modern compilers.
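One classic example is loop interchange. The sketch below (array dimensions are arbitrary) swaps the loop nest so the inner loop walks the row-major C array sequentially, using every word of a cache block before it is evicted instead of striding through memory.

```c
#include <stdio.h>

#define ROWS 5000
#define COLS 100

static double x[ROWS][COLS];

/* Before interchange: the inner loop jumps COLS*sizeof(double) bytes per
 * iteration, touching a new cache block almost every access. */
static void scale_column_order(void)
{
    for (int j = 0; j < COLS; j++)
        for (int i = 0; i < ROWS; i++)
            x[i][j] = 2 * x[i][j];
}

/* After interchange: the inner loop touches consecutive words, so each
 * cache block is fully used once it is fetched. */
static void scale_row_order(void)
{
    for (int i = 0; i < ROWS; i++)
        for (int j = 0; j < COLS; j++)
            x[i][j] = 2 * x[i][j];
}

int main(void)
{
    scale_column_order();   /* same result, poorer locality   */
    scale_row_order();      /* same result, better locality   */
    printf("x[0][0] = %f\n", x[0][0]);
    return 0;
}
```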
Ninth Optimization: Hardware Prefetching of Instructions and Data to Reduce Miss Penalty or Miss Rate
Nonblocking caches effectively reduce the miss penalty by overlapping execution with memory access. Another approach is to prefetch items before the processor requests them. Both instructions and data can be prefetched, either directly into the caches or into an external buffer that can be more quickly accessed than main memory.
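A toy model of the simplest hardware scheme, next-block (sequential) prefetching into a one-entry stream buffer; all sizes are assumptions. On a miss to block b the prefetcher also requests block b+1, so a sequential sweep misses only on its first access.

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define SETS       64
#define BLOCK_BITS 6

static uint64_t cache_tag[SETS];
static bool     cache_valid[SETS];
static uint64_t stream_buf_block;      /* one-entry prefetch (stream) buffer */
static bool     stream_buf_valid;

static void fill(uint64_t block)
{
    cache_tag[block % SETS]   = block / SETS;
    cache_valid[block % SETS] = true;
}

/* Returns true on a hit in the cache or the stream buffer. */
static bool cache_access(uint64_t addr)
{
    uint64_t block = addr >> BLOCK_BITS;

    if (cache_valid[block % SETS] && cache_tag[block % SETS] == block / SETS)
        return true;                   /* ordinary cache hit */

    if (stream_buf_valid && stream_buf_block == block) {
        fill(block);                   /* move prefetched block into the cache */
        stream_buf_block = block + 1;  /* and prefetch its successor           */
        return true;
    }

    fill(block);                       /* demand miss: fetch the block ...     */
    stream_buf_block = block + 1;      /* ... and prefetch the next one        */
    stream_buf_valid = true;
    return false;
}

int main(void)
{
    /* Sequential sweep: only the first access misses; later blocks are
     * already waiting in the stream buffer. */
    for (uint64_t addr = 0; addr < 4 * 64; addr += 64)
        printf("addr 0x%03llx -> %s\n", (unsigned long long)addr,
               cache_access(addr) ? "hit" : "miss");
    return 0;
}
```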
Tenth Optimization: Compiler-Controlled Prefetching to Reduce Miss Penalty or Miss Rate
An alternative to hardware prefetching is for the compiler to insert prefetch instructions to request data before the processor needs it. There are two flavors of prefetch:
Register prefetch will load the value into a register.
Cache prefetch loads data only into the cache and not the register.
Either of these can be faulting or nonfaulting; that is, the address does or does not cause an exception for virtual address faults and protection violations. Using this terminology, a normal load instruction could be considered a “faulting register prefetch instruction.” Nonfaulting prefetches simply turn into no-ops if they would normally result in an exception, which is what we want.
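A hedged sketch of a nonfaulting cache prefetch using the GCC/Clang __builtin_prefetch intrinsic, which emits a prefetch instruction on targets that have one; the prefetch distance of 16 iterations is only an illustrative tuning choice.

```c
#include <stddef.h>
#include <stdio.h>

#define PREFETCH_AHEAD 16    /* assumed prefetch distance, in loop iterations */

static double sum(const double *a, size_t n)
{
    double s = 0.0;
    for (size_t i = 0; i < n; i++) {
        /* Ask for the element we will need PREFETCH_AHEAD iterations from
         * now: read access (0), keep it in all cache levels (3). The
         * prefetch is nonfaulting, so a bad address is simply ignored. */
        if (i + PREFETCH_AHEAD < n)
            __builtin_prefetch(&a[i + PREFETCH_AHEAD], 0, 3);
        s += a[i];
    }
    return s;
}

int main(void)
{
    double a[1000];
    for (size_t i = 0; i < 1000; i++)
        a[i] = 1.0;
    printf("sum = %f\n", sum(a, 1000));
    return 0;
}
```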


Answered by: 13priya
