L1 and L2 are both types of caches present on the processor, so why is L1 faster than the L2 cache?

Solution


Not always. It depends on what the chip designers wanted. Some chips use DRAM for L2/L3 cache to save on cost.

It requires at least 6 transistors per bit of SRAM, while a DRAM bit is essentially 1 transistor (plus support logic for accesses, which can be shared among multiple other bits).
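To put the 6T-vs-1T figure above in perspective, here is a quick back-of-the-envelope calculation for the data array of a hypothetical 32 KiB cache (the size is an illustrative assumption; tag arrays, sense amplifiers, and the shared DRAM support logic are ignored):

```python
# Rough transistor-count comparison for a 32 KiB cache data array,
# using the figures above: ~6 transistors per SRAM bit vs ~1 per DRAM bit.
CACHE_BYTES = 32 * 1024
bits = CACHE_BYTES * 8

sram_transistors = bits * 6   # 6T SRAM cell
dram_transistors = bits * 1   # 1T1C DRAM cell

print(f"{bits} bits -> SRAM: {sram_transistors:,} transistors, "
      f"DRAM: {dram_transistors:,} transistors "
      f"({sram_transistors // dram_transistors}x more for SRAM)")
```

That 6x transistor budget (and the larger cell area that comes with it) is a big part of why SRAM costs more per bit.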

Most Intel chips use SRAM for L1 and L2, and a memory technology with higher density per dollar for L3 (which Intel calls the Last-Level Cache), such as eDRAM, or potentially STT-RAM in the future.

There is no definitive answer to which technology chip designers use. It's purely a performance vs. cost tradeoff.

Edit: Apparently I cannot spell or use the English language properly.
L1 is faster for two reasons. The first is simply that the CPU always checks the L1 cache before it ever goes to the L2 cache. So by the time the CPU tries to access the L2 cache, it has already attempted an L1 access and confirmed a miss, which costs a certain number of clock cycles. Of course, this reasoning doesn't answer the question of "why not just have one very large L1 cache and skip that round of cache misses?" For that, we turn to the second reason L2 cache is slower.

The second reason is that an L2 cache reference (the actual process of retrieving data from the L2 cache) is slower than an L1 cache reference, perhaps by an order of magnitude. We are still talking about extremely short time periods (on the order of nanoseconds, or even less for an L1 reference). The primary reason is that L2 caches tend to be larger, and the larger the cache, the longer it takes to retrieve data from a specific address. It can get very technical, but think of it like this: say you need to find a particular numbered page in a book. On average, it will take you less time with a short book than a long one, because there are fewer pages to check against the one you are looking for.
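The two reasons above can be folded into the standard average-memory-access-time (AMAT) formula. This is a toy model; the latencies and hit rates below are illustrative assumptions, not measurements of any specific CPU:

```python
# Toy AMAT model: every access pays the L1 lookup; an L1 miss additionally
# pays the L2 lookup; an L2 miss additionally pays the trip to main memory.
# All numbers are illustrative assumptions.
L1_HIT = 4      # cycles for an L1 hit
L2_HIT = 12     # cycles for an L2 hit (checked only after an L1 miss)
DRAM   = 200    # cycles to reach main memory

def amat(l1_hit_rate: float, l2_hit_rate: float) -> float:
    return (L1_HIT
            + (1 - l1_hit_rate) * (L2_HIT
            + (1 - l2_hit_rate) * DRAM))

print(f"with L2:    {amat(0.95, 0.80):.1f} cycles")      # -> 6.6
# Without an L2, every L1 miss goes straight to DRAM:
print(f"without L2: {L1_HIT + (1 - 0.95) * DRAM:.1f} cycles")  # -> 14.0
```

Even though the L2 lookup happens only after a confirmed L1 miss (and is slower per access), it still cuts the average access time roughly in half in this sketch, which is why the hierarchy pays off.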

There are other, minor reasons why accessing L2 cache is slower, but its larger size is probably the single largest contributor.
 
I agree with rgd's approach of giving you a wiki article.

Caches basically hold onto data that is frequently used. Typically, L1 is smaller, hopefully very fast, and has separate caches for instructions and data. This way you can access cached data and exploit locality to the max.

Higher level caches shouldn't be accessed as frequently, and usually they are larger, and combine instructions and data. They are faster than going to external DRAM, but aren't meant to be hit extremely frequently. Some higher level caches are also shared between cores (L3 shared cache) which helps exploit the locality between multiple threads.

Higher-level caches also have higher associativities; the additional logic for comparing more entries of the cache lengthens the critical path, which leads to a slightly slower operating frequency.
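To see where that comparison logic comes from, here is a minimal sketch of a set-associative lookup. The geometry (8 sets, 4 ways, 64-byte lines) is an illustrative assumption; in hardware the per-way tag comparisons happen in parallel, but each extra way is an extra comparator feeding the hit-select mux:

```python
# Sketch of a set-associative cache lookup: the request's tag must be
# compared against the tag stored in every way of the indexed set.
# More ways = more comparators on the critical path.
def lookup(sets, num_ways, addr, line_size=64):
    index = (addr // line_size) % len(sets)       # which set
    tag = addr // (line_size * len(sets))         # what to match
    comparisons = [(way_tag == tag) for way_tag in sets[index][:num_ways]]
    return any(comparisons), len(comparisons)

# 8 sets, 4 ways each; tags initialized empty, so every lookup misses:
cache = [[None] * 4 for _ in range(8)]
hit, compares_done = lookup(cache, 4, addr=0x1040)
print(f"hit={hit}, comparators needed per lookup={compares_done}")
```

A direct-mapped cache (1 way) needs a single comparator; a 16-way L3 needs sixteen, plus the mux to select among them, which is the critical-path cost mentioned above.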
 
OK, got this, but why do we make L1 so small and then rely on L2 and L3 caches? Why don't we make the L1 cache so large that we don't need to rely on L2 and L3 at all?
Also, are all types of caches (L1, L2, and L3) made of static RAM?
 


Fast SRAM is very, very expensive... Also, too much SRAM in L1 would affect the topology of the logic on the CPU die.

And L1 only helps with the immediate locality of data; L2 and L3 still assist in making overall execution faster by holding data with a high probability of access... just less local to the current point of execution.

SRAM is the fastest memory technology we have been using for a long time; it retains its data as long as power is supplied. Other RAM types such as DRAM and flash work differently.
DRAM has a higher access latency, a lower frequency, and requires refresh cycles to prevent the data from dissipating (but is cheaper per bit). Flash memory works using a MOS-based capacitor (with a floating gate), and dielectric breakdown after a large number of write cycles limits the lifetime of the memory... You wouldn't want your CPU to have a limited number of memory write cycles! The write process is also a bit slow for modern CPUs.

That's the short of it. Do some research and it will make more sense.
 

