Unfortunately I'm gonna have to agree here..
sk8er, you did seem to have a short fuse.
Honestly, you may need to start appreciating that some people aren't afraid to ask the 'dumb' questions because they hope someone will simply answer them.
right, to business.
I think I'm hearing your question, FlyBoy.
Are we talking about the way the processor handles cache misses.. how it refreshes its cache, where it puts data, which block gets replaced first, etc..
I'm pretty sure these are the 'algorithms' you're after..
The simplest algorithms are things like FIFO, LIFO, etc.
Of course, AMD and Intel would be way beyond simple answers like these.
Starting from the start (now that's ingenuity)
A processor lumps the code and data it's using into cache, where it can access it a lot quicker than anywhere else.. but when it finds that the next instruction (or group of instructions) isn't in cache, it grabs it from RAM. Of course, cache isn't an endlessly generating plethora of memory cells, so something's gotta make room.
In other words, some of the data in cache must be removed to make room. But which data? The processor doesn't want to remove something it may need in the very near future; preferably it wants to remove something it will never need again. So it tries to guess which data is 'less' useful. These days it's a lot more complex than "let's get the oldest data and remove that", which is a type of FIFO (first in, first out).. most would keep some sort of counter on each block, and the one that hasn't been accessed for the longest time gets removed; that scheme is called LRU (least recently used).
It takes silicon to do these calculations.. that's where FILO and LIFO (first in, last out OR last in, first out)(same thing) might be useful; they take next to no hardware to keep track of, but they're VERY VERY <b>in</b>efficient.
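If you want to play with the idea, here's a toy sketch in C. It's just an illustration I knocked up, not how real hardware does it (a real cache works per set, with the policy wired into silicon), but it shows why LRU usually beats FIFO when one 'hot' block keeps getting reused:

[code]
/* Toy cache-replacement simulation: FIFO vs. LRU on the same access
 * trace.  WAYS blocks of cache, linear scans instead of silicon. */
#include <stdio.h>

#define WAYS 4   /* the cache holds 4 blocks */
#define NACC 12  /* length of the access trace */

/* FIFO: evict whichever block was loaded first */
static int run_fifo(const int *trace, int n)
{
    int cache[WAYS], head = 0, filled = 0, misses = 0;
    for (int i = 0; i < n; i++) {
        int hit = 0;
        for (int j = 0; j < filled; j++)
            if (cache[j] == trace[i]) { hit = 1; break; }
        if (hit) continue;
        misses++;
        if (filled < WAYS) cache[filled++] = trace[i];
        else { cache[head] = trace[i]; head = (head + 1) % WAYS; }
    }
    return misses;
}

/* LRU: evict the block untouched for the longest time; last_use[]
 * plays the role of the per-block counter mentioned above */
static int run_lru(const int *trace, int n)
{
    int cache[WAYS], last_use[WAYS], filled = 0, misses = 0;
    for (int i = 0; i < n; i++) {
        int hit = -1;
        for (int j = 0; j < filled; j++)
            if (cache[j] == trace[i]) { hit = j; break; }
        if (hit >= 0) { last_use[hit] = i; continue; }
        misses++;
        if (filled < WAYS) { cache[filled] = trace[i]; last_use[filled++] = i; }
        else {
            int victim = 0;  /* smallest last_use = least recently used */
            for (int j = 1; j < WAYS; j++)
                if (last_use[j] < last_use[victim]) victim = j;
            cache[victim] = trace[i];
            last_use[victim] = i;
        }
    }
    return misses;
}

int main(void)
{
    /* block 0 is "hot" (reused constantly); 1..5 just pass through.
       LRU keeps block 0 resident; FIFO evicts it right before reuse. */
    int trace[NACC] = {0, 1, 2, 3, 0, 4, 0, 5, 0, 1, 0, 2};
    printf("FIFO misses: %d\n", run_fifo(trace, NACC)); /* 9 */
    printf("LRU  misses: %d\n", run_lru(trace, NACC));  /* 8 */
    return 0;
}
[/code]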
I don't know what AMD and Intel actually do here, but you can bet it's different between them.. just look at where performance drops off when you plot data array size against total cache size..
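You can actually see that drop-off yourself. The sketch below (C again, and only a rough probe: prefetchers, the compiler and timer resolution all muddy the numbers) times the same number of strided reads over bigger and bigger arrays; the ns-per-access figure jumps each time the array outgrows a cache level:

[code]
/* Rough probe of cache sizes: a fixed number of strided reads over
 * arrays of growing size; time per access jumps when the array no
 * longer fits in a given cache level. */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

int main(void)
{
    const long accesses = 1L << 24;            /* same work at every size */
    for (long size = 1L << 12; size <= 1L << 24; size <<= 1) {
        volatile char *buf = malloc(size);
        if (!buf) return 1;
        for (long i = 0; i < size; i++) buf[i] = (char)i; /* warm it up */

        clock_t t0 = clock();
        long sum = 0, idx = 0;
        for (long i = 0; i < accesses; i++) {
            sum += buf[idx];
            idx = (idx + 64) & (size - 1);     /* hop one 64-byte line */
        }
        clock_t t1 = clock();

        printf("%6ld KB: %.2f ns/access (sum=%ld)\n", size / 1024,
               (double)(t1 - t0) * 1e9 / CLOCKS_PER_SEC / accesses, sum);
        free((void *)buf);
    }
    return 0;
}
[/code]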
The second algorithm, which no doubt changes between processors, is the way they write any changes to data in cache back to the RAM and/or HD.
Two that I can think of right now are 'write-through' and 'delayed write-back'.
Write-through means that whenever a cache block is changed by the processor (i.e. the program's data is manipulated by the program itself), the changes are immediately made in the corresponding RAM blocks and wherever else they're needed.
In this way any block can be safely ejected from cache without losing the changes.
Delayed write-back waits for a cache block to be tagged for removal before it bothers to write changes back through to RAM, etc... in this way it can save the time wasted needlessly writing back the same locations over and over.
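A quick sketch of the difference (the struct, the dirty flag and the ram[] array here are all stand-ins I've invented; in real hardware the 'dirty' bit lives with the cache line's tag):

[code]
/* Write-through vs. delayed write-back on a single cache line. */
#include <stdio.h>
#include <stdbool.h>

static int ram[16];  /* stand-in for main memory */

struct line { int addr; int value; bool dirty; };

/* write-through: every store hits the cache AND goes straight to RAM,
 * so eviction never has anything left to write back */
static void store_write_through(struct line *l, int value)
{
    l->value = value;
    ram[l->addr] = value;  /* immediate, possibly redundant */
}

/* delayed write-back: stores only touch the cache and set the dirty
 * bit; RAM is updated once, when the line is finally evicted */
static void store_write_back(struct line *l, int value)
{
    l->value = value;
    l->dirty = true;
}

static void evict(struct line *l)
{
    if (l->dirty) {        /* only dirty write-back lines pay here */
        ram[l->addr] = l->value;
        l->dirty = false;
    }
}

int main(void)
{
    /* write-through: three stores cost three RAM writes */
    struct line a = {3, 0, false};
    store_write_through(&a, 10);
    store_write_through(&a, 20);
    store_write_through(&a, 30);

    /* delayed write-back: three stores cost ONE RAM write, at eviction */
    struct line b = {7, 0, false};
    store_write_back(&b, 10);
    store_write_back(&b, 20);
    store_write_back(&b, 30);
    printf("ram[7] before evict: %d\n", ram[7]); /* still 0 */
    evict(&b);
    printf("ram[7] after  evict: %d\n", ram[7]); /* now 30  */
    return 0;
}
[/code]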
So there are pros and cons either way.. it's another trade-off, and trade-offs are the name of the game in electronics, computers and probably nearly everything.
Hope that gets someone somewhere along the track to understanding.
I spilled coffee all over my wife's nighty... ...serves me right for wearing it?!?