Phenom II vs. Core i7: cache design for gaming



If you have a weak CPU behind it, the GPU's power is wasted, since the GPU cannot render any faster than the CPU can feed it data.

Hence why I said there needs to be a balance. A quad core from before the Core i7 will not bottleneck most current single GPUs, but it will bottleneck multiple ones.



Cache design is great, but you also forget that Intel's cache design and prefetch are superior. Intel's L3 acts as a buffer that holds data which has already been through the L1/L2 caches, so it can be accessed and reused much faster than going out to memory.

But because there are such fast interconnects (QPI/HyperTransport), having to access memory is not a burden. Most of the interconnects run at about 10 GB/s (17 GB/s for tri-channel DDR3) and will only get faster. Being able to push that much data around makes it easy to utilize the memory.

*Edit*

You are just looking at L3. If L3 were a main deciding factor, then the C2Q would not be able to keep up with the Core i7 or Phenom II in high-res gaming, but it does with one or two GPUs.

The biggest cache to worry about is L2, since games will use it far more often than L3. Intel's L3 is a sort of buffer, as I said before: if the game needs data it has already touched, it won't have to go to memory or the HDDs, but can grab it from the L3, which is on-die. I am not too sure how the Phenom's L3 works in comparison.
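If anyone wants to see those cache tiers for themselves, here is a rough pointer-chase sketch in C (the sizes, loop counts, and the gcc -O2 / POSIX timing setup are all just my picks for illustration, not from any specific benchmark). It walks a random cycle so the prefetcher can't predict the next load, and the ns/access figure should step up each time the working set outgrows L1, then L2, then L3:

/* Pointer-chase latency sketch: each load depends on the previous one,
 * so ns/hop approximates raw load latency at that working-set size. */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

static double ns_per_access(size_t n_elems, long hops)
{
    size_t *next = malloc(n_elems * sizeof *next);
    for (size_t i = 0; i < n_elems; i++)
        next[i] = i;
    /* Sattolo shuffle: yields a single cycle covering every element. */
    for (size_t i = n_elems - 1; i > 0; i--) {
        size_t j = (size_t)rand() % i;
        size_t tmp = next[i]; next[i] = next[j]; next[j] = tmp;
    }
    volatile size_t p = 0;               /* volatile keeps the loop honest */
    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (long k = 0; k < hops; k++)
        p = next[p];
    clock_gettime(CLOCK_MONOTONIC, &t1);
    free(next);
    return ((t1.tv_sec - t0.tv_sec) * 1e9
            + (t1.tv_nsec - t0.tv_nsec)) / hops;
}

int main(void)
{
    /* Sweep from 16 KB (fits in L1) up to 32 MB (spills past L3 to DRAM). */
    for (size_t kb = 16; kb <= 32 * 1024; kb *= 2) {
        size_t n = kb * 1024 / sizeof(size_t);
        printf("%6zu KB : %5.1f ns/access\n", kb, ns_per_access(n, 20000000L));
    }
    return 0;
}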
 
Two things:
Why does X3 run SO well on the Phenom II?
And we will soon know if the i7 is truly better, with the new cards coming out and the "bottlenecks" which do sometimes exist (though I've seen others claim far too many). Not saying the i7 won't pull away, just saying we'll at least have a better understanding after those cards come out.
 

Why don't you explain then 😉

I mean the information about how caches are designed
 


You didn't explain why the Phenom pulls ahead just before the game is 100% bottlenecked by the GPU in some games.
 

Before you can keep claiming this, you really should show some proof...

 


You did post one graph where the Phenom performed better just before the game was 100% bottlenecked by the GPU.
 


I'm not seeing it. They are all within a fraction of a percent, and in some of them beyond 1920x1200 the Core i7 performs better.

And in the only one where the Phenom II performs better, the margin is so small it's negligible.
 


How is your math?

If you take 100 frames, and 95 of them are bottlenecked by the GPU while 5 are bottlenecked by the CPU, do you think you will get big differences? See the sketch below.
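To put rough numbers on it (all made up, just to show the scale): say the 95 GPU-bound frames take 33 ms on either CPU, while on the 5 CPU-bound frames a slow CPU takes 40 ms and a much faster one takes 30 ms.

/* Made-up numbers: 95 frames are GPU-bound (33 ms on either CPU),
 * 5 frames are CPU-bound (40 ms on the slow CPU, 30 ms on the fast one). */
#include <stdio.h>

int main(void)
{
    double gpu_ms = 33.0, slow_cpu_ms = 40.0, fast_cpu_ms = 30.0;
    double t_slow = 95 * gpu_ms + 5 * slow_cpu_ms;  /* total ms, slow CPU */
    double t_fast = 95 * gpu_ms + 5 * fast_cpu_ms;  /* total ms, fast CPU */
    printf("slow CPU: %.2f fps\n", 100.0 * 1000.0 / t_slow);  /* ~29.99 fps */
    printf("fast CPU: %.2f fps\n", 100.0 * 1000.0 / t_fast);  /* ~30.44 fps */
    printf("gap: %.1f%%\n", 100.0 * (t_slow / t_fast - 1.0)); /* ~1.5% */
    return 0;
}

Even a CPU that is a third faster on the frames it actually limits only moves the overall average by about 1.5%.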
 
And I mean X3. Every time I've seen it benched, the AMD chips stomp the i7. There's never been an answer, and benchers usually stay away from it.
Anybody here remember any of those benches, and if so, know why?
It's an odd man out, I realise, but it could give us all a better understanding of the differences between the two arches.
 

My English isn't that good; I am trying to understand what you are saying but don't quite follow.

I am not a supporter of any CPU. What I like is advancement in hardware (I am a programmer), and it is important that good solutions get credit for it. Software is almost always behind the hardware, so it isn't always simple to tell what is best in the long run.

If I needed a lot of database performance and didn't care about power use, then I would buy an i7. The i7 is a very good server processor, and I think that is the main target for that CPU. The i7 has two problems there: power use and price. There are a lot of servers that don't need that much performance.
 



What's the matter? Are you incapable of exceeding a second-grade education?

You are the one that does not understand.

BTW... the shark is a symbol commonly used to keep people in a constant state of fear for the purpose of control. Only Tyranny uses it.

 
I agree, SW lags behind, and it's a crapshoot regarding trends, but the i7 shows its power/perf is fine for the most part. Usually it's that SW lag you mentioned where the i7 is hurting in perf, though then it uses much less power as well.
 
If you're talking about Crysis at 2560x1600, that's a difference of 0.83%. If you're talking about Left 4 Dead at 2560x1600, that's a difference of 0.288%. In both cases, the difference is well within normal variation. Basically, the scores are the same to within the resolution of the test.
 


Now you're speculating. The way to prove this is to show some sort of frame log or similar: demonstrate a clear difference in minimum fps, or fps in certain scenarios, and you might have a point. Currently, however, you are just trying to find some possible way to make a PhII sound good, which is getting rather old, honestly.
 

Say what you will, but in all of your posts here, you certainly come across as someone with a very strong anti-Intel bias. I'm not saying that you are necessarily biased, but your posts have certainly not done anything to try to dispel that idea.
 

The problem is that you don't want to understand
 


I am talking about Crysis at 1920x1200:
i7 @ 3.8 GHz vs Phenom II @ 3.64 GHz = a 4.4% clock advantage for the i7
29.42 fps vs 30.41 fps = a 3.4% frame-rate advantage for the Phenom II

4.4% + 3.4% = roughly a 7.8% per-clock difference

And remember that the i7's L3 cache runs faster than the Phenom II's.

This is NOT within normal variation. If you set up ten other computers and run the same test, you will see the same behaviour.
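Spelling that out (and multiplying the two ratios instead of adding the percentages, which lands at almost the same figure), a minimal sketch using only the numbers quoted above:

/* Figures from this post: Core i7 at 3.8 GHz scores 29.42 fps,
 * Phenom II at 3.64 GHz scores 30.41 fps in Crysis at 1920x1200. */
#include <stdio.h>

int main(void)
{
    double i7_fps_per_ghz  = 29.42 / 3.80;   /* ~7.74 fps per GHz */
    double ph2_fps_per_ghz = 30.41 / 3.64;   /* ~8.35 fps per GHz */
    printf("Phenom II per-clock advantage: %.1f%%\n",
           100.0 * (ph2_fps_per_ghz / i7_fps_per_ghz - 1.0));  /* ~7.9% */
    return 0;
}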
 
At least now you're specific.
Then, to really get down to it, we'd need as many examples as we can find to make this a reasonable assumption, or fact, right? Do you have more benchmarks that show this disparity, even at other frequencies?
 
