AMD CPU speculation... and expert conjecture

Page 54

noob2222

Distinguished
Nov 19, 2007
2,722
0
20,860

One of the inherent problems with water cooling: there is no air movement at the CPU, so the air around the VRM heatsink just sits stagnant.

I put a 120mm fan on my top video card pointing right at the VRM heatsink and temps dropped ~10°C. Mine never got to 90 though, max was ~80; now it only reaches the high 60s.

Also make sure you put LLC (load-line calibration) on Ultra to cut down the required voltage. My 8120 runs 4.7 GHz @ 1.344 V.
 

tonync_01

Honorable
Feb 18, 2012
151
0
10,690


That's what I'm wondering. Is it going to be something like an FX-8370 with just a clock increase, or an FX-8450?
 

noob2222

Distinguished
Nov 19, 2007
2,722
0
20,860

Checking that list, the biggest disparity seems to be in Mass Effect 3, with some of the games (RE5 and Civ V) actually favoring AMD, so let's look at the worst case.
CPU1.png


The difference between the 2500K and the 8150 (even though there are oddities in the chart itself) is ~17% in favor of Intel, nowhere near the 35% lead Starcraft II gives Intel for the same two processors.

My argument still stands: Starcraft II is NOT definitive proof that AMD sucks, as the game itself is well beyond the norm.

One thing to note with Mass Effect 3 is that it seems to favor heavy memory bandwidth, hence the i7 920's and the 3960X's positions with their triple- and quad-channel memory controllers.

It also goes to show that AMD needs to continue to work on their memory controller.

oc-fx-8350-aida64_memory.jpg
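For anyone curious, a crude way to eyeball raw memory bandwidth without AIDA64 is just timing a big block copy. This is only a rough sketch (Python with numpy, buffer size and run count picked arbitrarily), not the methodology AIDA64 uses, but it's enough to see dual-channel vs. quad-channel platforms pull apart:

```python
# Crude memory-bandwidth eyeball test (NOT AIDA64's methodology).
# Assumes numpy is installed; the 256 MiB buffer size is an arbitrary choice.
import time
import numpy as np

N = 256 * 1024 * 1024                  # 256 MiB source buffer
src = np.ones(N, dtype=np.uint8)
dst = np.empty_like(src)

best = float("inf")
for _ in range(5):                     # best of a few runs to dodge warm-up noise
    t0 = time.perf_counter()
    np.copyto(dst, src)                # streams N bytes read + N bytes written
    best = min(best, time.perf_counter() - t0)

print(f"~{2 * N / best / 1e9:.1f} GB/s effective copy bandwidth")
```

It only measures copy bandwidth through one core, so treat the number as a floor, not the platform's peak.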


 


To be fair here, on 60Hz monitors the motion sensation is still lacking in fluidity (is that a word? hahaha).

I can attest to 120Hz being FLUID in all its capital letters, using an HD7970 + 2600K @ 4.4GHz (a friend's setup) against my GTX670 + 2700K @ 4.4GHz. I only have 60Hz monitors, and the motion sensation is a thousand times better on the 120Hz one. And no, I don't ever drop under 60 FPS in a game, and I'm pretty sure crappy V-Sync implementations have a hand in this. Too bad most games can't use Lucid's V-Sync thingy with no issues. Hell, it's even noticeable using Aero on a day-to-day basis.

Well, it's an interesting effect, but it has to be seen to actually understand and accept it. 60Hz with V-Sync is nowhere near as fluid as 100FPS on a 120Hz monitor with V-Sync off. There could be a thousand+ technical words to explain it, but seeing is believing in this case IMO.

Point is: the Phenom II I had at 3.9GHz with the same GTX670 produced a tad worse image fluidity on my 60Hz monitor, but that's because I ran no V-Sync in most games AND topped out at 50-60 FPS in about every game, so frame generation was about the same as the monitor's refresh. It's also interesting to note that if you're CPU capped, then leaving V-Sync off actually pays a little better towards fluidity (under 60FPS or under 120FPS, of course). This is also why RAGE's "fixed FPS" idea is very good, but that's another argument entirely, right gamerk? Hahaha.
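To put some rough numbers on the V-Sync point: with V-Sync on, a finished frame has to wait for the next refresh tick, so every frame interval gets rounded up to a multiple of the refresh period. A quick sketch (the render times below are made up, and this ignores double/triple buffering details) shows why 120Hz feels smoother than 60Hz even when the GPU is pushing similar frames:

```python
# Illustration only: how V-Sync quantizes frame delivery to the refresh rate.
# The render times are hypothetical; the point is the rounding behaviour.
import math

def displayed_interval(render_ms, refresh_hz):
    """With V-Sync on, a frame is shown on the next refresh tick after it finishes."""
    tick = 1000.0 / refresh_hz
    return math.ceil(render_ms / tick) * tick

for render_ms in (12.0, 17.0, 20.0):          # hypothetical frame render times
    at60 = displayed_interval(render_ms, 60)
    at120 = displayed_interval(render_ms, 120)
    print(f"render {render_ms:4.1f} ms -> shown every {at60:4.1f} ms @60Hz, "
          f"{at120:4.1f} ms @120Hz V-Sync")

# A 17 ms frame becomes a 33.3 ms step at 60Hz (a visible hitch) but only
# 25 ms at 120Hz, which is why the finer-grained refresh feels more fluid.
```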

I'm thinking about putting together a cheap FX8350 setup to actually compare it to my i7.

Cheers!

EDIT: Noob, remember that compiling is just the tip of the iceberg. You also have the underlying algorithms used to solve simple problems and a bazillion other things around them that can put a different burden on the CPU. DO NOT FORGET :some_meme_character:
 
This is also why RAGE's "fixed FPS" idea is very good, but that's another argument entirely, right gamerk? Hahaha.

Problem is, it's hard to determine what FPS you are going to get to begin with. Basically, what Rage "tries" to do is, after FPS takes a hit (say, a few heavy-workload frames), reduce quality to make the rest of the game "catch up", leading to a period where you get a good slowdown followed by just as big a speedup. Not good from a latency perspective at all. [Rage would be a VERY interesting test case for latency benchmarking; I'd expect it to jump around a lot, based on what I've read about what it tries to do under the hood.]
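Roughly, the kind of feedback loop being described looks like the sketch below. To be clear, this is not id's actual implementation, just a hedged illustration of dynamic quality scaling toward a fixed frame-time budget, with all the constants invented:

```python
# Sketch of a fixed-FPS feedback loop in the spirit of what Rage reportedly does:
# after slow frames, drop render quality so later frames "catch up" to the budget.
# All constants are invented for illustration; this is not id Tech 5 code.

TARGET_MS = 1000.0 / 60.0   # 60 FPS budget
scale = 1.0                 # render-resolution scale factor (1.0 = native)

def adjust_quality(last_frame_ms, scale):
    """Nudge the resolution scale toward hitting the frame-time budget."""
    error = last_frame_ms / TARGET_MS        # >1.0 means the frame was too slow
    scale /= error ** 0.5                    # damped correction to limit oscillation
    return max(0.5, min(1.0, scale))         # clamp between half res and native

# Simulated spike: a few heavy frames followed by light ones (made-up costs in ms).
frame_costs = [15, 15, 28, 30, 27, 15, 15, 15]
for cost in frame_costs:
    frame_ms = cost * scale                  # crude model: time shrinks with the scale
    scale = adjust_quality(frame_ms, scale)
    print(f"frame took {frame_ms:5.1f} ms, next frame renders at {scale:.2f}x scale")

# Note the pattern described above: a visible slowdown during the spike, then a
# quality (and speed) rebound right after it. Bad news for frame-time consistency.
```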
 

Yuka, as always I appreciate your additions here, but if it was an apples-to-apples test, then there had to be something more to it, as you couldn't flip a coin and come up so skewed.
 

Cazalan

Distinguished
Sep 4, 2011
2,672
0
20,810


Why would you assume they don't have fabrics? There are many to choose from.
 
I'm thinking about putting together a cheap FX8350 setup to actually compare it to my i7.

It is a curious test; the 8350 does surprise in a few instances, so long as you keep them realistic. Consider the i7 a Bugatti produced on an extremely expensive process line, while the 8350 is more of a Porsche: not a pure supercar, but it bags a ton of creature comforts at a fraction of the production cost.

I tested a 3770K against my now-aging i7 980X. While in gaming and single threading the 3770K wins a few due to being 4 years newer, when it comes to pure content-creation muscle the 980/1366 platform just crushes the Z77 platform in every regard. I was asked whether I would take a 3770K and the highest-end Z77 board or go with LGA2011, and the answer is easy: LGA2011. If you want pure power it makes 1155 look like child's play. Sadly the 3770K has come to be seen as a low-cost content beast, yet compared to a 3960X in that realm it gets slaughtered, made even worse when a $190 FX8350 can produce similar results. To me the 1155 i7 is a perversion of everything that made LGA1366 the alpha platform; the 1156 i7s were a disaster, and if you wanted the real deal you got a 1366.
 


The same link you left a few posts back?



Yeah, it would be very interesting to see what FRAPS gets out of that engine. id Tech 5 is quite amazing IMO. I really wish more games could actually use it 8(

It ran so good in my old 4890. SO GOOD I TELL YOU.



Like what? This is just a mere perception thing, but I can tell you that it is noticeable. Most problems with fluidity don't come from raw number crunching on GPUs or CPUs, but from the algorithms used to implement V-Sync. And all the other crap they want to put in the game, of course, but V-Sync is the most important one IMO.



The 980 should be in around the same ballpark as the Veyron; that's going to be quite a nice track match when it's ready.

Anyway, ironically, what limits the FX8350 as a complete computing solution is not its raw number-crunching prowess, but the HT link and dual-channel config. See, MU_Engy? It's all AM3+'s fault!

But in all seriousness, how much more bandwidth and how much better latency do the G and C sockets have over the AM3+ HT link? I know LGA1366 and 2011 have a crippled QPI, but it's still hellishly fast.

Cheers!
 

BeastLeeX

Distinguished
Dec 13, 2011
431
0
18,810


If on a bang-for-buck budget, I would get a 6300 and overclock it to a moderate 4.4-4.5GHz.
 


Yeah, the 8xx/9xx chipset is getting a little old. Not that any new technology it's missing is of real immediate concern (e.g. PCIe 3.0), but it is getting long in the tooth. I predict AMD will refresh it with something new if they release Steamroller CPUs on AM3+. The resulting chipset will likely support PCIe 3.0, USB 3.0 and the whole nine yards, and will likely be the last chipset for AM3+. I predict that AMD will reunify the desktop socket lineup after AM3+ has run its course by releasing a single socket which can support both APU video/PCIe output and 125-140 W CPUs. My *guess* is that AMD is waiting to do so until DDR4 starts to be viable, since AMD doesn't like to do the socket shuffle as much as Intel. They will likely refresh all of the sockets on all of their platforms to support DDR4 at around the same time: replace FM2/AM3+ with a single socket, replace C32/G34 with something that supports on-die PCIe and a DDR4 IMC, and replace mobile with a socket which supports DDR4.

I agree with most of your points, but like I said, AM3 to AM3+ (or the 700, 800 and 900 series) only has HT improvements and SLI support in the latest incarnation as key differentiators. That's about it. Oh, and DDR3 support in the 700 series chipset. Feature-wise they've always been rich, but the problem has always been the upgrades given to each chipset gen. Why no AM3+ with an accompanying chipset and USB3 support right from the start, for example? It should come with the 1000 series chipset, but that's not confirmed either AFAIK. I'd love to be proven wrong, haha.

DDR3 support came from AM3 itself; the chipset is irrelevant, as all K8 and later AMD chipsets are just non-coherent HyperTransport devices. You can technically hook up an NForce 3 to an AM3+ socket if you are so inclined. Again, I expect a "1000" series northbridge with an "SB10xx" southbridge with PCIe 3.0/USB 3.0 support if AMD puts Steamroller on AM3+ instead of a new socket, which I think they'll do.

Anyway, 1155 > 940! hahaha

Cheers!

EDIT: Also, AFAIK, QPI has less latency than HT and more bandwidth.

I raise your 1155-pin socket with 7776 pins' worth of sockets :lol:. QPI is only faster than HT in the absolute newest, fastest 8 GT/s version, which relatively few Xeons have right now.
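For a rough sanity check on the bandwidth side (latency is a separate question): both links are commonly quoted as moving 2 bytes per transfer per direction, so the back-of-envelope numbers fall out like this. The 16-bit-per-direction link width is my assumption here; narrower HT links obviously scale down.

```python
# Back-of-envelope peak link bandwidth, assuming 2 bytes per transfer per direction.
def link_bandwidth(gt_per_s, bytes_per_transfer=2):
    per_direction = gt_per_s * bytes_per_transfer   # GB/s one way
    return per_direction, per_direction * 2         # (one way, both directions)

for name, gt in [("HT 3.1 (3.2 GHz DDR)", 6.4),
                 ("QPI 6.4 GT/s", 6.4),
                 ("QPI 8.0 GT/s", 8.0)]:
    one_way, total = link_bandwidth(gt)
    print(f"{name:22s} ~{one_way:5.1f} GB/s per direction, ~{total:5.1f} GB/s aggregate")

# By this rough math, 6.4 GT/s QPI and HT 3.1 land in the same place; only the
# 8 GT/s QPI parts pull ahead, which matches the point above.
```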
 
Might be getting old, but it works well. What AMD needs to do is come out with a lower-than-90-watt high-performance desktop CPU at an attractive price.

Only time will tell. I bought AM3+ so I could get Steamroller, but I'm curious to see how the final product looks and performs.

 



Be very, very careful with that. That's only telling you what the main executable was built with, not what the individual libraries were compiled with. A common tactic is to use VS to compile the final product, but each component may be compiled beforehand via a different compiler. I was doing that back in high school using a combination of C++ for libraries and VS3 (might have been 4, can't remember) for the main executable + GUI.

To really understand what's going on you'll have to check the instruction streams that are being sent to the CPU. Reverse engineering each library may yield clues as to how it was all put together.
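If you don't want to jump straight to disassembly, one low-effort first pass is just looking at which runtime DLLs each module imports: MSVCR*/MSVCP* point at Visual C++, libgcc/libstdc++-style imports point at MinGW/GCC, and so on. A minimal sketch using the third-party pefile package follows; the file names are placeholders, and statically linked runtimes won't show up, so treat the output as a hint, not proof.

```python
# Rough first pass at guessing what each module was built with: list the DLLs
# it imports and look for compiler-runtime fingerprints (MSVCR*, libstdc++, etc.).
# Requires the third-party "pefile" package; static linking will hide the runtime.
import pefile

def list_imports(path):
    pe = pefile.PE(path)
    if not hasattr(pe, "DIRECTORY_ENTRY_IMPORT"):
        print(f"{path}: no import table")
        return
    print(path)
    for entry in pe.DIRECTORY_ENTRY_IMPORT:
        print("   imports", entry.dll.decode(errors="replace"))

# Placeholder file names: point these at the game exe and its bundled DLLs.
for module in ("SC2.exe", "somelib.dll"):
    list_imports(module)
```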
 


7776 pins? What the hell has that amount? IBM sockets? o_O

And yeah, I also concur on the next socket being reserved for DDR4, moving NB stuff on die and "long lasting" goodies, like in the current 939/940 design.



Well, sub-90W parts are more of a die-shrink necessity than a useful thing in themselves. You can always bin CPUs to target a certain power envelope. Thing is, Intel has a HUGE advantage there, with their process nodes being so good and so far ahead of GF and TSMC. So, for a given uarch, you can target pretty much every power envelope to some degree, and that degree is its "usefulness". You could always clock an i7 down to 400MHz (I think that's the minimum voltage threshold and multi allowed), but I'm sure people won't buy it even if it sips less power than your smartphone (not really the case, but oh well).



Yeah, I know. The main executable is always compiled with some Windows-compatible compiler, and what better for that job than MS's, hahaha.

I don't want to dig any deeper, but it would be an interesting case study :p

Wanna shed some light on how to? I own SC2, so I can provide info on that with some help to do so.



Heh, wait till you get to convert C to ASM and back. And no, that's manually, not with a compiler :p

Cheers!
 
I'm happy building stuff and fixing PCs. Programming's not my passion, but sure, it's someone else's. Also, putting more stuff on the chipset etc. is gonna help, as is shrinking the CPU's process size in nanometers.

DDR4 is gonna be alright I guess, but DDR3 is only somewhat faster than DDR2, not the huge difference I once believed. Hope DDR4 has killer timings and 3k+ megahertz speeds.

 

truegenius

Distinguished
BANNED
Microsoft Xbox Next: Multi-Task OS, Multi-Core x86, Large Hard Drive, But Is It Good for Games?
http://www.xbitlabs.com/news/multimedia/display/20130212235450_Microsoft_Xbox_Next_Multi_Task_OS_Multi_Core_x86_and_Large_Hard_Drive.html

Microsoft Xbox Next (code-named Durango) will be based on multi-core system-on-chip designed by Advanced Micro Devices that will feature eight x86 code-named Jaguar cores running at 1.60GHz

with 768 stream processors on chip, the GPU should provide peak computer performance similar to AMD Radeon HD 7770 (Pitcairn)
 