AMD CPU speculation... and expert conjecture


mayankleoboy1




"Facts be damned. They dont mean anything.
Can facts hide the fact that Intel uses CPU vendor id to gimp code on AMD procs. And that AVX code for Ivy bridge does not run on Athlon procs. And that Intel spends resources on a game dev to give poor performance on AMD procs" :whistle: :lol:

 

mayankleoboy1



What does this mean? My English is pretty good, but I can't make heads or tails of this.





As far as optimizing for one and not the other: you check the CPUID feature flags and NOT THE CPU VENDOR. There is no reason to check the vendor string other than to cripple the code, i.e. SSE2 for AMD or AVX for GenuineIntel (a rough sketch of both kinds of check is below).
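For anyone who hasn't poked at this, here is a minimal sketch of the difference between the two checks, assuming GCC/Clang and its <cpuid.h> helper (a full AVX check would also need to confirm OS support via OSXSAVE/XGETBV; this just shows where each check lives):

#include <cpuid.h>   /* GCC/Clang helper; MSVC would use __cpuid from <intrin.h> */
#include <stdio.h>
#include <string.h>

int main(void) {
    unsigned int eax, ebx, ecx, edx;

    /* Feature-flag check: CPUID leaf 1. EDX bit 26 = SSE2, ECX bit 28 = AVX.
       These bits mean the same thing on any x86 vendor. */
    if (__get_cpuid(1, &eax, &ebx, &ecx, &edx)) {
        printf("SSE2: %s\n", (edx & (1u << 26)) ? "yes" : "no");
        printf("AVX : %s\n", (ecx & (1u << 28)) ? "yes" : "no");
    }

    /* Vendor-string check: CPUID leaf 0 returns "GenuineIntel" or
       "AuthenticAMD" in EBX/EDX/ECX. Choosing a code path from this
       string instead of the feature bits is the practice being criticised. */
    if (__get_cpuid(0, &eax, &ebx, &ecx, &edx)) {
        char vendor[13];
        memcpy(vendor + 0, &ebx, 4);
        memcpy(vendor + 4, &edx, 4);
        memcpy(vendor + 8, &ecx, 4);
        vendor[12] = '\0';
        printf("Vendor: %s\n", vendor);
    }
    return 0;
}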

I don't think any of us has said ICC does not discriminate against AMD procs. Of course it does. If it didn't, I would be surprised. Contrary to popular belief, Intel is a for-profit company.

But forget ICC checking for flags, SSE, CPUID, the vendor string and other sh!t: did you read the part that said ICC produces faster code than Visual Studio, which 99.9% of game devs use on Windows?

Fact: did you know that in VS there is no way to specify SSE2/SSE3/SSSE3/SSE4/SSE4.1 optimizations?
There are only "optimize" and "optimize with AVX" options.
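To illustrate the point, here is a rough sketch of how coarse the choice is at the compiler level, assuming a VS2010/2012-era MSVC toolchain (this is how its /arch switch and predefined macros behave as far as I know, not an official list):

/* Build with e.g.
     cl /O2 main.c              -> default codegen (SSE2 baseline on x64)
     cl /O2 /arch:AVX main.c    -> "optimize with AVX"
   There is no /arch level for SSE3/SSSE3/SSE4.x in between. */
#include <stdio.h>

int main(void) {
#if defined(__AVX__)
    puts("built with /arch:AVX");
#elif defined(_M_X64) || (defined(_M_IX86_FP) && _M_IX86_FP >= 2)
    puts("built with SSE2 code generation");
#else
    puts("built with x87 code generation");
#endif
    return 0;
}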
 
OK, enough about the compiler issue.
We know it can be used at times (not often in games), we know it's set up for Intel/made by Intel, and possibly has some purposeful negative effects.

If it applies directly to the ongoing conversation, meaning it can be proven, and it does indeed cast a wide net, then let's leave it alone for now.
 
Well, AMD has said it's a nice improvement, without actually giving perf numbers themselves.
But clocks won't be the main issue, as they'll be pretty played out by then, so it'll have to be pure arch improvements, plus some power management as well.
 

mayankleoboy1

Considering that PD was released sometime in Oct (?), a June release for SR is too soon for major arch changes, IMO. Also consider that AMD has fired a lot of its engineering staff.

So the June release of SR could be a rumour.
 


I've got an 8350 running on a Sabertooth with water cooling and ran into a slight problem. The 8350 overclocks like a champ, I can get it to 5.0GHz, but the Sabertooth's VRMs get way too hot. Running it faster than 4.7GHz ends up with the VRMs slowly heating to 90C while I run HyperPi on all eight cores. Once they hit 90C an emergency cutoff on the board kicks in and everything shuts down. So while the CPU can easily hit high performance, it seems to require a massive amount of power beyond ~4.6GHz. I've been thinking about replacing the VRM heatsink with a VRM waterblock; that should let me crank it to 5GHz or so stable.
 
As for the OP topic: I'm not holding my breath, they simply don't have the resources to really drive a game-changing design. I expect another revision to the predictor and maybe some better L2 cache latencies, not much else. A shrink will bring about lower voltages and thus better temps at a given clock. You can then raise its clock a bit, but past 5.0GHz you really hit a point of diminishing returns.

The future isn't in super powerful CPUs, never was. We've hit a saturation point where low-end CPUs are capable of doing everything that ~90% of consumers want to do and then some. Now it's become a game of miniaturizing everything into small, powerful packages that use little power when not in use. People tend to leave their common-use electronics on 24/7 even though they only use them a fraction of that time, so reducing power during idle has been a big focus in the industry.

Honestly guys (gals), I'd stop looking at who has the ultra-fastest 1920x1080 16xAA 16xAF Super Ultra Insane settings CPU; that's nice for bragging rights but isn't indicative of what gets put in people's homes. Typically it's a low to mid range system from an OEM like Dell / HP, and more and more they're becoming notebooks or all-in-one designs. This is why the APU is what interests me the most: I want to see Richland and what combined performance level it can offer. Tom's ~$500 Mini-ITX system build was a very good preview of what to expect in the future. VIA's little monster is out of the closet finally. Wonder if anyone is gonna build a nano-ITX or even a pico-ITX system.
 
We need AMD to make architecture changes so that even lower-clockspeed processors are more performant. That's what future CPUs need to be: lower clockspeed, but with an optimized architecture that performs very fast and uses less energy.

 

truegenius

Moderator palladin9479 :eek:

The 8350 overclocks like a champ, I can get it to 5.0GHz, but the Sabertooth's VRMs get way too hot. Running it faster than 4.7GHz ends up with the VRMs slowly heating to 90C while I run HyperPi on all eight cores.

The Sabertooth is not able to handle the 8350 :lol:
32+8 VRM anyone :p or PowIRstage® ICs :whistle:

The future isn't in super powerful CPUs, never was. We've hit a saturation point where low-end CPUs are capable of doing everything that ~90% of consumers want to do and then some.

So do you think that a Phenom II X4 @ 4.5GHz is enough?
 
The Sabertooth is not able to handle the 8350 :lol:
32+8 VRM anyone :p or PowIRstage® ICs :whistle:

The board's heatsinks can't handle the heat generated by the VRMs. I have no doubt the VRMs themselves could handle the load; they simply need to be actively cooled. Granted, this is the 1st generation Sabertooth designed for BD. I don't know if they put better cooling on the R2s; I suspect it might be something as simple as resilvering the thermal material between the VRM and its heatsink.

So do you think that a Phenom II X4 @ 4.5GHz is enough?

@ 3GHz would be enough; heck, you could probably get away with 2.5GHz for pretty much everything. The primary concern is to have two to four independent processing units so that one hungry application doesn't consume 100% of the available resources and cause the system to stutter. People tend to multitask, and the ability to concurrently run many independent applications has become paramount. The only holdout is the gaming industry, as when consumers play games they tend to give the game most of their focus and multitasking is kept to a minimum. Though I personally still have Waterfox open with a dozen or more tabs (Treestyletab ftw), plus Ventrilo or Skype with guild mates on.

Thanks everyone /bow
 




There is nothing wrong with AM3+. The only thing it does not support is an on-CPU northbridge/PCIe controller. That is arguably really only important for small form factor/laptop/SoC type of situations where reducing chip count is critically important for packaging reasons and because it might use a watt or two less power. AMD also makes the FM1/FM2 line of sockets with northbridge/PCIe on board anyway for those that want it. Having an off-die northbridge/PCIe controller is not a bad thing since it gives greater flexibility in your choice of CPU vs. I/O capabilities. If you want more than 20 lanes of PCIe on an Intel platform, you have to step waaaay up in CPU and board price to LGA2011. Either that or spend a bunch of dough on an LGA1155 board with PCIe switches. Getting a midrange AM3+ chip with a 990FX board gives you 42 lanes of PCIe without having to spend $300+ on the CPU for the privilege.

The other things people ding about AM3+ are memory bandwidth, that it uses HyperTransport, that it is based on a socket which debuted in 2006 (AM2), and that the 800/900 series chipsets that ship with the platform are old.

- The only one with any real merit is the last one. AMD could easily update the chipset to something with PCIe 3.0 support and probably will if they continue to use AM3+ for much longer.

- Bandwidth: AM3+ has more bandwidth than LGA1155, since LGA1155 only officially supports two channels of DDR3-1600 vs. AM3+'s two channels of DDR3-1866 (see the back-of-the-envelope numbers after this list). LGA2011 has more bandwidth with four channels of DDR3-1600, but LGA2011 is really a server platform, and recent tests by Tom's show very little difference in performance between two and four channels of memory on LGA2011. In any case, G34 is LGA2011's real competitor, and G34 has more bandwidth with four channels of DDR3-1866.

- HyperTransport: has way more than enough bandwidth to feed the off-die NB/SB on a desktop platform. AMD uses HyperTransport turned up just a fuzz faster (2.6 -> 3.2 GHz DDR) to connect eight-die quad Opteron setups for crying out loud, and many of those connections use split links. Intel also ripped off HyperTransport for its QuickPath interconnect. HT bandwidth/latency has really only been a problem at one short period in time when AMD didn't get HT3 ready in time for the 8-way Barcelona quad-core Opterons. An 8-socket server platform is a far cry from the desktop. And even then once AMD upgraded to HyperTransport 3 with the Shanghais, they were fine again and continue to be fine using HyperTransport 3 with the C32/G34 Opterons.

- "Old socket." Who cares exactly when a socket was conceived when it still works? Most people tend to be frustrated by sockets changing too quickly and giving them no upgrade path at all.



At least a decade, if it can happen at all. Quantum tunneling and leakage are supposed to make things below 10-14 nm not viable. But then again, way back in the day UV litho was said to be too difficult, then EUV and immersion were said to be too difficult/expensive, so we'll just have to see.



My impression is that the GPU cores in HSA will largely replace the FPU rather than the CPU. Heavy FP stuff can be made to run on a GPU much more easily than general CPU stuff can. My hunch comes from the FPU now being separate from the cores in Bulldozer/Piledriver (which makes it easy to, in essence, point the FP scheduler at the GPU and let 'er rip), and from the fact that the big successes of GPGPU have all been with heavy FP tasks.
 

BuddiLuva



Boooyaaahhh!
 
The AMD 8350 even @ 4.0 is a beast, and 4.5 is screaming.

Also, who needs more than 8 cores @ 5.0 unless you're using liquid nitrogen?

I'm pretty sure the circuitry and power stage of the motherboard can handle higher if cooled right.

 

mayankleoboy1

Price is of course higher.

What I am trying to find out is, for new games:
1. Which proc gives the best price/performance ratio?
2. Which proc gives the best performance?
3. How much is the performance difference between the above two procs?

 


Well, AM3+ is tied to the 900 series chipset, so the hate is towards the chipset more than the socket to be fair.

I agree with most of your points, but like I said, AM3 to AM3+ (or the 700, 800 and 900 series) only have HT improvements and SLI support in the latest incarnation as key differentiators. That's about it. Oh, and DDR3 support in the 700 series chipset. Feature-wise they've always been rich, but the problem has always been the upgrades given to each chipset gen. Why no AM3+ with an accompanying chipset and USB3 support right from the start, for example? It should come with the 1000 series chipset, but that's not confirmed either AFAIK. I'd love to be proved wrong, haha.

Anyway, 1155 > 940! hahaha

Cheers!

EDIT: Also, AFAIK, QPI has lower latency than HT and more bandwidth.
 

griptwister

So... is the PD refresh supposed to bring faster processors? I heard they were supposed to reach Sandy Bridge performance, but I don't know how true that is, because AMD stated that there are only going to be minor improvements to the architecture. I don't know how AMD is going to squeeze out more performance unless they redesign or change the die size. Is there going to be something like an FX-8450? Or something along those lines? A new AMD champ?
 