AMD CPU speculation... and expert conjecture

Page 340
That's why I said it was niche; not many people do it because of the learning curve. It takes a while to get used to, but once you're there it's amazing. It requires about 2x the GPU power but very little extra CPU power (the game only renders once; it's the driver that splits it into two), so it fits into SLI setups perfectly.
 


Do you read? Because I mentioned two possible successors of the FX. From the above spoiler:

upgrade FX-6000/8000/9000 series for gaming PCs. Those upgrades would be based on a PD refresh. [...] AMD could use those new dies to refresh the six-core and eight-core FX series. They could even change the name, the same way "Opteron" was changed to "Warsaw". FX could be changed to a new brand.

I also speculated about an 8-core high-performance APU replacement...



Do you read? Your rant against Tegra was already answered. I won't bother to reply again, because the high-performance server SoC is not Tegra... Tegra is for mobile.



Seattle is 2x-4x faster than the Opteron-X. Evidently 2x-4x faster is not low-end...



Nope. First, Opterons are being replaced by Xeons because Opterons bottleneck the CUDA GPUs. Second, no, you cannot increase performance by merely adding GPUs. You need to feed them, which implies adding more CPUs.

Third, an x86 supercomputer would be 10000 times faster at the expense of consuming 10000 times more power. You couldn't generate enough power, and you couldn't run the computer.

The ARM supercomputer's goal is to be 1000 times faster while consuming less power than current x86 supercomputers...
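
Just as a rough back-of-the-envelope sketch of the power problem (the 10 MW figure below is a ballpark assumption for a present-day top machine, nothing official):

```python
# Naive scaling at constant efficiency (FLOPS/W): making a machine N times
# faster by simply adding more of the same x86+GPU nodes also draws ~N times
# the power. The 10 MW baseline is a rough assumption, not project data.
def naive_power_mw(current_power_mw, speedup):
    return current_power_mw * speedup

print(naive_power_mw(10, 1000), "MW")  # 10,000 MW -- an entire power plant's worth
# Hence the point of the ARM/low-power approach: improve performance-per-watt
# instead of just adding nodes.
```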

Of course the European exascale supercomputer project is not "NVidia saying 'LOOK HOW POWERFUL TEGRA 27 IS!!!!' or whatever Tegra they're going to use". Stop posting your usual Nvidia rants attacking one of the more interesting HPC projects. You have no idea about the topic... just stop.
 


Your prediction is as incorrect as your prediction that i3s would be excellent new gaming machines because new-gen game engines couldn't scale well above two cores...
 


Do you even read what you write?

1. Your confusion about the word CISC was funny, but I was really referring to all the further nonsense that you posted against RISC/ARM.

2. The chip runs processes in the background of the game, such as social features, recording, checking for online updates... it doesn't run the OS.

3. You said that you "saw" PS4 benchmarks @ 2.6 GHz, and that you expected the "actual final specs will likely fall closer to the 2.4 mark, with possibly a turbo core feature to allow a higher clock under some instances that use fewer cores." Then you badly misinterpreted the PS4 patents doc and told everyone here that the CPU ran at 2.75 GHz. Even hapidupi had to correct your nonsense. Hapidupi! LOL!

4. You claimed that bulk was impossible, that the person who said it was bulk wasn't working for AMD anymore, you insinuated that he didn't know the tech, and you pretended that the delay was caused by the migration to FD-SOI. When people complained, you searched out and posted a talk by GloFo and tried to convince us that FD-SOI was cheaper than bulk and that AMD was going all SOI. "It is evident," you claimed...

5. And yet you still insist on telling every newcomer that an SR 8-core FX CPU is coming for AM3+ and that AM3+ is a good platform to upgrade to?

6. Easy. That is an ancient mobo with "AM4" in the model code. Someone at the database made a mistake and indexed it as an AM4-socket mobo. The rest is only in your imagination. It is not a new mobo with a new socket for a forthcoming 5 GHz SR 8-core FX CPU...
 


Interesting that you mention L3. AMD has improved both the L1 and L2 caches in Steamroller, and apparently they have isolated the reason for the unusually high L3 latency in the Bulldozer architecture; however, AMD says fixing it isn't a top priority... Interestingly, all the Steamroller products that we know of are L3-less processors, aka APUs. It makes sense that they are not fixing this. I doubt they will fix it even for the new PD-based Warsaw.
 


As mentioned above, Dell, HP and others are releasing such servers.



Hmm, FM2+ mobos have been available to purchase for a while now 😉
 


If legit, it confirms that SR is ~30% faster than BD. In fact, in my estimation of Kaveri performance I assumed 30% faster than BD (or ~20% faster than PD) to show that the Kaveri CPU would be at the i5 level of performance.

The regression in FP could be related to SR using a simplified (streamlined) FPU. This is not a serious problem because SR comes in HSA APUs where the GPU will be used as a giant FPU.
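
Just to check that those two numbers are consistent with each other (plain arithmetic on the ratios quoted above, nothing more):

```python
# Claims above: SR ~30% faster than BD, and ~20% faster than PD.
sr_over_bd = 1.30
sr_over_pd = 1.20

# PD-over-BD uplift implied by the two ratios:
pd_over_bd = sr_over_bd / sr_over_pd
print(f"Implied PD over BD: {(pd_over_bd - 1) * 100:.0f}%")  # ~8%, which sounds about right
```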
 


Finally something to watch for
 
Decided to do some CB15 tests real quick, and it's really easy to see the ~20% efficiency hit a process takes when both cores of a module are loaded.

FX-8350 @ 4.7 GHz
Multi: 727cb
Single: 110cb

727 / 8 = 90.875 cb per thread; 90.875 / 110 ≈ 82.6% of the single-thread score

If AMD introduced a 2nd integer scheduler in the front end that relieved some of that efficiency hit, it would explain the improvements. And even with a benchmark as biased as CB, the FX-8350 does remarkably well for its price point.
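
If anyone wants to redo that math with their own scores, here's the same calculation spelled out (quick sketch; the numbers are just the ones from my run above):

```python
# Per-thread scaling efficiency from Cinebench R15 scores
# (values from the FX-8350 @ 4.7 GHz run above).
multi_score = 727     # cb, all 8 threads loaded
single_score = 110    # cb, 1 thread
threads = 8

per_thread = multi_score / threads        # ~90.9 cb per thread
efficiency = per_thread / single_score    # fraction of the single-thread score

print(f"Per-thread score: {per_thread:.1f} cb")
print(f"Scaling efficiency: {efficiency:.1%}")  # ~82.6% -> roughly the module penalty
```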
 


I don't think you understood the points I was making about RISC/CISC; then again, you don't have the architecture knowledge to understand them anyway. So I'm going to write this off as you simply not understanding. In gamer terms: "Learn 2 Research".
--------------------------------
You did not read a word I said:
2. Sony has confirmed an additional chip on the PS4 to run background processes, and the design lead for the entire project stated the OS would run primarily off the GPU, leaving 8 cores available for game developers. Do you think Sony's project lead was lying? Why would he? That information will be easily verifiable...

Can you read it better now?
--------------------------------
There were ES benchmarks run @ 2.6 GHz; Jaguar cores will actually clock quite a bit higher than that, you know that, right? They were likely testing to see what the thermal limits and power consumption would be. I did say that I suspected it would end up closer to ~2.2-2.4 GHz than the 1.6 GHz many were claiming. By the way, what is the clock speed on the PS4 APU? Just curious... oh... that's right... it's 2.0 GHz. Who was right?
--------------------------------
I never said bulk was impossible...I did say it was improbable, and that there would be massive clockspeed and thermal penalties to going bulk. I was not wrong on any of those counts. Your poor mastery of the English language is showing.
--------------------------------
I tell them an SR FX replacement is coming; I also tell them I have no idea what socket it uses or, for that matter, much of anything else about it.
--------------------------------
It could very easily be the case, *very* easily; however, Foxconn should not be advertising AM4-socket motherboard drivers when clearly no such product exists. I was a bit curious when I saw that... and pointed it out to you and a few others to confer. The consensus that it was not what it looked like was reached before you ever wrote back to me.
 


They're probably tweaking it to prepare for 256-bit FMAC FPUs (AVX2 instructions) in Excavator and running into some issues. That would be my guess...

EDIT: The APUs will have stream processors to work as FPUs in HSA- or OpenCL-supported products; otherwise the APU is worth a bit less than the FX-4350.
 


VRZone is saying the Kaveri NDA for desktop ends on December 5th, so it looks like we should know just about everything in a little over a month.

 
as the launch gets closer, rumors and leaks will become more and more credible...as credible as they can be unless they're from wccefghiparrotpromoslidestech.com 😛
have your NaCl ready and watch out for controlled benchmarks, e.g. the 'core i7 killer' ones.

too bad this thread might need a sequel, with only(!) 33 more pages left before it hangs.... <- this is a speculation.
 


Can it beat twin OC'd GTX 770s at ~$300 apiece?

Seriously, that's the way to go this gen. Even as much as I hate SLI'd configs, that's the way to go rather than eating $700 on a single card.
 
Nope. First, Opterons are being replaced by Xeons because Opterons bottleneck the CUDA GPUs. Second, no, you cannot increase performance by merely adding GPUs. You need to feed them, which implies adding more CPUs.

I want to highlight this point: the only reason GPUs need to be "fed" is because most tasks have significant setup time, and single GPU units stink performance-wise. There are, however, some tasks where the setup time is minimal and you get near-perfect scaling. For those tasks, you don't need a traditional CPU; you could run everything off the GPU.

In any case, once the setup is done, for parallel tasks the CPU does little more than push data across the PCIe bus. And the entire point of HSA is to get rid of that bottleneck anyway...
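
Rough sketch of what "feeding" the GPU means in practice (the bandwidth and throughput numbers below are ballpark assumptions, not measurements):

```python
# When does the PCIe transfer dominate? Compare the time to move the data
# against the time to crunch it. Rough assumed figures:
#   PCIe 3.0 x16  ~ 16 GB/s usable
#   mid-range GPU ~ 4 TFLOP/s single precision
pcie_gb_s = 16.0
gpu_tflops = 4.0

def transfer_time_s(data_gb):
    return data_gb / pcie_gb_s

def compute_time_s(gflops_of_work):
    return gflops_of_work / (gpu_tflops * 1000)

# Example: ship 2 GB over the bus, then do 100 GFLOP of work on it.
print(transfer_time_s(2.0))    # ~0.125 s moving data
print(compute_time_s(100.0))   # ~0.025 s computing
# The transfer takes ~5x longer than the math, so the GPU sits idle waiting
# to be fed -- exactly the copy overhead HSA's shared memory is meant to remove.
```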
 


Hence why you need to understand benchmarks, and not go by a single one when discussing performance. For instance, while you could see 20% gains from adding a second integer scheduler, for tasks that don't scale, guess what? No (or only minor) performance benefit.

Hence why benchmark results need to be understood, not just read.
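
A minimal sketch of why a headline gain doesn't show up in every benchmark (the fractions below are made up purely for illustration):

```python
# Amdahl-style dilution: a 20% gain on only part of a workload
# barely moves the overall score.
def overall_speedup(affected_fraction, local_gain):
    # local_gain: speedup on the affected portion, e.g. 1.20 for +20%
    return 1.0 / ((1.0 - affected_fraction) + affected_fraction / local_gain)

for f in (1.0, 0.5, 0.1):  # how much of the run actually benefits
    print(f, round(overall_speedup(f, 1.20), 3))
# 1.0 -> 1.2x, 0.5 -> ~1.091x, 0.1 -> ~1.017x
```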

And I'll say it again: this thread is becoming very similar to the BD thread right before BD launched.
 


All I have to say to that is: "Fool me once, shame on you. Fool me twice, shame on me".

Anyway, if the FPU side of things got a regression for SR, then it means it will suck for gaming, at least until HSA is out there. And even so... how will it work in tandem with graphics? Does anyone have even a remote idea of how that could work? 😵

Also, GPU-intensive applications (yes, games) will suck because the most-used resource (the iGPU) will be trying to do two things at a time.

Man, I really wonder how that is going to turn out.

Cheers!
 


CB15 is a lot fairer. I already took it apart in IDA, and it doesn't use libguide40.dll anymore (an Intel library for OpenMP), and it seems to use a less unfair compiler. You can compare the difference between AMD and Intel chips in 11.5 and 15 and notice how AMD magically closed the gap.





Yum, that's some Nvidia fanboy logic.

A year ago, when 7970 CF was the best value and GTX 670/680 SLI was much better than Titan: "don't deal with multi-card lol, it's terrible."

Now, when the 290X is the fastest single-GPU card: "you're better off buying two cheaper ones, preferably Nvidia!"

Single-GPU cards have always carried a premium over dual-GPU setups, because you lose things with multi-GPU like:

1. Poor scaling in some games
2. Waiting for driver updates
3. SLI/Crossfire only works in full-screen windows
4. Significantly more power consumption (in before "LOOK I PUT TWO 250W graphics cards in my computer but I saved 40W on my CPU! GOD BLESS YOU INTEL!")

APU13 is coming, it's about time for "leaks" to start showing up.

 
And I said I typically don't recommend dual-card setups, for the exact reasons you mentioned. Hence why I'll pay an extra $100 or so to get a higher-tier single card. But when the difference is more than DOUBLE the next card down the line? Price/performance doesn't work out.

What's ironic is that the price of the 770 seems to have skyrocketed since it came out; I got mine for $349 at launch, and it's now in the high-$450 range. Go figure...
 

$350 at launch? MAN, where did you get such cheap prices? I live near a Microcenter and I would not expect something like that even from them 😛 I was referring to the regular 780, which is a good $100 cheaper than 770 SLI, but at the cost of overall gaming performance. As soon as the 7970s went down to the $350s and the 770s jumped to like $420, I said f**k that and went 760, simply because the ASUS 7950s were not that good 😛

I agree with you, I really don't like SLI or CF setups; they can be a pain with stuttering and bad scaling, and it's just not worth it. I usually go for a mid-high-end card and wait out two gens... The 290X certainly looks nice, but I am a happy camper with my 760... The last time I ever tried something ambitious (SLI) was with dual 7900 GTs on an AN8-32X that lasted for like 2 years; it was great for the most part, but a PITA in some games...


Now, to see if the K70 is a worthwhile upgrade over my old 1990s Model M.....
 