AMD Piledriver rumours ... and expert conjecture

We have had several requests for a sticky on AMD's yet-to-be-released Piledriver architecture ... so here it is.

I want to make a few things clear though.

Post questions or information relevant to the topic ... anything else will be deleted.

Post any negative personal comments about another user ... and they will be deleted.

Post flame baiting comments about the blue, red and green team and they will be deleted.

Enjoy ...
 
Looks like the ARM mentality has taken over at AMD as well.

http://www.xbitlabs.com/news/cpu/display/20121029134908_BREAKING_NEWS_AMD_Teams_Up_with_ARM_to_Transform_the_Datacenter_Industry.html

Not sure what this will do to their x86 line, but with all these layoffs it certainly means fewer engineers will be on each project.

NVidia is just now making headway with their Tegra 3, mainly due to Surface. I believe it's a 5th-generation part, so they've been at it a while. The better ARM cores are now semi-custom, like the Apple A6 and Krait. The days of vanilla ARM cores are numbered; you can't just drop the stock cores in there and compete.
 
It seems to be the right direction for AMD. While it may be useful to make server chips around ARM, I believe they will also gain the experience to design ARM processors for mobile devices, which is where the growth is in consumer electronics.

I am no expert in ARM or CISC processors, but at the end of the day I believe CISC processors are more powerful. Will that change in the intermediate future (10 years from now)? I'm not sure, but at least an ARM-based processor will likely cost less to manufacture, and it does consume less electricity. I think it would be an alternative for small companies that do not need very powerful CISC-based server CPUs and want to save on operating expenses (i.e. electricity). For medium/large companies, ARM-based servers could potentially be used for less critical tasks.

In the end, only time will tell if there is a market for ARM-based server CPUs, and that will come down to their performance relative to CISC-based server CPUs, taking into consideration the cost of hardware and electricity.

 



Given how poorly Opterons sell compared to Xeons, this might be great news. :bounce:
 


That's true. The main suppliers in my country don't know how long it's going to take before the 8350 ships here; guess I'll have to wait at least two weeks 🙁
 



Could you find me the link to the overclock of just under 12 GHz? Also, do you know the highest STABLE clock that someone has benched for the 8350?
 


All new benches are posted on HWBot for validation.

Status quo for overclocking an FX 8350 on conventional cooling: you are looking at 4.8-5.1 GHz, maybe a bit higher depending on your motherboard and the quality of the chip itself. Most will deliver stable results in that window.
 
Well, it seems the last Vishera processors are out: AMD FirePro APUs on sale in Japan.

Depending on price, they are going to be quite useful for students and low-end workstations.


http://elchapuzasinformatico.com/2012/10/apus-y-placas-base-amd-firepro-empiezan-a-ver-la-luz/
 


The FirePro APU coupled with a lower/mid-range GPU intrigues me... a little bit of both worlds.

Edit: Hmmm, I can't seem to find the answer on Google, so I'll ask it here. Do workstation GPUs pull more PPD with F@H?
 


So-so. Here I was able to get at least an 8120 when the Bulldozers came out, but the problem is that Bulldozer didn't deliver the expected performance, so people stopped buying them.
 


In a word, no. Workstation-class GPUs allow higher-precision floating-point operations that F@H does not require. Typically the workstation GPUs are clocked lower than gaming parts, which affects the PPD more than any added HW or driver optimizations. Also, if you are interested in GPU folding, going AMD is throwing your money away; NVidia cards completely outclass AMD's.

If you have questions about F@H come over to our thread. (<--Clicky!)
 




Yeah, pretty much. They just sold everything out in a week, then almost nobody bought them after that.
 


I haven't done games in ages, and none for any major studios. I've done a LOT of independent work in the emulation community though, and I'm a somewhat active member of the ROM hacking community, so I have a feel for how things play.
 


One of the Cell's eight SPEs is disabled for yield reasons, and a seventh is reserved for use by the OS. That leaves six SPEs for developer use.

And yes, especially in the case of the PS3, you have an apples-to-oranges comparison when it comes to the architectures.
 


But it's BENCHED as an SP game. That's the point.

I don't know of very many games that use more than 1.5 GB of system RAM as is.

Because they're not compiled as Large Address Aware. Even on Win64, Win32 apps that are NOT compiled as Large Address Aware are limited to 2 GB of address space. Throw in overhead for the Win32 API, code space, and the like, and there's your ~1.5 GB of RAM usage. And that limits what you can do.
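If you want to see the 2 GB wall for yourself, here's a quick-and-dirty C sketch (not production code; /LARGEADDRESSAWARE is the MSVC linker flag that sets the LAA bit). Build it as 32-bit with and without the flag and compare:

/* Sketch: reserve address space in 64 MB chunks until the OS refuses.
   A 32-bit build WITHOUT /LARGEADDRESSAWARE tops out near 2 GB; with
   the flag set, on 64-bit Windows, it can reach roughly 4 GB. */
#include <windows.h>
#include <stdio.h>

int main(void)
{
    const SIZE_T block = 64 * 1024 * 1024;  /* 64 MB per reservation */
    SIZE_T total = 0;

    /* MEM_RESERVE claims address space without committing physical RAM */
    while (VirtualAlloc(NULL, block, MEM_RESERVE, PAGE_NOACCESS) != NULL)
        total += block;

    printf("Reserved ~%lu MB of address space\n",
           (unsigned long)(total / (1024 * 1024)));
    return 0;
}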

While devs CAN compile a native 64-bit executable, you don't want to have significantly different feature sets between 32 and 64-bit, so the lowest common denominator drives the game design. So you either have to stream in the map details as a player moves (which eats performance), or pre-load it all up front (which limits what you can do).

Hence why Win32 has to die. You aren't going to see significant advances in games until it does.



And I'm REALLY interested to see how they pulled that off.


EDIT

--------------------------



But no one benches BF3 MP. That's the point. Benching BF3 SP for CPU performance is silly.

Latency IS FPS.

frames per second
seconds per frame

SAME THING, just inverted. Longer latency = lower FPS. 16.7 milliseconds per frame = 60 frames per second. 33.3 milliseconds on one frame = an instantaneous FPS of 30 for that one frame; i.e., minimum FPS is set by the longest frame time. 99th percentile latency ≈ minimum FPS.
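The inversion is just arithmetic; a throwaway sketch, with the values picked to match the examples above:

/* Frame time and FPS are the same measurement, inverted:
   fps = 1000 / frame_time_in_ms */
#include <stdio.h>

int main(void)
{
    const double frame_ms[] = { 16.7, 33.3 };
    for (int i = 0; i < 2; i++)
        printf("%.1f ms/frame  ->  %.0f FPS\n",
               frame_ms[i], 1000.0 / frame_ms[i]);
    return 0;
}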

Not even close.

One way to address that question is to rip a page from the world of server benchmarking. In that world, we often measure performance for systems processing lots of transactions. Oftentimes the absolute transaction rate is less important than delivering consistently low transaction latencies. For instance, here is an example where the cheaper Xeons average more requests per second, but the pricey "big iron" Xeons maintain lower response times under load. We can quantify that reality by looking at the 99th percentile response time, which sounds like a lot of big words but is a fundamentally simple concept: for each system, 99% of all requests were processed within X milliseconds. The lower that number is, the quicker the system is overall. (Ruling out that last 1% allows us to filter out any weird outliers.)

http://techreport.com/review/21516/inside-the-second-a-new-look-at-game-benchmarking/3

Look at this really simple example:

[Chart: Bad Company 2 frame-by-frame latencies, Radeon HD 6870]

[Chart: Bad Company 2 frame-by-frame latencies, GeForce GTX 560 Ti]


Same FPS between these two cards in the same benchmark, but one is clearly superior to the other due to more consistent latencies. In short, it's a chart of how long it takes to create 99% of the frames. If the number is less than 16.7 ms, that indicates a steady 60+ FPS is possible (given powerful enough H/W). If greater, that indicates frames are being lost.

Using my example: you see the AMD GPU has trouble creating one of the frames. It gets skipped over for one cycle, finishes, then begins work on the next frame. Hence why the next two latencies are below the normal average. FPS is identical, though; latencies are not.
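Computing the 99th percentile from a frame-time log is straightforward. A minimal sketch (the latency values here are made up for illustration; with only a handful of samples the 99th percentile is just the worst frame):

/* Sketch: sort per-frame latencies and read off the 99th percentile,
   i.e. the time within which 99% of frames were completed. */
#include <stdio.h>
#include <stdlib.h>

static int cmp_double(const void *a, const void *b)
{
    const double x = *(const double *)a, y = *(const double *)b;
    return (x > y) - (x < y);
}

int main(void)
{
    double ms[] = { 15.8, 16.2, 16.9, 17.1, 16.5, 41.0, 16.0, 16.4,
                    16.6, 16.3 };
    const size_t n = sizeof ms / sizeof ms[0];

    qsort(ms, n, sizeof ms[0], cmp_double);

    size_t idx = (size_t)(0.99 * (double)n);   /* nearest-rank index */
    if (idx >= n) idx = n - 1;

    printf("99th percentile frame time: %.1f ms\n", ms[idx]);
    return 0;
}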
 

Would that be ROMs for the Android system?
 
^^ For older consoles (particularly the NES/FDS and SNES). ROM images aren't OS-specific. I've collaborated on several major ROM hacks over the years...

Case in point: there are a lot of famous games that are a buggy mess under the hood when you REALLY investigate the code. Two famous bugs from Sonic 2 (that most people don't know about):

1: Physics don't apply properly to dropped rings when underwater.
2: If Tails dies (not drowns) underwater, and Sonic leaves the water while Tails is flying back down to him, Tails remains stuck at his underwater speed until the next time he dies.

Both are simple assembly fixes, but the point is, there's a LOT of under-the-hood bugs that users never notice but people like me do.

I'm one of those people who actually finds it fun to decompress a ROM, look at the raw assembly code, and inspect what the devs had to do to make the game work (and trust me, the S/W hacks and workarounds are generally easy to identify).
 
Well, according to the front-page article here on THG, it looks like Windows 8 doesn't really improve performance on the FX 8150, except by maybe 1% over Win7 with the scheduling patches applied.

IIRC there was much discussion a year ago about 15% or more improvement once Win8 came out 😛
 
Take a closer look at single-threaded loads; they actually improve quite nicely. I want to see those same comparisons made with the 8350.

Also, as a side note, the Win7 scheduler handles the i7 incredibly well. I always had to set affinity masks so some programs used certain cores on the Phenom II. I wonder why that is.

Cheers!
 