AMD "Vishera" FX-Series CPU Specifications Confirmed

Status
Not open for further replies.
[citation][nom]palladin9479[/nom]Not really relevant. Those are all multi-threaded tests and thus gain from having as many CPU targets as possible. Disabling cores would generally be a bad idea in those scenarios. What Blaze and I are talking about is single-threaded or lightly threaded (one main thread and a few secondary threads) applications that do not make use of more than two cores, a.k.a. "games". The above tweaks allow you to give maximum resources to a single thread while also allowing you to clock it much higher than what would normally be allowed by the TDP window. It sacrifices multi-threaded performance for a large increase in single-threaded performance, which is the one area everyone is complaining about. BD is already good at heavily threaded apps; people are angry because they can't get higher "fps" in some program and want to know why.

This makes absolutely no sense... you can't "disable ALUs", only cores. Each BD core has two ALUs and two AGUs inside it; those units process instructions fed to a single register stack. Phenom II / Intel Core each have three generic ALUs (combined ALU/AGU) per core; the Phenom II feeds one register stack per group, while the Core with HT can feed three ALUs with two register stacks. CPU performance does not increase linearly with added ALUs, as x86 is not a multiprocessing-friendly instruction set; going from two to three ALUs provides a small performance increase, if any, especially if those ALUs also pull AGU duty when required. Intel got around this by using HT to allow two streams of instructions to operate on three ALUs, so each stream gets roughly 1.5 ALUs' worth of performance, which is more efficient than trying to keep three ALUs busy with one instruction stream. AMD did its own thing and decided to give two ALUs to each instruction stream and build a real core versus an abstracted core.
Back-end L2 caching and instruction-decode arbitration limit the viability of their design: the two cores fight each other for access to the shared L2 cache, and then only when they're not waiting for the decoder to give them instructions. By disabling one "core" per module you effectively have 50% of the theoretical processing power, yet you also remove a limiter from the first core. So while total theoretical processing power goes down, actual realized processing power goes up. One 8xxx CPU has eight "cores" for a total of 16 ALUs and 16 AGUs; there are very few consumer-oriented tasks that require that much horizontal computing power, yet there are many tasks that could benefit from four "cores". Thus you can tailor your processing power to best suit your needs, as wide or as narrow as you desire. Welcome to my work, try not to go crazy...[/citation]

His link is relevant, and I think you need to take a look at the modern games released today. All games released today are threaded very well with rendering. You will find that having 8 cores is beneficial to gaming, and disabling half your cores amounts to shooting yourself in the foot. See the recent Guild Wars 2 CPU benchmark for the latest example of this. The only games that are single- or dual-threaded as you state are games that are too old to even matter, and these processors already run old games like that well enough. I can't find many instances in my life where I actually need single-threaded performance. You are better off using Process Lasso to auto-set priorities and affinities for your "single-threaded tasks" than disabling cores that actual programs would have otherwise used. I have tried disabling half the cores to see how much impact it would have on gaming, and as a serious gamer with a huge list of 400+ Steam games, I can tell you that disabling half the cores always reduced performance.

There are plenty of consumer applications that benefit from 8 cores, much more so than from four. First and foremost is the ability to multitask; I know a lot of people who do some serious multitasking on their systems. Having extra cores means the idle cores can instantly jump in and start working on a new task if the user decides to juggle several applications. Another benefit is faster loading of software, especially of the OS at startup, which will use all available cores to launch programs, particularly on Linux. Finally, I believe games count as consumer applications, and quad cores aren't cutting it anymore.

http://www.tomshardware.com/reviews/guild-wars-2-performance-benchmark,3268-7.html
 
I'm greatly looking forward to this. I won't buy into any hype; I'll assume nothing until it is released and tested. Hoping to have the cash to get one when the time comes, too.
 


The single-threaded comment was in reference to all the i5-2500K vs. FX-8150 benchmarks that involved one to two CPU cores. Games made from 2011~2012 onward are starting to use four cores and thus do benefit from being multi-threaded.

In either case, you can still get greater performance, as no current game uses eight cores. By going from 8cu/4m to 4cu/4m you reduce your total processing power by 50%, yet increase the efficiency of the four remaining units. If the program you're using only uses four cores, then you'll get higher performance than with all eight turned on (yet only four in use).

It's all situational and requires the user to actually understand processing metrics and how to optimally allocate processing power.
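The 8cu/4m vs. 4cu/4m idea above can also be approximated in software without touching the BIOS, by building an affinity mask that covers only one core per module. A minimal sketch, assuming a Bulldozer-style layout where logical CPUs 2k and 2k+1 are the two cores of module k (the function name and layout are illustrative assumptions, not anything from this thread; verify the real topology via the OS first):

```python
# Sketch: emulate the "one core per module" (4cu/4m) configuration by
# computing an affinity mask covering only the first core of each
# two-core module. Assumes cores 2k and 2k+1 share module k.

def one_core_per_module(num_cores: int) -> set:
    """Return the even-numbered cores: one per two-core module."""
    return set(range(0, num_cores, 2))

mask = one_core_per_module(8)
print(sorted(mask))  # [0, 2, 4, 6]
```

On Linux, `os.sched_setaffinity(0, mask)` would then pin the current process to those cores, giving a per-process version of the tweak instead of a global one.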
 
Yes, games are starting to use 4 cores, but you often see the first 2 cores under much higher usage than the other 2 due to poor optimization. This is part of the reason why you occasionally see an i3 outperform an FX-8150 in some games.
 
The flagship FX-8350 is listed with a 4.00 GHz clock speed (4.20 GHz Turbo Core speed), eight cores, and a TDP of 125 W.

AMD 8350 = AMD 8150 overclocked to 4 GHz, without Radeon cores to make it possible (which makes sense; nobody, I mean nobody, buys such a CPU to play with "onboard" graphics).

Let's see how Piledriver works this time...
 
[citation][nom]juan83[/nom]AMD 8350 = AMD 8150 overclocked to 4 GHz, without Radeon cores to make it possible (which makes sense; nobody, I mean nobody, buys such a CPU to play with "onboard" graphics). Let's see how Piledriver works this time...[/citation]

Piledriver is a pretty big improvement over Bulldozer even at the same clock frequency. Increasing the frequency was just AMD showing off that they not only improved performance at the same frequency, but also managed to decrease power consumption at a given frequency.

[citation][nom]mmstick[/nom]His link is relevant, and I think you need to take a look at the modern games released today. All games released today are threaded very well with rendering. You will find that having 8 cores is beneficial to gaming, and disabling half your cores amounts to shooting yourself in the foot. See the recent Guild Wars 2 CPU benchmark for the latest example of this. The only games that are single- or dual-threaded as you state are games that are too old to even matter, and these processors already run old games like that well enough. I can't find many instances in my life where I actually need single-threaded performance. You are better off using Process Lasso to auto-set priorities and affinities for your "single-threaded tasks" than disabling cores that actual programs would have otherwise used. I have tried disabling half the cores to see how much impact it would have on gaming, and as a serious gamer with a huge list of 400+ Steam games, I can tell you that disabling half the cores always reduced performance. There are plenty of consumer applications that benefit from 8 cores, much more so than from four. First and foremost is the ability to multitask; I know a lot of people who do some serious multitasking on their systems. Having extra cores means the idle cores can instantly jump in and start working on a new task if the user decides to juggle several applications. Another benefit is faster loading of software, especially of the OS at startup, which will use all available cores to launch programs, particularly on Linux. Finally, I believe games count as consumer applications, and quad cores aren't cutting it anymore. http://www.tomshardware.com/review [...] 268-7.html[/citation]

Can you tell me of even one game that can use a full 8 cores, let alone use them well? The only game I know of that would come close is BF3 multi-player, and it maxes out at six threads if I remember correctly (technically, it has two threads for the game engine and four for handling the online players, and some games have four for the game engine, so the next BF game might use 8 threads if they update the engine). This is why the i7s don't actually beat the i5s by much at all in gaming (heck, the cache probably made a bigger difference than the Hyper-Threading did).

Also, I gave Process Lasso a try for a while. It turned out not to be intended to be free (although I suppose I could have tricked it into resetting its trial time whenever it expired, but that would be foul play against the devs). There are many non-gaming tasks that can and do benefit from eight and more cores, but gaming isn't one of them, and many consumer tasks simply don't either. Heck, many professional tasks are single-threaded strictly because they can't be made to run on more threads due to too many dependencies (although a Bulldozer module might be able to do something about that, but who knows).

Also, disabling half the cores and disabling one core per module are two very different things. Disabling half the cores (i.e., two modules) doesn't grant the significant performance-per-Hz benefit that disabling one core per module does; instead it basically gives you a 4100 with different Turbo frequencies. It forces the background tasks to run on the same cores as the game without giving them the performance boost, and that's a bad idea.

Also, a big part of this and what you said actually gives AMD an advantage in this situation. I can always take an FX-8120 or 8150 et cetera and reverse this trick, but you can't make an i5 into an i7. So, for as long as single- and lightly-threaded performance is more important than highly threaded performance, I can easily compete with Intel using my little tricks, and then switch back when highly threaded performance is more important. Now that is some serious future-proofing that Intel doesn't offer.

EDIT: Especially considering that by the time eight slow cores are better than four fast cores, Windows will already have improved its thread scheduling, so you'll get most of the core-disabling performance benefit without sacrificing the full core configuration. Overclocks would still not go as high and resource sharing would still be going on, so although improved scheduling beats stock, lightly threaded performance would still be better with full disabling; but if games and such can use eight cores very well by then, that's not a big deal.
 
[citation][nom]spookyman[/nom]And yet they are still slower than the i5-2500K.[/citation]

It will probably depend on the workload and on where the drivers stand at the time. Part of me wants to say it will beat it in overall professional workloads if AMD pulls this off nicely. It'll still be a while before it catches up on the gaming front.
 
[citation][nom]mmstick[/nom]His link is relevant, and I think you need to take a look at the modern games released today. All games released today are threaded very well with rendering. You will find that having 8 cores is beneficial to gaming, and disabling half your cores amounts to shooting yourself in the foot. See the recent Guild Wars 2 CPU benchmark for the latest example of this. The only games that are single- or dual-threaded as you state are games that are too old to even matter, and these processors already run old games like that well enough. I can't find many instances in my life where I actually need single-threaded performance. You are better off using Process Lasso to auto-set priorities and affinities for your "single-threaded tasks" than disabling cores that actual programs would have otherwise used. I have tried disabling half the cores to see how much impact it would have on gaming, and as a serious gamer with a huge list of 400+ Steam games, I can tell you that disabling half the cores always reduced performance. There are plenty of consumer applications that benefit from 8 cores, much more so than from four. First and foremost is the ability to multitask; I know a lot of people who do some serious multitasking on their systems. Having extra cores means the idle cores can instantly jump in and start working on a new task if the user decides to juggle several applications. Another benefit is faster loading of software, especially of the OS at startup, which will use all available cores to launch programs, particularly on Linux. Finally, I believe games count as consumer applications, and quad cores aren't cutting it anymore. http://www.tomshardware.com/review [...] 268-7.html[/citation]

I play a lot of Steam games, and I can tell you that anything Source-based doesn't benefit from more than 2 cores, 2 and a half tops. StarCraft II is CPU-limited, but doesn't actually use more than 1 core (as per Task Manager).

For compilation under Linux/Windows, I use GCC. My non-threaded code runs faster on Phenom II than on BD, but slower than on Sandy Bridge (a lot slower per clock).
 
[citation][nom]palladin9479[/nom]The single-threaded comment was in reference to all the i5-2500K vs. FX-8150 benchmarks that involved one to two CPU cores. Games made from 2011~2012 onward are starting to use four cores and thus do benefit from being multi-threaded. In either case, you can still get greater performance, as no current game uses eight cores. By going from 8cu/4m to 4cu/4m you reduce your total processing power by 50%, yet increase the efficiency of the four remaining units. If the program you're using only uses four cores, then you'll get higher performance than with all eight turned on (yet only four in use). It's all situational and requires the user to actually understand processing metrics and how to optimally allocate processing power.[/citation]

Wrong. Battlefield almost maxes out all 8 cores of my underclocked 8150. I have it running at 2.8 GHz with about 90% usage on all 8 cores, and it runs butter-smooth.
 
[citation][nom]juan83[/nom]AMD 8350 = AMD 8150 overclocked to 4 GHz, without radeon cores lo make it posible, (which makes sense, nobody, i mean, nobody buys such a CPU and play with "onboard" graphics..Let's see how piledriver works this time...[/citation]

The 8150 didn't have any radeon cores.
 
[citation][nom]blazorthon[/nom] Increasing the frequency was just AMD showing off that they not only improved performance at the same frequency, but also managed to decrease power consumption at a given frequency.Can you tell me of even one game that can use a full 8 cores, let alone use them well?[/citation]

BF3 maxes out all 8 cores of my underclocked 8150.
 
[citation][nom]ashinms[/nom]Wrong. Battlefield almost maxes out all 8 cores of my underclocked 8150. I have it running at 2.8 GHz with about 90% usage on all 8 cores, and it runs butter-smooth.[/citation]



Can you give more information before declaring that? Are you running any other programs while you're gaming? What background programs do you have running, what OS are you using, and can you give us screenshots of per-core CPU utilization before, during, and after gaming (maybe on another monitor)? Did you take into account that with that much of an underclock, even background things such as Windows itself, AV and related programs, and much more can have a significant impact on CPU utilization? What maps are you playing and how many players are on them?

Also, 90% is not maxing out. Assuming that you're playing multi-player, that's up to six cores that the game alone can eat up. Windows and such need to run on something, so they probably take most of one core and some of another (or split evenly between them, whatever). The point is that just because your utilization is that high doesn't necessarily mean that BF3 is maxing out all of your cores. I know for a fact that it only has up to six threads (two for the game engine and four to handle other things such as players).

You could test this by overclocking rather than underclocking to minimize the impact of background tasks, and after that you can also temporarily disable one module to get three dual-core modules and compare gaming performance. Considering that I already know the FX-8150 can't even beat the FX-4170 at stock in most games, and Tom's has proven as much, I'm quite sure of what I'm saying. Also, considering that the FX-8150 has almost as much performance with all eight cores stressed as an i7 does with all four cores and virtual threads stressed, you'd be beating the i5s with that CPU (even at similar CPU frequencies), and I find it unlikely that you're beating the i5s with the full core configuration of an FX CPU in any game. It would need to use eight threads effectively to do that.
 
@palladin and @mmstick, I was kind of agreeing with both your points... sorry if that was lost in my post.

Palladin says giving max resources to a single thread, by disabling cores within modules, will allow the single thread to fly, namely in games. I agree with that.

The following links seem to support this...

http://techreport.com/articles.x/21865

and http://www.hardware.fr/articles/842-9/efficacite-cmt.html

I am less clear about what mmstick is saying, but I *think* he says more BD cores are needed to run heavily threaded integer loads, which is again correct, as proved by various reviews and by the French page I linked.

I wish there was an automatic way for software to use BD's strengths optimally: assigning only 1 core per module until all modules have 1 core working, and then, if there are more than 4 threads, proceeding to fill the *other* core in every module.
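That fill order can be sketched as a simple ordering function. The pairing of cores to modules below is an assumption about the topology (cores 2m and 2m+1 on module m); a real scheduler would read the layout from the OS:

```python
# Sketch of the BD-friendly fill order: hand threads one core per module
# until every module has one busy core, then start on the sibling cores.
# Assumes cores 2m and 2m+1 belong to module m (hypothetical layout).

def bd_friendly_order(num_modules: int) -> list:
    primaries = [2 * m for m in range(num_modules)]      # lone core per module
    siblings = [2 * m + 1 for m in range(num_modules)]   # resource-sharing twins
    return primaries + siblings

print(bd_friendly_order(4))  # [0, 2, 4, 6, 1, 3, 5, 7]
```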

Hope I made sense.
 
Seems like after waiting more than a year I can finally SLI my 680. I just hope the price doesn't go up too much. If this is done correctly, Intel will have some nice competition again, which could be good for Intel fans too, since maybe they'll lower their prices.
 


That can be done. Optimal thread scheduling is more than just possible: Windows 8 seems to be introducing it, and Linux has had it for a while. It's not as good for single- and lightly-threaded performance as outright disabling cores, especially because of thermal headroom (which matters more than totally eliminating resource sharing), but it is a good improvement over the stock core configuration in pretty much everything. It improves single- and lightly-threaded performance while not hurting highly threaded performance (probably even helping it a little in many cases).

The ultimate Turbo functionality would be letting the CPU be able to and know when to switch between even this improved threading and the full disabling state, maybe also intermediate levels between them. The CPU could then Turbo lightly threaded work by disabling cores that aren't necessary and unlike normal Turbo, it would even reduce power consumption. The modular architecture has a lot of possibilities.
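That switching logic can be sketched as a tiny policy function. The threshold and mode names below are pure invention for illustration, not anything AMD announced:

```python
# Hypothetical "ultimate Turbo" policy: choose a core configuration from
# the number of hot runnable threads. With 4 modules, lightly threaded
# work gets the one-core-per-module mode (max clocks, no resource
# sharing); anything wider gets all cores for throughput.

def pick_mode(hot_threads: int, modules: int = 4) -> str:
    if hot_threads <= modules:
        return "one-core-per-module"  # favor per-thread speed
    return "all-cores"                # favor throughput

print(pick_mode(2), pick_mode(8))  # one-core-per-module all-cores
```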
 



It's easily done in Solaris and practically required for high performance in a NUMA setup on SPARC. I can configure thread homing, which tells the kernel scheduler to attempt to return a thread to its home processor/core unless there is absolutely no way because the target is too busy. I can also configure the duration of a specific core's time slice and give priority to the home threads.

In NT this isn't nearly as easy. The CPU affinity bit is the best MS has for controlling which CPU a thread gets assigned to. If you don't set it, you get the frog effect where threads are constantly being moved around. You can see this in real time if you want: open up Task Manager, go to Performance, and look at the CPU graphs. Start up a demanding single-threaded app (SuperPi comes to mind), turn it on full blast, and watch the graphs. They'll each keep spiking as the thread gets moved around, and instead of one core at 100% for the duration of the benchmark you get four cores averaging 25%. This has the very negative side effect of playing havoc with the L2/L3 caches: each time the thread is moved, its cache data gets invalidated, which in turn causes excessive cache look-ups to an already taxed shared back-end L2. CPU affinity fixes the thread onto a single core, which lets you power down other cores to further up-clock the core you're using. It's how "turbo boost" is supposed to work, provided the OS doesn't have a psycho scheduler.

The whole argument about disabling cores is really about tailoring the CPU to your needs. I actually don't recommend disabling every odd core if you plan on running anything that requires more than four cores. If single-thread performance is important, you can instead disable just the very last core (7) and use the affinity trick to pin important single-threaded tasks to core 6, which then doesn't share resources like its siblings do.
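On Linux the same pinning can be done from userspace with the standard library. A minimal sketch; core number 6 follows the example above and is an assumption about an 8-core part:

```python
# Sketch: pin the current process to core 6 so a hot single thread stays
# on a core whose module sibling (core 7) has been disabled. Uses the
# Linux-only os.sched_setaffinity; other platforms fall through safely.

import os

def pin_to_core(core: int) -> bool:
    """Best-effort pin of the current process to one core."""
    if not hasattr(os, "sched_setaffinity"):
        return False                      # not supported on this platform
    try:
        os.sched_setaffinity(0, {core})   # 0 means "this process"
    except OSError:
        return False                      # core may not exist on this machine
    return core in os.sched_getaffinity(0)

print("pinned" if pin_to_core(6) else "affinity not applied")
```

On Windows the equivalent is the affinity mask mentioned above, settable from Task Manager or with `start /affinity`.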

I've been itching to get my hands on a BD, but I convinced myself to wait for PD before I start tinkering; I don't like first-gen products as a rule of thumb.
 
Go AMD, give Intel some competition. It is really bad that even the highest-end AMD processor right now gets beaten by the SB i5s in mainstream applications (correct me if I am wrong).
 
AMD doesn't seem to be able to get past 4.2 GHz even with the Piledriver cores. That's a pity, especially since they originally intended the FX lineup to run at high clocks. Also, I don't understand how it's a 'plus' that the new FX chips support all that fancy AMD stuff such as Perfect Picture HD and AMD Steady Video 2.0. Does this mean I have to get an FX to get those features? Does it work with either an AMD or Nvidia graphics card as long as I have an FX? If I get an Intel CPU, will I not be able to get them? Isn't that artificially limiting the capabilities of the system so you get an AMD CPU and GPU? Aren't those nifty features a function of the GPU, not of the CPU?
 
[citation][nom]idroid[/nom]Come on AMD!!! This is the last chance you have to beat Intel and stop it from forming a monopoly![/citation]

Wake up! The option to choose between AMD, Intel, or Nvidia does not constitute a free market. They all come from the same source.

As if what I knew about CPUs wasn't enough, I became convinced after Nvidia released the GTX 680 and tied the shader speed to the GPU core clock. Then I LOL'd and cried some more, 😛 , when AMD came out with the GHz Edition and its performance boost.

If there was real competition, high tech would have a lot more variety and be far more advanced by now.

pEACe
 