AMD CPU speculation... and expert conjecture


noob2222

Distinguished
Nov 19, 2007
2,722
0
20,860

Actually, AMD's FirePro W9000 is priced higher.

http://www.tomshardware.com/reviews/firepro-w8000-w9000-benchmark,3265-7.html
http://www.xbitlabs.com/articles/graphics/display/nvidia-quadro-k5000_4.html#sect1

Can't really tell from the two reviews since the numbers don't match up on the Quadro 5000, but you can guesstimate that the K5000 and W9000 are close in some benches.
There are some places where the Quadro is better, but I haven't seen a decent review directly comparing the K5000 to the W9000, other than a few that only look at SPEC Maya, where the W9000 already loses to the Quadro 4000.

As for power, looking at Tom's and Xbit, both the Quadro K5000 and FirePro W9000 pull a little over 250 W, with the W8000 being the efficient card, staying under 200 W max.

Biggest problem I see with the W9000 is its own competition, the W8000: $3,400 vs. $1,500 at Newegg, with very little performance advantage.
 

amdfangirl

Expert
Ambassador


You'll have to excuse my lack of attention to the physics-accelerated scene :p.

But just browsing Wikipedia, the number of games that support it is quite small compared to PhysX and Havok.
 


Well, since the front end is universal from FX to Temash, there will be across-the-board improvements through all SKUs.
 

Havok is the old man here, and should have more.
Nvidia's been pushing theirs hard.
AMD, I'm not sure of the priority, but since those titles are pre-RR, it makes sense.
If AMD pushes it, per Mr. RR, you will see more usage.
 
Well, the first (Crysis 3) offloaded work from the GPU to the CPU, so yeah, it favors AMD (higher core usage) at the cost of overall performance (the work would be done faster on the GPU).

Not true. By offloading that processing from the GPU to the underutilized CPU, it opens up more capability on the GPU to process graphics data. This results in better overall performance, as those CPU resources wouldn't be doing anything anyway. Of course, this only creates a problem if people were pushing "games don't use more than two cores anyway" as their design strategy and purposely restricted their CPU capability.

Remember, those extra CPU cores that are under load would normally be sitting there doing absolutely nothing; pushing work onto them can in no way reduce overall performance. GPU resources are not unlimited and are heavily utilized during rendering, so relieving the GPU of some of those tasks by handing them to an underutilized CPU is not a bad thing.

We're going to see more and more of this as time goes on. Programmers will find ways to utilize additional resources; to not do so would risk becoming obsolete. I can see, in the not too distant future, programs (really games) that profile the currently available resources and then dynamically decide which distribution of code to use for maximum results.
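
Something along those lines isn't hard to sketch. As a rough illustration only (the backend names and the 8-thread heuristic are made up here, not anything a shipping engine is known to use), a game could check the CPU thread count and the installed GPU at startup and decide where to put the side work:

```
#include <cstdio>
#include <thread>
#include <cuda_runtime.h>

enum class Backend { CpuOnly, GpuOnly, Split };

// Hypothetical startup probe: look at what the machine actually has, then
// decide where the "extra" work (physics, particles, AI batches) should run.
Backend pick_backend() {
    unsigned cpu_threads = std::thread::hardware_concurrency();

    int gpu_count = 0;
    if (cudaGetDeviceCount(&gpu_count) != cudaSuccess || gpu_count == 0)
        return Backend::CpuOnly;                  // no usable GPU at all

    cudaDeviceProp prop{};
    cudaGetDeviceProperties(&prop, 0);
    std::printf("CPU threads: %u, GPU: %s (%d SMs)\n",
                cpu_threads, prop.name, prop.multiProcessorCount);

    // Made-up heuristic: with plenty of idle cores, push side work onto the
    // CPU and keep the GPU free for rendering; otherwise leave it on the GPU.
    return (cpu_threads >= 8) ? Backend::Split : Backend::GpuOnly;
}

int main() {
    std::printf("chosen backend: %d\n", static_cast<int>(pick_backend()));
    return 0;
}
```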
 


The 7790 reminds me of the 4770 and the 4850. That's a good thing :p

Cheers!
 
CUDA and PhysX will lose product support because they are mutually exclusive and create a closed market. DC and OpenCL, etc., encourage more software support; Nvidia have a bit of a problem.

Just like OGL beat out D3D, right?

At the end of the day, developers will use whatever is easiest to develop for. And guess what? CUDA/PhysX is easier to develop for than OpenCL/DirectCompute.
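
For what it's worth, a complete CUDA program is genuinely short; the sketch below (a generic SAXPY, purely for illustration and nothing to do with PhysX itself) builds as a single file, while the OpenCL equivalent needs explicit platform, device, context, queue, and program setup in host code before a kernel can even be enqueued:

```
#include <cstdio>
#include <cuda_runtime.h>

// Kernel and host code live in one file and are built together by nvcc;
// the kernel launch is the single <<< >>> line below.
__global__ void saxpy(int n, float a, const float *x, float *y) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) y[i] = a * x[i] + y[i];
}

int main() {
    const int n = 1 << 20;
    float *x, *y;
    cudaMallocManaged(&x, n * sizeof(float));   // unified memory: no manual copies
    cudaMallocManaged(&y, n * sizeof(float));
    for (int i = 0; i < n; ++i) { x[i] = 1.0f; y[i] = 2.0f; }

    saxpy<<<(n + 255) / 256, 256>>>(n, 2.0f, x, y);
    cudaDeviceSynchronize();

    std::printf("y[0] = %f (expect 4.0)\n", y[0]);
    cudaFree(x);
    cudaFree(y);
    return 0;
}
```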

Remember, those extra CPU cores that are under load would normally be sitting there doing absolutely nothing; pushing work onto them can in no way reduce overall performance. GPU resources are not unlimited and are heavily utilized during rendering, so relieving the GPU of some of those tasks by handing them to an underutilized CPU is not a bad thing.

So basically, you are advocating using the CPU to perform tasks that are computed FASTER when using the GPU?

Sure, OK, you made the CPU numbers look pretty. Yippee. Now someone decides to run a video encoding app while playing Crysis.

...crap, you just killed performance for both tasks because the CPU is doing work that is done faster on the GPU.

What you are REALLY doing is moving the bottleneck to a different piece of equipment. Nothing more. If a task is done fastest on a GPU, let a GPU do it. If a task is done fastest on the CPU, let the CPU do it. Trying to offload processing to a processor that does the task in question SLOWER for load balancing reasons is never a good idea.

Look at it this way: which resource is showing higher performance gains on a year-to-year basis, the GPU or the CPU? That's the one the processing should be put on, because that resource will have more margin going forward.
 

jdwii

Splendid


I think you're being a bit silly on this, gamer. In Crysis 3 your GPU is easily the biggest bottleneck, and the Intel quad cores and AMD 8-cores are not bottlenecking. With my 6950 on medium-high, my 1100T is at like 40% and no core is maxed out.

This was a smart decision, and I hope they do it more often on GPU-heavy titles. Also, if someone does video encoding while playing Crysis 3, they deserve to have their performance go down.
 


Funny how the multitasking crowd changes their tune. Just saying...
 

How hard would it be to implement dynamic workload shifting between the CPU and GPU, e.g. shifting of floating-point-intensive loads? I think future consoles will implement something like this using HSA capabilities.
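
A crude version of that shifting isn't hard to sketch for embarrassingly parallel floating-point work: time one batch on each side, then route later batches to whichever finished faster. This is only an illustration (not how HSA or any console runtime actually schedules work), with a trivial scale kernel standing in for the real load:

```
#include <chrono>
#include <cstdio>
#include <vector>
#include <cuda_runtime.h>

// Stand-in floating-point workload: scale every element by k.
__global__ void scale_gpu(float *data, int n, float k) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] *= k;
}

void scale_cpu(float *data, int n, float k) {
    for (int i = 0; i < n; ++i) data[i] *= k;
}

int main() {
    const int n = 1 << 22;
    std::vector<float> host(n, 1.0f);

    float *dev = nullptr;
    cudaMalloc(&dev, n * sizeof(float));
    cudaMemcpy(dev, host.data(), n * sizeof(float), cudaMemcpyHostToDevice);

    using clk = std::chrono::steady_clock;

    auto t0 = clk::now();
    scale_cpu(host.data(), n, 1.001f);
    double cpu_ms = std::chrono::duration<double, std::milli>(clk::now() - t0).count();

    auto t1 = clk::now();
    scale_gpu<<<(n + 255) / 256, 256>>>(dev, n, 1.001f);
    cudaDeviceSynchronize();
    double gpu_ms = std::chrono::duration<double, std::milli>(clk::now() - t1).count();

    // Dumb routing decision: whichever side won the timing race gets the rest.
    std::printf("CPU %.2f ms, GPU %.2f ms -> later batches go to the %s\n",
                cpu_ms, gpu_ms, gpu_ms < cpu_ms ? "GPU" : "CPU");

    cudaFree(dev);
    return 0;
}
```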

Future consoles will be multitasking (not the same program using multiple cores) more than their predecessors.

Crysis using the FX's cores doesn't mean much in the long run. The FX-8350's min. fps is still barely higher than a Core i3's (according to Tom's testing). Imagine using two 7970s, which have decent CPU overhead, and playing Crysis on top of that. Moreover, all game devs would have to put as much effort into PC game development as Crytek did to push multicore gaming further. By the time that actually happens, Steamroller will be out (see, this is why AMD users have to [strike]pay moar moniez to AMD[/strike] keep upgrading) - which has a better shot at properly beating an Ivy Bridge Core i3. Even then it has to contend with the Haswell Core i5, however crippled those may end up being. PD sure couldn't beat the Ivy i5, never even came close. :D

 
It is kind of a false argument when the HW hasn't changed, only its use.
That must mean it's being done in SW, no?
HW has been able to MT for quite a while now; we've been waiting on SW, and it's here in one form. It shows improvement, and whether or not it's what some thought it'd be doesn't matter: it's here, it works, and it will only get better.
 

Cazalan

Distinguished
Sep 4, 2011
2,672
0
20,810


A well-coded engine should be capable of using the CPU or the GPU. With how much reuse engines get, it is in their best interest to make it somewhat dynamic. Give us a reason to buy the hexa/octa/penta/dodeca core CPUs, or even dual-CPU workstations, again (a rough sketch of what I mean follows below).

Give us a reason to buy higher end video cards and multiple video cards other than just higher FPS or supporting dual/quad displays.

The PC market is stagnant because there isn't THAT much of a difference between an $800 rig and a $1600 rig.
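
On the "reason to buy more cores" point: a lot of it comes down to writing the work so it splits across however many hardware threads are actually present. A toy sketch only (nothing engine-specific, just std::thread over a big array):

```
#include <algorithm>
#include <cstdio>
#include <thread>
#include <vector>

int main() {
    const size_t n = 1 << 24;
    std::vector<float> data(n, 1.0f);

    // Use every hardware thread the machine reports, so a hexa/octa core
    // actually gets put to work instead of idling.
    unsigned workers = std::max(1u, std::thread::hardware_concurrency());
    std::vector<std::thread> pool;

    for (unsigned w = 0; w < workers; ++w) {
        size_t begin = n * static_cast<size_t>(w) / workers;
        size_t end   = n * static_cast<size_t>(w + 1) / workers;
        pool.emplace_back([&data, begin, end] {
            for (size_t i = begin; i < end; ++i)
                data[i] = data[i] * 1.5f + 0.5f;   // stand-in for per-entity work
        });
    }
    for (auto &t : pool) t.join();

    std::printf("updated %zu items on %u threads\n", n, workers);
    return 0;
}
```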
 

griptwister

Distinguished
Oct 7, 2012
1,437
0
19,460
Just for you Intel guys... I can run SC2 on ultra settings at 1080p at about 50 FPS on my Phenom II X4 840, GTS 450 512 MB, and 1333 MHz RAM rig. Lol! This leads me to believe that the first StarCraft is just a poorly optimized game; the graphics quality is far better in the second game, and my low-end rig can run it at 50 FPS. For those using StarCraft as their benchmark, it proves nothing about CPUs except that the optimization in that game is terrible.
 

amdfangirl

Expert
Ambassador


Too bad we don't get a die shrink like we did with the 4850 -> 4770






What resolution?
 

anxiousinfusion

Distinguished
Jul 1, 2011
1,035
0
19,360


Yay! My avatar is finally updated!

A family member of mine has StarCraft II running at full settings/1080p just fine on an FX-4100, 1600 MHz RAM, and a Radeon HD 6850. Despite all of the people claiming SC2 would be terrible on such a setup, I was pleasantly surprised. But it probably wouldn't hurt to move up to at least an FX-4350 at this point.
 

griptwister

Distinguished
Oct 7, 2012
1,437
0
19,460
@AMDfangirl, Correct me if I am wrong, but 1080p is 1920x1080.

I did some more gaming... ugh... I mean testing, and I noticed that once I hit the intensive areas, I get about 37-45 FPS, and it's pretty consistent at about 40 FPS. This is SC2 I am talking about. I would like to test SC, but I don't feel like paying for a game I'm not going to be playing.

Might I also add, SC2 looks pretty gorgeous on ultra!

*edit* @anxiousinfusion, I'm waiting for the benchmarks on the new Z87s before I upgrade :D Lol, as I said earlier, it's about game optimization. SC2 seems to be pretty well optimized. I don't think our problem here is core power so much as it is game optimization.

I'm also considering buying an HD 7790 just for kicks n' giggles to hold me over till the rumored GDDR6 cards in 2014, I believe.
 