Guest
PhysX was disabled, so these benchmarks will not accurately represent the overall real-world experience.
Though the reviewer stated they just wanted to benchmark pure graphics performance, it would have been very useful to also run with PhysX enabled to see what real-world performance would look like.
I'm not sure if it was mentioned in the review (I may have missed it), but one of the big changes was a lower penalty for GPU-to-GPGPU context switches.
Personally, I'm thrilled with it. I have zero intention of buying one for gaming. However, I've got a Tesla C2070 on pre-order, and Fermi is everything we hoped it'd be. I've got a researcher who is just starting to port his MATLAB code to Jacket on the C1060, and I can't wait to see what the performance delta will look like. Our hope is that he'll be able to do real-time analysis on a dozen data streams on a single CUDA workstation, work we had previously planned to do on a compute cluster.
This is where NVIDIA is going. It's still a small market compared to desktop video boards, but consider this: I've got 200 cluster nodes coming in for one researcher who is interested in GPGPU coding. If he goes CUDA (or OpenCL), that's over $2,000 per card. We're talking $400,000 in sales, the vast majority of which is profit, for one researcher. Urbana has thousands of GPGPU compute nodes.
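The back-of-envelope math above works out as follows; this is a minimal sketch assuming one card per node and the quoted per-card floor of $2,000 (both assumptions from the comment, not from any vendor price list):

```python
# Back-of-envelope check of the cluster GPU sales figure above.
# Assumed: 200 nodes, one GPGPU card per node, ~$2,000+ per card.
nodes = 200
price_per_card = 2000  # assumed minimum price in USD

total = nodes * price_per_card
print(f"${total:,}")  # → $400,000
```

At the quoted $2,000-plus per card, the real total would be at least this figure.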
ATI has totally failed in this area with FireStream. It's not their cards or their pricing that's the problem so much as their implementation. All our people run Intel again: Core was a game changer, and Nehalem put the nail in the coffin. Everything we're buying is Intel Xeon.
FireStream is implemented as an extension to the AMD compiler, which generates binaries optimized for Opteron. FireStream has been buggy, and it runs like crap on Intel systems.
OpenCL still has the potential to reverse AMD's GPGPU fortunes but, for now, they're on the outside looking in.