Intel: GPUs Only 14x Faster Than CPUs

Screw Intel, come back when you have a GPU that actually CAN compete in the high-performance graphics market!
Better yet, come back with a CPU that performs as well as a GPU.
 
Man, why is Intel always player hatin'? You can't turn a hooker (in this case, CUDA) into a housewife (in this case, the possibly borked "benchmarks" that will not be specified) and then tell her she is doing a bad job.
 
Yup, in a couple of generations you will go from 6 cores to 16 cores max, while Nvidia will go from 512 cores to more than 2048 cores. LOL, good luck Intel, because GPUs are the future of the computer.
 
That's like comparing apples to oranges; there are so many different types of apps. Some will work fantastically on the GPU, some may work BETTER on the CPU. And another thing... ONLY 14x? Are GPUs 14x more expensive than the 960? What's that? They are CHEAPER?? I think even at 14x they still represent good value...

This just seems like Intel bitching after they failed to defeat GPUs with Larrabee... We already know all this stuff, Intel. We know coding for a CPU is easier, and we know a GPU is only suited to some tasks. Stop bitching.
 
[citation][nom]ddkshah[/nom]Yup, in a couple of generations you will go from 6 cores to 16 cores max, while Nvidia will go from 512 cores to more than 2048 cores. LOL, good luck Intel, because GPUs are the future of the computer.[/citation]

Nvidia hasn't even managed to get 512 cores working within the PCIe power spec yet. At the rate Nvidia is going, we'll all need a dedicated 1 kW PSU just to run their graphics cards. Intel's most power-hungry consumer processor still has a TDP of 130 watts; Nvidia's most power-hungry consumer graphics card has a TDP of what, 300 watts?

Honestly, there's no accurate way to perform benchmark comparisons between a GPU and a CPU. During any attempt, the CPU is running the OS, chipset drivers, graphics driver, etc., while the GPU is running what? The benchmark app. The only way to even get close to an accurate comparison would be to set processor affinity on the CPU to give the benchmark a single dedicated core and then do the same on the GPU, but that would skew Nvidia's highly inflated results.

Also, where are these "hundreds" of CUDA-based consumer apps that Nvidia keeps talking about? I haven't seen any yet, and only a handful of consumer apps contain any features that make even the smallest use of CUDA.
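
For what it's worth, pinning a process to one core the way I'm describing is easy enough. A rough sketch, assuming Linux and glibc's sched_setaffinity (the actual benchmark workload is just a placeholder here):

[code]
// Hypothetical sketch: pin the current process to a single core before
// running a CPU-side benchmark. Linux-only (sched_setaffinity is glibc/Linux).
#define _GNU_SOURCE
#include <sched.h>
#include <cstdio>

int main() {
    cpu_set_t set;
    CPU_ZERO(&set);
    CPU_SET(0, &set);  // restrict the process to core 0 only

    // pid 0 means "the calling process"
    if (sched_setaffinity(0, sizeof(set), &set) != 0) {
        std::perror("sched_setaffinity");
        return 1;
    }

    // ... run the benchmark workload here ...
    return 0;
}
[/code]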
 
[citation][nom]stingstang[/nom]GPUs don't do the same things as CPUs. That's why we have the two different pieces of hardware. GPUs handle only graphics, and some handle physics. CPUs do EVERYTHING else, including sorting out the information thrown at them by the GPU. It isn't fair to compare these two. That's like comparing LAN cards to sound cards.[/citation]
stingstang,
GPUs, especially Nvidia's, do more than graphics and some physics applications at this point. There are hundreds of scientific applications that see massive speedups vs. x86 CPUs. Manifold GIS has speedups of nearly 300x when processing digital elevation models (think x, y, z raster data, i.e. grids). It has always been the case that not all code can easily be parallelized, nor is it necessarily faster when it is. But with each passing year, parallel GPU-based computing sees massive improvements in performance, just like GPUs for gaming vs. Intel integrated graphics.
 
Intel says it's 14x, NVIDIA 100x.

I'm guessing the real numbers are somewhere in between the two, considering the natural bias of both companies.
 
Things that require lots of relatively simple processes run well on a GPU, while things that have complex processes run better on a CPU. At least that seems to make sense to me.
 
Meh, CUDA is a parallel system and our current processors are serial-based.
 
If CUDA were so wonderful, we'd be inundated with apps that make computing so much faster. Where are they all? It's been around for three years, and we have what? A broken video-encoding app and a tiny bit of support from Adobe, and that is about it.
Where are the CUDA versions of WinRAR, of Blu-ray encoding, of, well, anything useful?
 
I read this stuff and wonder if people have any real understanding about what they're posting.

You can't compare a general-purpose CPU with a GPU and then say that on optimized workloads the GPU is a lot faster. Duh!

A GPU is child's play to make compared to a CPU like the Nehalem, or even the AMD stuff. They are much less complex, and just a lot of the same thing over and over again. If you've got a workload that can use that, then it's going to be fast, but most workloads aren't so easily parallelized.

CPUs are much smarter and much more useful for most people. They schedule much better, they predict branches much better, and they run single threads a hell of a lot better because of these things.

If Intel didn't need to worry about single-threaded performance, they could save a ton of transistors and use them for more parallel loads. And all of you who discredit them would be whining because the computer was a lot slower as a result. Single-threaded performance is still the most important thing for most apps, and it becomes even more so as they add cores and give the chip more multi-threaded power. Some apps can use a really simple setup like a GPU well, but if they can't, you'd suffer badly with it, not only because the single-threaded performance is so low on a per-cycle basis, but because the clock speeds are terrible too.

They are both better at what they are made for. But, that should be obvious without even having to say it.
 
14X? If every new generation were 2X as fast as the previous one and took 2 years to come out... that's like 8 years just to catch up!

2 years = 2X, 4 years = 4X, 6 years = 8X, 8 years = 16X
 
GPUs only do parallel processing. CPUs have far more complex instruction sets, capable of handling many types of code. The reason CPUs can't run as quickly as GPUs is that they're much more complex.

It's like expecting a multi-purpose Swiss Army knife with a saw blade to cut wood as quickly as a chainsaw. It might be 14x slower, but it can also do a lot of things a chainsaw can't do. I find the toothpick tool especially handy.
 
Nvidia: We charge $500 for a GPU capable of outperforming a CPU by 100X in scientific applications, and the user can connect 3 or 4 GPUs in a desktop computer.
Intel: We charge $1,000 for a CPU that is ONLY 14X slower than a GPU, and you pay extra for a 2-socket motherboard and a Windows license for each CPU.
Apple: We charge $2,000 for the same CPU. You can connect only one GPU to it, without CUDA, and it works slower than under Windows. But it's a Ferrari!!!! For scientific applications! Proof: look at the photos in our magazines!!!
 
[citation][nom]captainnemojr[/nom]14X? If every new generation were 2X as fast as the previous one and took 2 years to come out... that's like 8 years just to catch up! 2 years = 2X, 4 years = 4X, 6 years = 8X, 8 years = 16X[/citation]
Except that, historically, GPUs have gotten 2X faster each year while CPUs have gotten only 1.5X faster each year. So over 8 years GPUs pull ahead of CPUs by (2/1.5)^8 ≈ 10X, while CPUs only gain 1.5^8 ≈ 25.6X.

In other words, in 8 years GPUs speed up by 2^8 = 256X.

(In reality, those were historical trends, and power restrictions have reduced the rates of speedup, but GPUs are still growing exponentially relative to CPUs.)
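
A quick sanity check of that compound-growth math (the 2X-per-year and 1.5X-per-year rates are just the assumed historical trends from above, not measured data):

[code]
#include <cstdio>
#include <cmath>

int main() {
    const double gpu_rate = 2.0;   // assumed GPU speedup per year
    const double cpu_rate = 1.5;   // assumed CPU speedup per year
    const int years = 8;

    double gpu_gain = std::pow(gpu_rate, years);              // 2^8   = 256x
    double cpu_gain = std::pow(cpu_rate, years);              // 1.5^8 ~ 25.6x
    double relative = std::pow(gpu_rate / cpu_rate, years);   // ~10x

    std::printf("After %d years: GPU %.0fx, CPU %.1fx, GPU ahead by %.1fx\n",
                years, gpu_gain, cpu_gain, relative);
    return 0;
}
[/code]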
 
"the average CPU only has six cores"

*sighs* Rich, privileged people... surrounded by the latest tech like a fish surrounded by water. Last time I checked, the only commonly available hexa-core processors were AMD's $300 45 nm 1090T and Intel's $999 32 nm i7-980X.
I have yet to see a system with either of those in real life, especially since another article on Tom's reported an alleged supply shortage of 1090Ts.
 
Wow...

Too much fail in this comment section... where to begin?

Nvidia's GPUs are not as complex as Intel's CPUs (or AMD's CPUs, for that matter). How to explain... hmmm... GPUs are arranged in groups of SIMD units, and each group can issue an instruction to the SIMD units within that group. These groups are what Nvidia calls SMs (Streaming Multiprocessors). I believe there are 16 SMs in Fermi.

If one were to compare Fermi to a CPU, one would have to conclude that Fermi is only a 16-core CPU. Each SM supports an instruction stream and is apparently dual-issue (though Nvidia won't disclose the depth of the queue).

What you have here is that highly threaded applications that are simple in nature will do better on a CUDA-architecture GPU than on a CPU (in FP32).

For FP64, the roles kind of reverse with CPUs being far more efficient for the task in terms of performance per watt (though Fermi has made some advances in that department).

All in all, you cannot compare the two: even for CUDA, the CPU acts as the command processor that issues commands and sends work to the GPU, and as such, CUDA GPUs cannot operate without a CPU present.
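
To make that last point concrete, here's a bare-bones sketch of the pattern I'm describing: the host (CPU) allocates memory and issues the kernel launch, and the GPU only runs the simple, highly threaded part. The kernel name and sizes here are made up purely for illustration:

[code]
#include <cstdio>
#include <cuda_runtime.h>

// Trivial data-parallel kernel: each GPU thread handles one element.
__global__ void scale(float *data, float factor, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] *= factor;
}

int main() {
    const int n = 1 << 20;
    float *d_data = nullptr;

    // The CPU acts as the command processor: it sets up memory and
    // dispatches the work; the GPU never runs on its own.
    cudaMalloc(&d_data, n * sizeof(float));
    cudaMemset(d_data, 0, n * sizeof(float));  // placeholder data

    // 256-thread blocks; the hardware schedules the blocks across the SMs.
    int threads = 256;
    int blocks = (n + threads - 1) / threads;
    scale<<<blocks, threads>>>(d_data, 2.0f, n);

    cudaDeviceSynchronize();  // the host waits for the GPU to finish
    cudaFree(d_data);
    std::printf("Launched %d blocks of %d threads\n", blocks, threads);
    return 0;
}
[/code]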
 