Nvidia VS Intel: CUDA, Larrabee and some Whoop Ass

dattimr

Distinguished
Apr 5, 2008
665
0
18,980
Well, it looks pretty obvious that the real clash is yet to come - Larrabee is indeed just a PowerPoint slide up to now. But, with all this recent confrontation between the Blue and Green giants, I doubt that the Green ones will get a QPI licence to manufacture chipsets for Nehalem. :sweat:

And their market share isn't very promising. :sarcastic:

Yet, with all this multi-core and many-core marketing, what if you could encode your video 10-20x faster with your Nvidia-based card than with your quad-core? :ouch:

One software application that was shown off during the Analyst Day was RapiHD – a video encoder designed to run on the GPU using CUDA. Sam Blackman, CEO of Elemental Technologies, the company behind RapiHD, showed a GeForce 8800M GTS running an h.264 video encode completing around 10 times faster than a quad-core Intel CPU. The software will be available in August or September and will be price competitive with current video encoders – expect it to be around $50 US for the standard version.

Nvidia Analyst Day: Biting Back at Intel

http://www.bit-tech.net/hardware/2008/04/14/nvidia_analyst_day_-_biting_back_at_intel/4
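For a picture of why encoding maps so well onto a GPU, here's a toy CUDA sketch of my own (NOT RapiHD code - that isn't public, and all names here are made up) computing the sum of absolute differences used in h.264 motion estimation, one thread block per 16x16 macroblock. A 720p frame has 3,600 macroblocks, all independent, which is exactly the kind of workload that buries a quad-core:

    // Toy sketch (NOT RapiHD code): SAD for one motion-vector candidate per
    // 16x16 macroblock -- the brute-force part of h.264 motion estimation.
    // Every macroblock is independent, so thousands run at once on the GPU.
    #include <cstdio>
    #include <cuda_runtime.h>

    __global__ void sadKernel(const unsigned char *cur, const unsigned char *ref,
                              int width, int *sadOut)
    {
        // One 16-thread block per macroblock, one thread per pixel row.
        int mbX = blockIdx.x * 16;
        int mbY = blockIdx.y * 16;
        int row = threadIdx.x;                        // 0..15

        int sad = 0;
        for (int col = 0; col < 16; ++col) {
            int idx = (mbY + row) * width + (mbX + col);
            int d   = (int)cur[idx] - (int)ref[idx];
            sad += (d < 0) ? -d : d;
        }
        // Fold the 16 per-row sums into one value per macroblock.
        atomicAdd(&sadOut[blockIdx.y * gridDim.x + blockIdx.x], sad);
    }

    int main()
    {
        const int W = 1280, H = 720;                  // 720p luma plane
        const int nMB = (W / 16) * (H / 16);          // 3,600 macroblocks
        unsigned char *dCur, *dRef; int *dSad;
        cudaMalloc(&dCur, W * H);
        cudaMalloc(&dRef, W * H);
        cudaMalloc(&dSad, nMB * sizeof(int));
        cudaMemset(dCur, 128, W * H);                 // stand-in frame data; a real
        cudaMemset(dRef, 120, W * H);                 // encoder would cudaMemcpy frames in
        cudaMemset(dSad, 0, nMB * sizeof(int));

        dim3 grid(W / 16, H / 16);                    // one block per macroblock
        sadKernel<<<grid, 16>>>(dCur, dRef, W, dSad);

        int first;
        cudaMemcpy(&first, dSad, sizeof(int), cudaMemcpyDeviceToHost);
        printf("SAD of first macroblock: %d (expect 16*16*8 = 2048)\n", first);
        return 0;
    }

A real encoder repeats that for dozens of candidate motion vectors per macroblock, which is why the GPU's thousands of in-flight threads pay off so hugely.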

Oh, and there's still the PhysX thing within GeForce's heart: :love:

In order to demonstrate the physics horsepower of the GeForce 8800/9800 series, Hegde took aim at Intel's eight-core Nehalem particle demo, which can be seen in one of our IDF articles. While Intel's Nehalem demo had 50,000-60,000 particles and ran at 15-20 fps (without a GPU), the particle demo on a GeForce 9800 card resulted in 300 fps. In the very likely event that Nvidia's next-gen parts (G100: GT100/200) will double their shader units, this number could top 600 fps, meaning that Nehalem at 2.53 GHz is lagging 20-40x behind 2006/2007/2008 high-end GPU hardware. However, you can't ignore the fact that Nehalem in fact can run physics.

Tom's Hardware :whistle:
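The reason the particle counts scale like that is that each particle's update is completely independent. A toy CUDA kernel like this (my own sketch - nothing to do with either company's demo code) keeps every shader unit busy with zero synchronization between particles:

    // Toy sketch of why particles love GPUs (not Nvidia's or Intel's demo code):
    // every particle updates on its own, so 60,000 of them map straight onto
    // thousands of shader units with no inter-thread synchronization at all.
    #include <cstdio>
    #include <cuda_runtime.h>

    struct Particle { float x, y, z, vx, vy, vz; };

    __global__ void stepParticles(Particle *p, int n, float dt)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i >= n) return;
        p[i].vy -= 9.81f * dt;          // gravity
        p[i].x  += p[i].vx * dt;        // Euler integration
        p[i].y  += p[i].vy * dt;
        p[i].z  += p[i].vz * dt;
        if (p[i].y < 0.0f) {            // bounce off the floor, lose some energy
            p[i].y  = 0.0f;
            p[i].vy = -0.6f * p[i].vy;
        }
    }

    int main()
    {
        const int n = 60000;            // same ballpark as the Nehalem demo
        Particle *d;
        cudaMalloc(&d, n * sizeof(Particle));
        cudaMemset(d, 0, n * sizeof(Particle));     // all start at rest at the origin

        for (int frame = 0; frame < 300; ++frame)   // one second at 300 fps
            stepParticles<<<(n + 255) / 256, 256>>>(d, n, 1.0f / 300.0f);
        cudaDeviceSynchronize();
        printf("simulated 300 frames of %d particles\n", n);
        cudaFree(d);
        return 0;
    }

No atomics, no shared state, just 60,000 independent little math problems per frame - the exact opposite of the branchy, serial work CPUs are built for.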

Maybe next time I'll feel like buying a GeForce GTX instead of a dozen-core CPU. What about you? :hello:
 
This is either a turn away from the norm, or a possible huge failure for Intel. I know that Intel is currently weak on decent drivers, even for their lowly IGPs. Once they reach the discrete market, those drivers become much more demanding. In order for Intel to fully enter the graphics market, they're going to have to really spend some money doing so, with no returns for a while. I'm thinking they want to divert the direction of graphics as we know it (rasterization) to their more CPU-friendly ray tracing. For them it would be prudent, with less of a commitment in dollars.

This is going to be an interesting situation, and don't forget AMD/ATI. If there is ever going to be a change in how we do 3D, then it should be done on one standard, not completely influenced by one company alone. We will see what the future holds.
 

DarthPiggie

Distinguished
Apr 13, 2008
647
0
18,980
An NV VP said that Intel is scared because we don't need CPUs to be any faster than they already are. It sounds weird, and it doesn't quite make sense to me. He said that a faster GPU will yield better performance or something. Anyway, what I think he means is that there will be a point when we won't need to upgrade our CPUs anymore.
 
Meh, still a long way from replacing a CPU - mainly very niche applications, whereas Nehalem et al. can do anything x86 and x64: run any version of Windows, or OS/2, or Linux. Get all of that on the GPU, fine; until then it's just a really nice co-processor that still needs an x86/x64 core, and we all know who owns that license.

Anywhoo, it's about dang time they started using the processing power of GPUs to do video transcoding. The X1K promised it; it looks like CUDA and RapiHD may have finally delivered it.
 
I like the idea of spare GeForce video cards running physics :D

We gotta remember Nehalem was just CPU cores; the platform was supposed to allow other task-specific cores (e.g. Larrabee) to be integrated into the CPU - Intel may be on to something there, if they get it right.

Picture a core that gets shrunk regularly (90nm, 65nm, etc.), flexible enough to add or remove cores, designed by the same engineers working on 3GHz+ processors, cooled by decent 100W+ coolers, with direct, low-latency cache and direct access to the CPU - Intel may continue to ride the perfect storm and make this a success.

If they manage to break even with the mid-range cards on the market (Nvidia's 8600, maybe even the 9600s, etc.), then they have it in the bag - game designers will end up designing games with it in mind.

All pure speculation, though - Intel's last video card (the i740) was OK but never made a massive impact like 3Dfx did with 3D or anything.

Just think, Intel may bring their own SLI/Crossfire enthusiast platform - dual sockets with dual/tri-channel memory, dual Larrabee chips consisting of multiple cores each. Anything could happen if they get it right...

AMD at least has an advantage here - they have a fully operational video card wing of the company to help them.

Money vs Know How?

Gotta remember, Intel is still the biggest supplier of video chipsets - integrated they may be, but they can do it. :lol:

Hope they don't call it Intel Extreme Graphics X3100 or some lame recycled name like they always use...
 
Too bad they can't do DX10 on their IGPs. Or really run games at all on their IGPs, for that matter. I'm wondering if this can go the other way? Say, using a huge, fast GPU for the most part, and putting in a li'l CPU that's fast enough for the easy stuff like the OS and such. Provided they make software that lets other floating-point-heavy programs run directly off the GPU, there would be little need for a CPU. The CPU could then be used just for feeding info to the GPU, kinda like being slaved to it. If they can do it for h.264, what about other programs? Surely it can be done, and it makes sense too. Since CPUs are slow compared to GPUs, just let them direct and assemble work for the GPU for the fastest throughput.
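Something like that already half-exists in CUDA's programming model. Here's a toy sketch of my own (hypothetical kernel name, nothing beyond the standard CUDA calls) of the "CPU slaved to the GPU" idea - the host never touches the math, it just queues passes into a stream and goes back to directing:

    // Toy sketch of the "CPU slaved to the GPU" idea: the host thread never
    // does FP work itself, it only enqueues kernels and waits for results.
    #include <cstdio>
    #include <cuda_runtime.h>

    __global__ void fmaPass(float *data, int n, float scale, float offset)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) data[i] = data[i] * scale + offset;   // all FP stays on the GPU
    }

    int main()
    {
        const int n = 1 << 20;
        float *d;
        cudaMalloc(&d, n * sizeof(float));
        cudaMemset(d, 0, n * sizeof(float));

        cudaStream_t q;
        cudaStreamCreate(&q);

        // The "director" role: enqueue ten passes and return immediately.
        for (int pass = 0; pass < 10; ++pass)
            fmaPass<<<(n + 255) / 256, 256, 0, q>>>(d, n, 1.01f, 0.5f);

        // The host is free here for OS-ish housekeeping; it only blocks
        // when it actually needs the numbers back.
        cudaStreamSynchronize(q);

        float first;
        cudaMemcpy(&first, d, sizeof(float), cudaMemcpyDeviceToHost);
        printf("first element after 10 passes: %f\n", first);
        cudaStreamDestroy(q);
        cudaFree(d);
        return 0;
    }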
 

Andrius

Distinguished
Aug 9, 2004
1,354
0
19,280


Your "host CPU" would function much like a memory controller and operation decoding engine in today's CPU.
It figures out where(which part of the CPU) to put incomming data to get the job done.
Certain applications favor GPU - very specific workload processor (video encoding, 3D games, CAD).
You could say it's just a high performance parallel FPU outside the CPU. The GPU is quite useless in serial processing.

Rewriting software (OSes) to work with GPUs would be difficult. Most OS code is highly serial and not FPU friendly.

What needs to happen for future GPU over CPU dominance is a "Virtual Machine" extention/translation engine
that can use software written in a common language and use any hardware available to execute that software.
Good luck with that.
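To make the serial-vs-parallel point concrete, here's a toy pair of loops (my own example, made-up names): the first maps straight onto a GPU grid, while the second has a loop-carried dependency - step i needs step i-1 - so a thousand shader units buy you exactly nothing:

    // Toy illustration of the post above (my own example, made-up names).
    #include <cstdio>
    #include <cuda_runtime.h>

    // GPU-friendly: every element is independent -> one thread each.
    __global__ void squareAll(float *a, int n)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) a[i] = a[i] * a[i];
    }

    // GPU-hostile: each iteration depends on the previous one, like most
    // OS/control code. Only one "thread" can ever make progress at a time.
    float serialChain(const float *a, int n)
    {
        float state = 0.0f;
        for (int i = 0; i < n; ++i)
            state = state * 0.5f + a[i];   // can't start step i until i-1 is done
        return state;
    }

    int main()
    {
        const int n = 1024;
        float h[n];
        for (int i = 0; i < n; ++i) h[i] = 1.0f;

        float *d;
        cudaMalloc(&d, n * sizeof(float));
        cudaMemcpy(d, h, n * sizeof(float), cudaMemcpyHostToDevice);
        squareAll<<<(n + 255) / 256, 256>>>(d, n);   // parallel part on the GPU
        cudaMemcpy(h, d, n * sizeof(float), cudaMemcpyDeviceToHost);

        printf("parallel result[0] = %f, serial chain = %f\n",
               h[0], serialChain(h, n));
        cudaFree(d);
        return 0;
    }

That second loop is the shape of most OS and application logic, which is why the GPU stays a co-processor for now.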
 
What I'm hoping for is not a cavalier attitude from any of the players, but the best solutions from all of them - hopefully a common standard that leaves all parties involved with the ability to compete and contribute. To me this is the best for them, and for us as well. This is a situation for strange bedfellows, as each fanboy aligns to their best scenario. I know what I typed is highly unlikely, but we as consumers would truly benefit from greater competition. If we let company A dominate all the others, then we as consumers are left with much less than we have now. Apply "company A" to any of the current players as you see fit. I'm not a fanboy, more of an anti-monopolist.
 

dattimr

Distinguished
Apr 5, 2008
665
0
18,980
Agreed. :wahoo: I do not care if my CPU is from Intel or AMD, nor if my GPU comes from Nvidia or ATI; however, if both solutions start offering decent price/performance for my needs and expectations, I will look for other reasons to go with brand A or B. In that case, I would check the driver support and game compatibility, AND I would end up minding those "unfair tactics". I mean, I know that in the end everything means $ for them, even if some of them care a little more about the end-user experience and the reliability of their products - their main goal is still profit. But I won't give them two thumbs up and let them do whatever they want just because they're great and they're in the lead. :non:

But I also must say that I'm quite impressed by the potential of CUDA. :pt1cable: And since ATI doesn't have either PhysX or the almighty Borg Queen (= leadership plus evil)... :heink:

I guess I'll wait to see if the GT200 turns out to be a 55nm shrink (probably, since the 9800GT and its G92b will be a 55nm "test") and watch the probable battle between ATI's GDDR5 and Nvidia's 512-bit bus (if the FUD is true). :whistle:
 

dattimr

Distinguished
Apr 5, 2008
665
0
18,980
And now there is this new "Tegra" brand thing, registered by Nvidia. Its only description is "G & S: Integrated circuits". :sweat:

Any ideas? :bounce:
 

IndigoMoss

Distinguished
Nov 30, 2007
571
0
18,980



I don't think Nvidia is going to die off. This is like the whole PlayStation 3 thing. They thought they were the biggest, baddest guy in town, so they figured they could just crap all over everyone - and then the underdog comes out of nowhere and releases something that can and does compete with them. So now everyone is going against them, because there is no point in taking their crap when you can get something better, for cheaper. So they'll be selling at a loss, and third parties (partners in this case) will be going either multi-platform or switching sides altogether. I think Nvidia will come back from this, much like the PlayStation 3 is now. While it won't beat AMD this generation, it has a shot at impacting them next generation, especially if AMD goes the way Nvidia did.