Speculation: Can CPUs keep up with the new GPU?

We'll know soon whether CPUs are a bottleneck, I have a feeling. Crysis will be playable soon, and then we'll see if the CPU is becoming the bottleneck. I agree, marv, you're right, for now. I'm talking about the next architectures to come out, which may double the output of today's GPUs and should arrive about 18 months from now. It could become a problem if multithreading isn't done. Eighteen months from now, games should be more demanding, too. Anyway, that's my take.
 

area61

If the game developers code their games to take advantage of multiple cores, then you'll have your answer. Multicore is here to stay. Whether it's more speed or more brains will be in the hands of the developers.
 

radnor



This just came in, confirming what I was suspecting all along. Maybe a really big can of whoop-ass is going to be delivered, and I'm not talking about IGPs here. This shows Skulltrail being beaten by an 8600GT. That's a $4,000 vs. $100 comparison.

Maybe Nvidia is going for the server market here. I can see where this may apply: structural calculation (civil engineering), for one, and heavy rendering by CAD-type software. The possibilities are endless, because the truth is this: unless we are gaming, upgrading the GPU won't do much good. Of course, we gamers already know that our GPUs are much more powerful than our CPUs. This CUDA thing just takes the bar a few notches up. A freaking good few notches up.

After seeing this, I'm picturing a Barcelona with NVIDIA cards in SLI replacing several branches of the server/workstation market. Intel has already said it won't use SLI with Larrabee.

Maybe a good can of whoop-ass will be delivered. It will be fun to see. I'll be delighted.
 
If we stop and think about it, since day one the CPU has gotten all the attention from devs for desktop, apps, etc. on x86. The server market is always the beginning of things to come on the desktop. Give the GPU 25 years' worth of growth using CUDA-type software, and who knows how it'll all turn out? Even running at 20% capability, a GPU is still faster than CPUs at many things. I know there are limitations, but who really knows the true end of those limits? If we take what is known now, sure, the CPU looks great, but I think the GPU will have its day. Intel knows this, and so does AMD/ATI. Let Intel slam GPUs all they want while they invest their billions in them, heh. To me, who looks the fool? Because I'm not being fooled.
 

area61

The conventional CPU-plus-GPU setup will continue to exist. Larrabee was supposed to end this conventional approach, but it's not going to; maybe in the discrete segment, but not in the midrange and mainstream. And even in the discrete segment it will be easily owned by AMD. After all, AMD's 780G can handle DX10 in Vista with Aero as well. My guess is that by the time Larrabee rolls out, AMD's solution will have grown by leaps and bounds. Wise men say plant your seeds now and wait, and you will be rewarded.
 

radnor



Today I'm having a really slow day at work (and I mean freaking slow). I have enough time at the moment, so a few of my mates and I took a peek at CUDA and ATI Stream Computing.

As I said, it's been a reeaaallly slow day here at work, so we are having fun reading C++ code, dependencies, libraries, anything to keep the mind occupied. Check them out for yourselves if you have enough time:

ATI/AMD:
http://ati.amd.com/technology/streamcomputing/sdkdwnld.html

Nvidia:
http://www.nvidia.com/object/cuda_get.html

The Nvidia one seems a bit more structured and has less of a beta feel. The ATI one already has the basis to take off. From what I've read and tried so far, both are very promising: they hook straight into the kernel and send instructions directly to the GPU.
Honestly, I think either Intel pulls a full house or it loses loads of market share on this one.
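
For anyone who doesn't want to dig through the SDKs, the basic CUDA model is simple: you mark a C-like function as __global__ and launch it across thousands of GPU threads from ordinary host code. Here is a minimal sketch of what that looks like (the names and sizes are just illustrative, and it assumes the CUDA toolkit is installed):

// Minimal CUDA sketch (illustrative names): scale an array on the card.
#include <cstdio>
#include <cuda_runtime.h>

// Each GPU thread handles exactly one element of the array.
__global__ void scale(float *data, float factor, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] *= factor;
}

int main()
{
    const int n = 1 << 20;                            // ~1M floats
    float *d_data;
    cudaMalloc(&d_data, n * sizeof(float));           // memory on the GPU
    cudaMemset(d_data, 0, n * sizeof(float));
    scale<<<(n + 255) / 256, 256>>>(d_data, 2.0f, n); // launch ~1M threads
    cudaDeviceSynchronize();                          // wait for the card
    printf("kernel status: %s\n", cudaGetErrorString(cudaGetLastError()));
    cudaFree(d_data);
    return 0;
}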

I'm not a professional coder, I just know my bits and tricks, but this seems very powerful. If Adobe (for example) adopts CUDA or ATI Stream (or both), we will see big workstations with **** X2 or Core 2 CPUs and big GPUs doing the work.
We didn't test it here at work, but mate, I'll try to run a few tests when I arrive home. Too bad my X800XT isn't supported. Grrr. I'll try anyway. Bah, a prime number generator in C++ will do the trick. I'll match my X800XT against my 4800+ X2. Let's see who bites the dust.

I'll try to post the results later on. If anybody knows a bit of C++ (all the libraries point there), put an Intel Core 2 up against an Nvidia 8xxx. Use a prime number generator: it's easy to code, and it's basically raw number crunching once it starts hitting the big numbers.
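
For anyone who wants to try the same comparison, here is a rough sketch of the kind of benchmark I mean, assuming an Nvidia 8-series card and the CUDA toolkit (the limit and kernel names are just placeholders): it counts primes by trial division on the CPU, then does the same on the GPU with one thread per candidate, and times both passes.

// Rough CPU-vs-GPU prime-counting benchmark sketch (CUDA toolkit assumed).
#include <cstdio>
#include <ctime>
#include <cuda_runtime.h>

// Naive trial division; fine for a toy benchmark.
__host__ __device__ bool is_prime(unsigned int n)
{
    if (n < 2) return false;
    for (unsigned int d = 2; d * d <= n; ++d)
        if (n % d == 0) return false;
    return true;
}

// One GPU thread tests one candidate number.
// Note: atomicAdd on global memory needs compute capability 1.1 or newer.
__global__ void count_primes(unsigned int limit, unsigned int *count)
{
    unsigned int n = blockIdx.x * blockDim.x + threadIdx.x;
    if (n < limit && is_prime(n)) atomicAdd(count, 1u);
}

int main()
{
    const unsigned int limit = 2000000;   // placeholder problem size

    // CPU pass.
    clock_t t0 = clock();
    unsigned int cpu_count = 0;
    for (unsigned int n = 0; n < limit; ++n)
        if (is_prime(n)) ++cpu_count;
    double cpu_s = double(clock() - t0) / CLOCKS_PER_SEC;

    // GPU pass.
    unsigned int *d_count, gpu_count = 0;
    cudaMalloc(&d_count, sizeof(unsigned int));
    cudaMemset(d_count, 0, sizeof(unsigned int));
    t0 = clock();
    count_primes<<<(limit + 255) / 256, 256>>>(limit, d_count);
    cudaDeviceSynchronize();
    double gpu_s = double(clock() - t0) / CLOCKS_PER_SEC;
    cudaMemcpy(&gpu_count, d_count, sizeof(unsigned int), cudaMemcpyDeviceToHost);
    cudaFree(d_count);

    printf("CPU: %u primes in %.2fs | GPU: %u primes in %.2fs\n",
           cpu_count, cpu_s, gpu_count, gpu_s);
    return 0;
}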


Edit: typos everywhere and massive hype from an extremely bored worker.
 
Here's another interesting read, going contrary to many posts found here on the forums: http://www.guru3d.com/article/cpu-scaling-in-games-with-quad-core-processors/11 If your rig falls within the guidelines shown at that link, or in Tom's article, then the CPU isn't the better upgrade; it's the GPU that matters most for your gaming. The conclusion at Guru3D is the same as Tom's: once your CPU is running at 2.6 GHz, going faster can't help any further. That's with today's cards. The G280 is going to change that, and soon.
 

DXRick

The next big jump is for ray tracing to replace vertex and pixel shaders. It cannot be done on the GPU, but will require major advancements in CPU power.
 
If AMD can survive, then yes, this situation actually gives AMD more life, as they do have a huge lead with their graphics division. Also, even though Intel has had the money, AMD has actually been working on this longer, having ATI around, so that helps a lot.
 

radnor



It doesn't? The way Intel is trying to do it is by muscling their way through with a many-core architecture. We aren't talking about an optimized solution; we are talking about pure muscle (or rather loads of cores, which, by the way, are quite weak on their own).

Note: do you really think they can make a CPU with 80 cores flawlessly? Hell no. They will ship several versions (40, 50, 70, 80 cores) with disabled (read: damaged) cores. And even so, it will be hard to produce that many cores flawlessly. I think it's little short of a utopia.

When Netburst came out, they were talking about a 10 GHz CPU. Netburst proved to be a bad architecture.
Now they say Nehalem's architecture will scale to as many as 80 cores. I think history repeats itself.
First off, I have yet to see a CPU doing a decent job on the graphics side. The links I posted before (from ATI and Nvidia) show that it is easy to port most apps so that, instead of flooding the CPU, they use the GPU for those tasks.

I don't consider x86 and x64 dead yet, because honestly, there is already too much software coded for them. But I think CUDA and ATI Stream may lead to a breakthrough in software development, one that will happen first at a higher level (workstations/servers). They will adopt it first because of the sheer performance leap.

ATI/AMD will survive because it has a freaking platform (CPU/GPU/NB/SB).
NVIDIA will survive because you just need to slap in a cheap x86 or x64 CPU to drive its monster GPUs.

The question now is not about replacing vertex or pixel shaders for gaming, mate. After spending a good part of the afternoon reading the CUDA manual and reference papers, I believe a really big can of whoop-ass is already on its way.
It's headed to Intel, courtesy of the GPU makers. We are talking about relieving the CPU of some of the functions it is doing at the moment.

So CPU performance will be felt even less.
 

DXRick

I googled CUDA and read about it on Wikipedia. It is hard to see from that how it differs from HLSL or extends the graphics pipeline. To do physics, I believe that the GPU would have to be able to perform trig and calculus functions, which are not available today.

GPUs are optimized for linear algebra (vertex and matrix calculations). The application sends the shader program to the GPU and then streams the geometry of an object to it. The vertex shader operates on the vertices to convert them to homogeneous clip space. The pixel shader determines the color of each pixel.

In other words, the GPU programs operate on one vertex or pixel at a time. They do not have the entire geometry of one object, nor that of many objects, nor a bounding sphere, nor anything else that would enable them to perform inter-object calculations such as collision detection.
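
To make that "one vertex at a time" point concrete, here is a small sketch in CUDA-style C++ (since that is what the thread has been linking to; the struct and kernel names are made up for illustration) of roughly what a vertex transform boils down to: each thread multiplies a single position by a 4x4 world-view-projection matrix and never sees the rest of the mesh.

// Illustrative sketch: a "vertex shader" expressed as a data-parallel kernel.
// Each thread transforms exactly one vertex into homogeneous clip space and
// knows nothing about the rest of the object.
#include <cstdio>
#include <cuda_runtime.h>

struct Vec4 { float x, y, z, w; };
struct Mat4 { float m[4][4]; };   // row-major 4x4 world-view-projection matrix

__global__ void transform_vertices(Mat4 wvp, const Vec4 *in, Vec4 *out, int count)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= count) return;
    Vec4 v = in[i], r;
    r.x = wvp.m[0][0]*v.x + wvp.m[0][1]*v.y + wvp.m[0][2]*v.z + wvp.m[0][3]*v.w;
    r.y = wvp.m[1][0]*v.x + wvp.m[1][1]*v.y + wvp.m[1][2]*v.z + wvp.m[1][3]*v.w;
    r.z = wvp.m[2][0]*v.x + wvp.m[2][1]*v.y + wvp.m[2][2]*v.z + wvp.m[2][3]*v.w;
    r.w = wvp.m[3][0]*v.x + wvp.m[3][1]*v.y + wvp.m[3][2]*v.z + wvp.m[3][3]*v.w;
    out[i] = r;   // clip-space position for this one vertex only
}

int main()
{
    const int count = 3;                 // a single toy triangle
    Mat4 identity = {};                  // identity stands in for a real WVP matrix
    for (int i = 0; i < 4; ++i) identity.m[i][i] = 1.0f;

    Vec4 tri[count] = { {0,0,0,1}, {1,0,0,1}, {0,1,0,1} }, result[count];
    Vec4 *d_in, *d_out;
    cudaMalloc(&d_in,  count * sizeof(Vec4));
    cudaMalloc(&d_out, count * sizeof(Vec4));
    cudaMemcpy(d_in, tri, count * sizeof(Vec4), cudaMemcpyHostToDevice);

    transform_vertices<<<1, count>>>(identity, d_in, d_out, count);
    cudaMemcpy(result, d_out, count * sizeof(Vec4), cudaMemcpyDeviceToHost);

    for (int i = 0; i < count; ++i)
        printf("v%d -> (%.1f, %.1f, %.1f, %.1f)\n",
               i, result[i].x, result[i].y, result[i].z, result[i].w);
    cudaFree(d_in);
    cudaFree(d_out);
    return 0;
}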

It is up to the application to perform the various physics, animation, and AI operations that set the world matrix for each object (or each bone for animated objects) prior to sending the matrices to the shader program and then streaming the geometry to the graphics card.

What they are planning to do in the future is not very clear, but I think they are trying to do away with the separation of duties that currently exists between the CPU and GPU. Having to move textures and data from main memory to the graphics card's memory is costly. A more integrated solution, where the CPU and GPU share main memory, would make sense to me. Maybe this is what Intel and AMD are working towards?
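
That copy cost is easy to see for yourself. Here is a rough sketch (assuming the CUDA toolkit and a discrete card; the 64 MB figure is arbitrary) that just times a single host-to-device transfer over the PCIe bus:

// Sketch: measuring the host-to-device copy cost described above.
#include <cstdio>
#include <cuda_runtime.h>

int main()
{
    const size_t bytes = 64u << 20;      // 64 MB, roughly a big texture set
    void *host, *device;
    cudaMallocHost(&host, bytes);        // pinned system memory
    cudaMalloc(&device, bytes);          // memory on the graphics card

    cudaEvent_t start, stop;
    cudaEventCreate(&start);
    cudaEventCreate(&stop);
    cudaEventRecord(start);
    cudaMemcpy(device, host, bytes, cudaMemcpyHostToDevice);  // over PCIe
    cudaEventRecord(stop);
    cudaEventSynchronize(stop);

    float ms = 0.0f;
    cudaEventElapsedTime(&ms, start, stop);
    printf("copied %zu MB in %.2f ms\n", bytes >> 20, ms);

    cudaEventDestroy(start);
    cudaEventDestroy(stop);
    cudaFreeHost(host);
    cudaFree(device);
    return 0;
}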
 

yipsl



IMHO, this will begin to be addressed with Swift. GPU cores on the CPU might be aimed (initially) at the notebook and entry-level market, but eventually one or two GPU cores will be matched with three or more CPU cores to strike a balance.

Hopefully, there will be some leeway to add discrete GPUs, so that we'd have CrossFireX or triple SLI added to the mix. If GPUs can process some of the physics down the line, then that balances out the CPU limitation issue.

Of course, I could be wrong. Most people tend to go for monster Nvidia GPUs without a care as to how drivers "optimize" (i.e. the Crysis demo water snafu) to get a few extra fps in popular FPS titles that give people around 10-20 hours of gameplay. That market seems to want to overclock the CPU to match the monster GPUs they buy every six months or so.

Still, for a mainstream gamer like myself, buying a future AMD CPU with matching GPU cores is a nice idea. If it can run LOTR Online at least as well as my current 3870 X2, then it will suit me fine. Single-player CRPGs are getting a bit too dark (first Oblivion, then The Witcher), and neither Crysis nor Gears of War 2 appeals to me.

So perhaps this is the last high-end GPU I'll buy to try to keep up with current games, especially if the midrange delivers solid performance, and even more so if that midrange GPU is integrated into a 6-8 core CPU.