How will AMD stay alive?

A "fair" assessment of a cpu could be seen as how high it can clock at stock volts.
If it goes several ranges higher consistently, then yes, it could be seen as underclocked overall.
Thing is, processes these days are pretty god on both sides, and having to keep within the TDP needs to be reassessed every now n then
 
So could they send all instructions to the GPU instead of to the CPU first, from system start-up until shutdown, since GPUs have more cores and run parallel applications 10x faster than a CPU? Or is there something a GPU can't handle, so work has to go through the CPU first and then to the GPU?
 


Only if you discount reliability. I can envision several failure mechanisms which could result in a medium probability of early failures on devices that are overclocked above stock, even if voltage remains at stock.

There are lots of reasons CPU vendors don't guarantee operation above stock speeds. It's popular to assume the vendors are just being stingy, but there are both quality and reliability considerations. Sure, as enthusiasts we probably don't care so much; we buy every couple of years. But aside from the Extreme Editions, CPU vendors are primarily targeting their chips at a mass market of people who expect them not to go dead within X years.
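
To put one concrete mechanism behind that (a standard reliability formula, not something specific to Intel's Q&R): electromigration is a classic wear-out failure, and its mean time to failure is commonly modeled with Black's equation:

```latex
% Black's equation for electromigration-limited mean time to failure (MTTF).
% A: process-dependent constant, J: current density, n: empirical exponent
% (often ~2), E_a: activation energy, k: Boltzmann's constant,
% T: absolute junction temperature.
\mathrm{MTTF} = A \, J^{-n} \exp\!\left(\frac{E_a}{kT}\right)
```

Higher clocks mean more switching per second, hence higher average current density J and higher temperature T, so the modeled MTTF falls even with voltage held at stock.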
 
GPUs simply don't have the memory currently to do the things needed, and supposedly Nvidia is working on this, as well as a few other things.
A CPU will always be needed, but how big and how powerful may be the future questions to ask.
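
To make concrete why the CPU stays in the loop: in the GPGPU model (CUDA here, as a minimal sketch with arbitrary names and sizes), the host CPU allocates memory, copies data across, and launches the kernel; the GPU only runs the data-parallel part it is handed.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Device code: this data-parallel part is the only thing that runs on the GPU.
__global__ void scale(float *data, float factor, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // one element per thread
    if (i < n) data[i] *= factor;
}

int main() {
    const int n = 1 << 20;                       // 1M floats (arbitrary size)
    float *host = new float[n];
    for (int i = 0; i < n; ++i) host[i] = 1.0f;

    // Host (CPU) code orchestrates everything: allocation, copies, launch.
    float *dev = nullptr;
    cudaMalloc(&dev, n * sizeof(float));
    cudaMemcpy(dev, host, n * sizeof(float), cudaMemcpyHostToDevice);

    scale<<<(n + 255) / 256, 256>>>(dev, 2.0f, n); // CPU launches the GPU work
    cudaDeviceSynchronize();

    cudaMemcpy(host, dev, n * sizeof(float), cudaMemcpyDeviceToHost);
    printf("host[0] = %f\n", host[0]);           // prints 2.0 if the kernel ran

    cudaFree(dev);
    delete[] host;
    return 0;
}
```

The OS, drivers, I/O, and branchy control code all stay on the CPU; the GPU never boots or runs on its own, which is why it can't take over "from start-up till shutdown."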
 

But it's widely known, and this is my point, that as processes mature, and even as new steppings arrive, those same chips become capable of higher stock clocks at stock voltages. I wasn't referring to non-makers OCing a chip, but in-house.
 
I guess, to make it a fair assessment: I'm sure that later steppings on more mature processes have allowed for this, but at the same time AMD and Intel haven't changed the ratings; it's just that, as it happens, the chances of getting a better bottom-rung chip are vastly improved.
Kinda goes into the tri-core argument as well.
 


I guess I was commenting specifically on the "danger" of considering something "underclocked overall" just because you can easily overclock it at stock. It could well be that when overclocking you are (statistically, of course) reducing the expected lifetime of the device below what Intel's Q&R would approve.

It could also be that Intel's Q&R are more paranoid than the average third-world dictator. But I couldn't comment on whether that's true or not.
 
Correct me if I'm wrong, because this has me completely confused.

Intel's Larrabee, AMD Fusion, Nvidia CUDA, ATI Stream.

Nvidia CUDA: sharing the load between both CPU and GPU instead of the CPU alone, to make video editing, rendering, etc. up to 10x or more faster than the CPU itself.

ATI Stream: same as CUDA.

Intel's Larrabee: adding a GPU to Intel's CPUs, so that Intel's CPUs will be able to do parallel applications much faster; basically it's like having a GPU and CPU in one chip.

AMD Fusion: same as Larrabee.
 
CUDA and ATI Stream run specifically coded, highly parallel processes on the GPU instead of the CPU. Larrabee is Intel's attempt at a discrete graphics card. Fusion is AMD's attempt at a GPU module built into a CPU, as far as I know.
 


Even if they bought VIA they wouldn't get it. It's non-transferable, hence why GlobalFoundries stirred up some controversy.

So Nvidia would have to get one and pay for it. And since Nvidia is a self-righteous group that doesn't like to work with anyone, I doubt Intel would just give it to them.
 
If Nvidia produced an x86 CPU, you wouldn't be able to take their benchmarks with just a grain of salt ... that's for sure.

Plus I am sure there would be lots of math shortcuts in their ALU.

Nvidia ... close enough is good enough, provided the benchies look good.

 


So Intel is trying to make their own graphics cards to compete with Nvidia and ATI? What the hell, Intel is trying to be in everything: SSDs, motherboards, CPUs, and now GPUs? In a while we might just see a whole platform made out of Intel.

And why is AMD trying to put GPUs inside their CPUs? Don't they have ATI already? And even if they put GPUs inside their CPUs, it will never be as fast as discrete graphics cards, at least not for 10 years.
 


Not fair if the stock volts are waaaay over. Heck, if I were AMD or Intel and wanted some wins, I'd set the stock voltage right next to the limit.
 


I take shortcuts in math too.
 
http://www.guru3d.com/article/radeon-hd-5850-review-crossfire/20
Again, if this trend continues, all those people touting CPUs for video transcoding may as well stop.
This is the second-highest card available beating an i7 965 @ 3.75GHz.
Soon the cheap $100+ midrange cards will be doing it as well, and no need to OC either.
So maybe even a low-end card will tackle a 920 easily, and a $70 GPU isn't going to kill anyone's bank account.
So, again, CPUs need GPUs for the things the average Joe uses most, anything video, where GPUs rule, and that's why LRB, Fusion, and Nvidia's solutions are needed.
And, again, it's why Intel needs LRB.
 
Yeah, exactly. Right now, people who do video transcoding, instead of looking at a high-end CPU, will just get a high-end GPU, which will operate much, much faster. That will be very bad for Intel. AMD, on the other hand, still has ATI even if their CPUs won't be as useful in the future, but for Intel it would be a dead end. That's why they have to work a little faster on this Larrabee and get it over with.
 
Do you think this won't change once LRB hits?
The only thing that's different now is cache; these approaches can all be handled in software for many things, so it's only a matter of time, both with GPU evolution and software rewrites.
 


We already have a whole platform made of AMD.



That is why they purchased ATI in the first place. A single unit good at both sequential and parallel processing, with the necessary technology and licenses to process 3D effectively, would be a hugely valuable asset, not to mention a marketing coup. It won't be designed to be better than current discrete graphics cards, just hugely more powerful and efficient than current integrated graphics.
 
How many cores will they be able to fit on a CPU chip? So far we have seen 6; a GPU has 240 if not more now. So how will they fit that many cores on a small chip like a CPU to take advantage of parallelism?
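
For scale, a rough sketch (mine, with arbitrary numbers): the 240 "cores" on a GPU like the GTX 280 are simple stream processors executing threads in lockstep groups of 32 (warps), and the programming model expects you to launch far more threads than there are processors:

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Each GPU "core" (stream processor) is a simple ALU; threads run in
// lockstep groups of 32 (warps). The programmer over-subscribes with far
// more threads than cores and lets the hardware scheduler hide latency.
__global__ void fill(float *out, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // global thread index
    if (i < n) out[i] = i * 0.5f;                   // trivial per-thread work
}

int main() {
    const int n = 1 << 20;                   // ~1M threads for a few hundred ALUs
    float *dev = nullptr;
    cudaMalloc(&dev, n * sizeof(float));
    fill<<<(n + 255) / 256, 256>>>(dev, n);  // 4096 blocks x 256 threads
    cudaDeviceSynchronize();

    float first = 0.0f;
    cudaMemcpy(&first, dev, sizeof(float), cudaMemcpyDeviceToHost);
    printf("out[0] = %f\n", first);
    cudaFree(dev);
    return 0;
}
```

A CPU core spends its transistor budget on caches, branch prediction, and out-of-order logic to make a few threads fast; a GPU spends it on hundreds of simple ALUs. So the core counts aren't directly comparable, and nobody expects a CPU to fit 240 cores of the CPU kind.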
 