How will AMD stay alive?


Hmmm
So, when have I said they weren't for GPU purposes that you'd ask?
I couldn't care less what a CPU does for graphics, because they do very little, and that's why we have GPUs.
As I said, they failed (CPUs), and that's why we have GPUs now, get it?
The flexibility I'm referring to is as I said. They're made up of thousands of cores, not 4 or 6 or 8 or 48, but thousands.
They can be neutered, as I'd said again, as you seem to not get what I'm saying, and thus can be set for market segments accordingly, again, as I said.
Their core speeds haven't maxed out, again as I said, they scale very well, and their IPC has who knows how much capability left, again, as I've said.
So, which part didn't you get?
If LRB comes out with say 48 cores, and its next iteration has say 32 cores, it's pretty much cut and dried as to how it'll be, whereas a GPU can be altered in any number of ways to just edge that part out, thus the flexibility.
We see this all the time: tweak the core, the shaders, the memory, cut the shader counts, use a different bus, etc.
So, yes, GPUs are MUCH more flexible than CPUs when it comes to rendering games, unless somehow LRB is a complete exit from traditional CPU makeup, and being this early on, I don't think that's something Intel will be great at; they simply don't have the experience, or haven't had the silicon.
 

So, when has HKMG ever been stated as coming out on an exact date? This is the first I've heard of it.
The 32nm was supposed to be out Q1 or so, but without HKMG, so I'm not sure what they're saying here; maybe they're a bunch of doofus FUD spreaders?
 
I don't see how LRB can even be compared without a decent view of it. Sure, we can look at the SPs. I wish that meant a lot. Well, in GPGPU it does, but not in gaming.

Overall I am just waiting to see what LRB brings to push nVidia and ATI with. If Intel can hit them in the right area where it hurts, it will make them start to deliver more HD4K -> HD5K "progress," or possibly better.

And how can that hurt us? People seem to want Intel's LRB to fail, yet they forget that another GPU in the market would be great. It would change the game up a lot, which will benefit US.

I don't see how it's a bad thing, even IF it doesn't blow everything away, if it helps push GPU progress.
 
Yeah, well, he and Fugger were also supposed to have that cherry-picked 975 at 7GHz, and bragged that up to no end as well. It never beat the P2, and it was Intel's best shot too.
Everything they say is slow on GPUs, like low or no cache, looping latencies, having to hide the latency, etc., gets mentioned by him and Intel over and over, yet they never address their own issues, which is a software-driven, non-fixed-function part where latency abounds, and it's this software working with drivers where Intel's never done well.
I know they've hired outside, simply because what they had was woefully inadequate and couldn't even keep decent IGP drivers and functions, so they made the right move, spent more money, etc.
My whole point is this.
People go on and on about how GPUs aren't important. They're still stuck in the last century regarding CPUs and their old functions, and don't see the value of parallel computing, where GPUs shine like nothing else.
Intel uses x86 CPUs to simulate a GPU as best as possible, but why?
Do they really think it's that important?
Everyone and their dog knows things are moving towards mobile, so why the big push for a GPU?
People aren't being honest with themselves.
The CPU is dying in its expansion, and can do little to change what we've been seeing for a while now.
Sure, multithreading will be nice, but it's very limiting. Some things will always be serial, and almost any app has some of those serial threads in it.
Like they say, a chain's only as strong as its weakest link, and those serial threads are just that.
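That weakest-link point is basically Amdahl's law; here's a rough sketch of the math, with purely illustrative numbers rather than figures from any real app:

```latex
% Amdahl's law: best-case speedup S(N) on N cores when a
% fraction s of the runtime is inherently serial.
\[
  S(N) = \frac{1}{\,s + \frac{1-s}{N}\,}
\]
% Illustration only: with s = 0.10, even as N grows without bound,
% S(N) < 1/0.10 = 10, so ten percent of serial work caps you at 10x.
```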
So, to everyone in denial about CUDA and GPGPU solutions, especially Intel fans living in a CPU haven: it's time to consider that even though AMD spent a lot of money for ATI, and they may think that was stupid, they really need to ask themselves questions like, why is Intel also spending billions just for a mobile GPU?
So, next time someone gets the idea of knocking GPGPU/CUDA/whatever solutions while they pet their Intel rig, laughing, they need to start paying attention.
Intel isn't planning on selling and making a ton of money on just GPUs?
No, they've spent billions, folks. Just like AMD did, and they're copying them.
No one woke a sleeping giant, and of course there are those that'll say Intel thought of it first, while at the same time claiming the sleeping giant was roused.
Intel copied AMD. It wasn't CUDA that got Intel going, it wasn't Intel that bought ATI, it wasn't Intel's insight to do that, it was AMD's, and now they're the ones spending their billions, and it has a lot more to do with it than simply a GPU to run games.
So I'm saying, it's time for some to wake up. Intel saw the writing on the wall, and it's time for others to admit these things and quit being obtuse.
And if someone here thinks I'm referring to them, then where was the sleeping giant quote from? That's right, another site.
They're doing Intel's work for them, making them look good, while all they're doing is seeing that AMD has the right idea and got in before they did.
Now, whether or not AMD actually is able to achieve this first is another point altogether, since yes, Intel is outspending AMD probably 20 to 1, so they better get LRB out, and it better be all that and cake too.
 
It would be great to see just how much money Intel has thrown at Larrabee, so we can compare it to the 'overpriced' acquisition of ATI.

I guess we won't know until Larrabee fails, as it likely will, and the commentators start labelling it the $x billion flop.
 


The problem was the July roadmap showing one timeframe, and the 'adjusted' roadmap showing a delay. Now, whether this is a GF problem with HKMG, or due to AMD having trouble with BD, or both, is not clear. IIRC there were some issues with the gate-first approach that had to be resolved (such as the gate melting during the 1000°C anneal needed for the most effective hafnium blend), which is why Intel went with the gate-last approach. I'd guess only IBM/GF know for sure.

EETimes is a professional publication for electrical engineers, not given to doofus FUD spreading like FUDzilla and some other "news" sites 😀.
 
I haven't seen much info about how much money Intel has spent to date on Larrabee - why would it be in the 'billions'? They already have a ton of experience with CPUs, which is what Larrabee is, with some vector engines added. And the experience they get with Larrabee will cross-fertilize their CPU development. Anyway, I think Intel plans on putting Larrabee on either the Ivy Bridge or Haswell CPU die.

And all the 'serial threads' stuff applies even more so to GPGPU. GPUs are optimized for one task, rendering graphics as fast as possible, so I'd agree with the statement that they are less flexible than a general-purpose CPU. And even though modern GPUs handle branches, I doubt they do it nearly as efficiently as CPUs do.
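For a concrete feel of why branching costs more on a GPU, here's a minimal, hypothetical CUDA sketch (the kernel name, sizes, and data are made up for illustration): when threads in the same 32-wide warp take different sides of a data-dependent branch, the warp executes both sides one after the other rather than predicting its way down a single path.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Hypothetical kernel: neighbouring threads take different branches,
// so each 32-thread warp ends up running BOTH paths back to back
// (divergence) instead of speculating down one path like a CPU would.
__global__ void divergent_kernel(const float* in, float* out, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;

    if (in[i] > 0.0f)              // data-dependent branch
        out[i] = sqrtf(in[i]);     // half the warp runs this...
    else
        out[i] = -in[i];           // ...then the other half runs this
}

int main()
{
    const int n = 1 << 20;
    float *in = nullptr, *out = nullptr;
    cudaMallocManaged((void**)&in,  n * sizeof(float));
    cudaMallocManaged((void**)&out, n * sizeof(float));
    for (int i = 0; i < n; ++i)
        in[i] = (i % 2) ? 1.0f : -1.0f;   // worst case: neighbours diverge

    divergent_kernel<<<(n + 255) / 256, 256>>>(in, out, n);
    cudaDeviceSynchronize();

    printf("out[0] = %f, out[1] = %f\n", out[0], out[1]);
    cudaFree(in);
    cudaFree(out);
    return 0;
}
```

A CPU's branch predictor would just speculate down one side here; that divergence penalty is the practical reason branchy, data-dependent code tends to stay on the CPU.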
 



:sarcastic: :sarcastic:

With all of the benefits that you just listed, it still comes back to one question: why hasn't GPGPU gained any traction at all? When was the last time you actually saw loads of applications taking advantage of GPGPU? The problem with GPGPU is that the cores are simple, too simple. That is why it can only do simple tasks that are programmed to run massively in parallel, like rendering. Other than that, they fail miserably. When was the last time you saw GPGPU running normal daily applications? You don't, because most applications are still being written sequentially, not in parallel.
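To make the sequential-vs-parallel distinction concrete, here's a small, hypothetical CUDA sketch (function names and sizes are invented for illustration): the per-element operation maps naturally onto thousands of simple GPU cores, while the loop whose every iteration depends on the previous one gains nothing from extra cores and stays serial on the CPU.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Embarrassingly parallel: every output element is independent, so
// thousands of simple GPU cores can each handle one element at a time.
__global__ void scale_pixels(const float* in, float* out, int n, float gain)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        out[i] = in[i] * gain;
}

// Inherently serial: step i needs the result of step i-1, so extra
// cores (CPU or GPU) do not help here at all.
float running_filter(const float* in, int n)
{
    float acc = 0.0f;
    for (int i = 0; i < n; ++i)
        acc = 0.5f * acc + in[i];   // each iteration depends on the last
    return acc;
}

int main()
{
    const int n = 1 << 20;
    float* host = new float[n];
    for (int i = 0; i < n; ++i) host[i] = 1.0f;

    float *d_in = nullptr, *d_out = nullptr;
    cudaMalloc((void**)&d_in,  n * sizeof(float));
    cudaMalloc((void**)&d_out, n * sizeof(float));
    cudaMemcpy(d_in, host, n * sizeof(float), cudaMemcpyHostToDevice);

    // The parallel part spreads across thousands of GPU threads...
    scale_pixels<<<(n + 255) / 256, 256>>>(d_in, d_out, n, 2.0f);
    cudaDeviceSynchronize();

    // ...while the dependent loop has to stay on one CPU core.
    printf("serial result on the CPU: %f\n", running_filter(host, n));

    cudaFree(d_in);
    cudaFree(d_out);
    delete[] host;
    return 0;
}
```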



That's as useful as saying, "Yes, a sports car is MUCH more flexible when it comes to running at extreme speeds". Is it more flexible in general than a normal car?

We've been hearing a lot about CUDA, about FireGL, but no one can really reap the benefits. We've seen nVidia going on and on about how CUDA can easily outperform a CPU at encoding and decoding, but most, if not all, of the media programs still run on CPUs. So how exactly is a GPU more flexible?



So I see... JDJ being himself again. :sarcastic: :sarcastic:
 
CUDA and Stream encode videos at a ridiculously fast rate compared to CPUs. This is still a fairly new technology, and what you aren't getting is that there is no reason why other new technologies taking advantage of GPUs won't become available.

It won't be long before you have, say, a Dragon platform or its future equivalent offloading multiple tasks to and from the CPU/GPU almost seamlessly. No software required, it just works.

This is one reason why graphics are so important, and why Intel is spending millions/billions on catching up. People with discrete graphics have had an incredibly powerful processor sitting doing nothing much in their PCs for a long time, but that is changing. If nVidia hadn't been so greedy with CUDA it would already be pretty dominant and standard, but that could change too.
 
Anyway, in the near future you would expect to see Phenoms and Radeons working together almost as a single unit.

Notice there is no room for Intel CPUs or nVidia graphics in this; it's purely about having the platform.

All those tasks that Nehalems do better than Phenoms? We'll see just how much better they are when the Phenom is combined with a Radeon.

All the talk is about Bulldozer, and yet AMD already has the most important parts. The combined weight of ATI graphics and AMD processors far, far outweighs Intel's combined CPU and 'graphics'. That is why Larrabee has to be good, and really good.
 
Tell me one thing I posted in that long post that's wrong. You don't get it, do you?
Why do you think nVidia is becoming more CPU-like? Why do you think compute shaders are here? Why do you think WDDM 1.1 is here and does what it does?
I think some people really don't see LRB for what it really is.
Yes, it'll render games, and probably be decent at it, but it won't shake the world.
But Intel's intentions aren't there for only that.
This isn't about who's right; it's beyond that. Think: why would Intel invest billions of dollars for a GPU only? It doesn't make sense, there are better ways for your money to be spent, right?
Like I said, AMD has been slammed for doing what Intel is now doing, and yes, it's cost them, but again, that's not the point. If people can't truly see the reasons for the ATI acquisition and LRB, and the untold billions of dollars spent on them beyond just making GPUs in a field/market that's struggling, there's not a lot I can say besides just wait and see.
 


This is something new; I never thought AMD would be so "meaningful".
Do you have any top 10 of these things? LOL. =)
 
Why do people think BD is delayed? This is THE transition, the reason Intel's spent its own billions.
Tell me the last time Intel's gone out and hired hundreds of new people, spent over 3 years of dev time, and plans on keeping all those people? And guess what, they don't come cheap, nor do all those man-hours that're adding up each day, besides the fab costs.
If people want to believe it's so Intel can say they too can play Crysis, then there's not a lot I can say to convince them otherwise.
 



I was so bored after the news of a 32nm delay that I had to retaliate somehow.
 



If anyone feels they can throw an axe at AMD's expense - carry on.

I'm not a fanboy, JennyH, but I am a fan of reliability over anything else, to be honest. Although Prescott was dreadful, AMD had a dreadful chipset on its mitts, and nVidia was really the only viable (no pun intended) option, which was below par. God, did that southbridge get hot, even without SLI.
 
http://en.wikipedia.org/wiki/Larrabee_(GPU)

"A June 2007 PC Watch article suggests that the first Larrabee chips will feature 32 x86 processor cores and come out in late 2009, fabricated on a 45 nanometer process. Chips with a few defective cores due to yield issues will be sold as a 24-core version. Later in 2010 Larrabee will be shrunk for a 32 nanometer fabrication process which will enable a 48 core version."

 
I just don't understand people thinking LRB is on time, isn't costing a ton of resources, and is for games only.
It's late, the bug fixes are being done now, and the first spin sucked badly.

As for Intel's i7 being so dominant in multi-card setups and making a huge difference with the new cards:
http://translate.google.se/translate?u=http%3A%2F%2Fmikrodatorn.idg.se%2F2.1030%2F1.253738%2Fmikrodatorn-hardtestar-radeon-hd5870&sl=sv&tl=en&hl=sv&ie=UTF-8
Here, and elsewhere, it's not showing it, and whatever lead they did have, they've lost to driver improvements.
So, yes, AMD can survive. If Intel depended on LRB, they'd be doing an AMD financially right now, because it is late and it hasn't performed right at all. But one thing's for sure, they won't release it with bugs like Phenom and AMD did; at least, I hope not.
So if Intel can screw the pooch with all its resources, give AMD a break.
 


There is a huge difference between buying a company, and overpaying for it (AMD), and creating your own unique GPU design. In fact, it's harder to create one than to just buy a company, although most companies do just buy what they want since, as I said, it's easier.

But Intel is not copying AMD. In fact, we have no idea how long Intel has been working on Larrabee, and it's possible they had the specs for it well before they announced it. They might have had to wait for 45nm in order to implement it in a successful way. We don't know.

But still, buying and creating are not the same.



Well the "overpriced" aspect comes from a economical standpoint. If you look at the assets and actual net worth of ATI at the time AMD sealed the deal you can easily see that it was not worth nearly $5 billion dollars. And no I don't '"love" nVidia. Its a complete opposite really. I don't like their drivers or their GPUs. I am just like TC. I look at more than just the tech side of it because the tech side only affects it so much. The business side is the biggest impact, as in the money they make.

And the only thing I can say is wait. If larrabee does well then it does well. if it sucks, it sucks. But there is always the next time around. Do you think ATI beat nVidia off the bat? Hell no. It took till their Radeon 9 series for that. the 9700Pro to be exact.

Did AMD just begin and plaster Intel? Nope. It took them over ten years. But it can happen. Intel can beat nVidia and ATI. Its even possible that LRB will do it off the bat. Because if SPs matterd then the 4870 would plaster any nVidia card, even a dual GPU from them.

Just wait and see what happens.
 


I agree. In the non-CPU market, anyway. I think more than two major CPU competitors would start to make it more complicated.

I'm not saying LRB will destroy anyone, although I wish it would so the 5870 X2s drop to below $200 so I can get one; it's just that without any knowledge of it we cannot judge it.
 