How will AMD stay alive?

If AMD had had more money earlier, it might have bought ATI even sooner, but that also ties back to the EU's findings about Intel's influence on the market.
We know Intel had the cash, but did AMD?
I notice you're giving credit to Intel, and though it sounds possible, I doubt it, as they could have bought ATI too, and sooner.
It could still end up biting Intel for not doing so and just taking the CPU approach; we simply don't know yet. It's very possible that's the wrong way to go, and that using true GPUs in your Fusion, if you will, could end up being the best route. Then again, creating your own, using what you're familiar with, is the easiest way as well, but it may not be the best.
 


The reason Nvidia is going towards more general processing is that they realize they simply can't rely on selling high-end graphics cards to power users in order to survive. Now that Nvidia is almost out of Intel's chipset market, they need to find new revenue. Aside from that, Nvidia realized they also need to provide something consumers can buy as a whole platform, not just components. Personally I think that attempt is quite futile, but we shall see.

Make no mistake, using a GPU solely as a general processing tool is as smart as using a sports car for grocery shopping. Yes, the GPU architecture does benefit in some (read: some) situations, but certainly not every one of them, or even most of them.

This is why Intel and AMD both invested a lot of money in developing a general purpose GPU, so it can be integrated into their CPU platforms. GPGPU is not here to stay, but a hybrid CPU/GPU is. No one is stupid enough to buy a graphics card just for the sake of doing some processing. It's just not economical.

Therefore I see absolutely no reason why AMD would want to continue down the road of GPGPU with its Radeon 58xx line, even going as far as to compare a 5870 to an i7 975. Yes, the 5870's throughput is very impressive, but without the right software it's as good as none.
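To put the "some (read: some) situations" in concrete terms, here's a minimal CUDA sketch of the sort of job a GPU genuinely is suited for: every element is independent, so thousands of simple cores can chew on it at once. It only uses the standard CUDA runtime; the array size and launch configuration are just numbers picked for illustration. Most everyday software doesn't decompose this neatly, which is exactly the point being made above.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// SAXPY (y = a*x + y): the textbook "embarrassingly parallel" job, where
// every element can be computed independently -- exactly the kind of work
// a GPU's many simple cores are good at.
__global__ void saxpy(int n, float a, const float *x, float *y) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) y[i] = a * x[i] + y[i];
}

int main() {
    const int n = 1 << 20;
    size_t bytes = n * sizeof(float);

    float *hx = (float *)malloc(bytes), *hy = (float *)malloc(bytes);
    for (int i = 0; i < n; ++i) { hx[i] = 1.0f; hy[i] = 2.0f; }

    float *dx, *dy;
    cudaMalloc(&dx, bytes);
    cudaMalloc(&dy, bytes);
    cudaMemcpy(dx, hx, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(dy, hy, bytes, cudaMemcpyHostToDevice);

    // One thread per element; thousands of threads run concurrently.
    saxpy<<<(n + 255) / 256, 256>>>(n, 3.0f, dx, dy);
    cudaMemcpy(hy, dy, bytes, cudaMemcpyDeviceToHost);

    printf("y[0] = %f\n", hy[0]);  // expect 5.0
    cudaFree(dx); cudaFree(dy); free(hx); free(hy);
    return 0;
}
```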



I agree with you wholeheartedly. I mean, how dare they? How dare they even remotely suggest that AMD may be delaying THE TRANSITION? I mean, look at Barcelona! It was THE TRANSITION for them in 2006, with its 40% better performance across the board... oh wait, that's not right, is it?

Given how bleak AMD's financial condition is now (losing money consecutively since 2006), and with the $1.5B of senior notes due in 2012, I really have some doubts as to how much AMD can pull off. Although some unconfirmed reports say that Bulldozer will be a game changer, and I would be very impressed to see it materialize, only time will tell whether Bulldozer lives up to Hector's hype. Saying whether Bulldozer will be delayed or not is just pure speculation.

Oh what? You said Abu Dhabi will bail them out? Remember that ATIC is actually in a joint venture with AMD, not AMD's backer. It just took the fabrication plants off AMD's hands and jointly invested money in the new Global Foundries. I must have missed the part where the deal explicitly said that ATIC will give AMD money.

But again, this is AMD! AMD can do anything! I mean, this is the company that claimed the first silicon of Bulldozer would be made on a 45nm process node and launch in 2009! Oh wait... that's not right either.



That's not what Jack said though.



Hmm... should I trust him, the ultimate, rabid Intel fanboy, or you, the Mod of THGF? :sarcastic:
 
First off, Intel's been working on RT since day one with LRB, per my links, and other links show LRB as being late, not on time.
The best rumor is that new silicon is only just being done, and the bugs are still being worked out.
As for ATI and RT, don't look there; it's not what they've been doing. And RT isn't ever going to impress, because those that really know, know that RT is but a pipe dream, and at the very least and very best there'll have to be a compromise, using raster and RT techniques together.
RT is NOT what you want to see out of LRB, not for gaming. Possibly, as my link shows, for movies in slow render, but for realtime RT? Forget it, especially as we migrate to larger screens and higher resolutions. The games and the CPUs haven't caught up to the GPUs, and you don't want to use RT on large screens, so it's not getting easier; it's only going to get harder to do, and it isn't anywhere close at this point.
You may ask whomever you want about this, but I'd get at least 4 opinions on RT, and then go from there.
 


If AMD had more fabs, or had invested in more fabs so they could produce more CPUs when they were in high demand, especially for the low-end OEMs who wanted them, then maybe AMD would have had more money to spend on everything. But they don't. They bought ATI instead of building more fabs, like the one in New York. Funny how the anti-trust case from New York state kinda disappeared after AMD decided to go fabless, isn't it?

Also, yes, I am giving credit to Intel because they decided to take the more difficult route. They could have easily bought ATI or nVidia but decided not to. TBH, it takes stones to try to break into a market against two experienced competitors with a new type of GPU utilizing something completely different from what is currently out.

And normally using what's familiar is the right way. It was proven with AMD's x86-64. But that was only true since we are still holding on to 32-bit. I am sure once 32-bit is near obsolete, IA64 might have a bigger place due to its native 64-bit arch. But that's yet to be seen.

As always, I never said LRB will kill the others. We don't know jack squat. We just know it will be something new to challenge ATI/nVidia and hopefully bring GPU innovation to a new level but for some reason a lot of people, you included, don't seem to realize that. As I said, there is never anything wrong with more competition in most markets of PC technology.
 
As for the SIMD and MIMD approaches, both nVidia and LRB are going this direction, though I'm not sure how the cache is going to be worked out on the G300. But if it's anything like what I've been reading, it'll do a lot of things CPUs can do, only 20 to 100 times faster, and I mean a lot of things.
The supercomputing and scientific markets will be inundated with them. It's already starting to happen, and as GPUs become more flexible, and can use a cache, handle DP and so on without huge looping, they'll be killing CPUs.
I'm talking about not-so-parallel workloads, by the way, not like it is currently. And even currently, they're just now developing the SW for all this, including W7, compute shader, OpenCL etc, which will all work on CUDA.
I'm betting, and so is nVidia with its own future, that CUDA apps appear faster than MT apps do. Since one is the CPU's future and the other is the GPU's expansion and future, we will just have to see, and again, that's the reason why both AMD and nVidia are doing this with their GPUs, and Intel is trying to make one.
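For a rough picture of what "using a cache without huge looping" means in practice, here's a sketch of a block-wide sum in CUDA. The shared-memory scratchpad stands in for the on-chip cache being talked about; the 256-thread block size and the input data are just assumptions for illustration, not anything G300-specific.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Block-level sum reduction using on-chip shared memory as a small,
// software-managed cache: each block loads its slice of the input once,
// then combines partial sums without repeatedly looping over global memory.
__global__ void blockSum(const float *in, float *out, int n) {
    __shared__ float cache[256];
    int tid = threadIdx.x;
    int i = blockIdx.x * blockDim.x + tid;

    cache[tid] = (i < n) ? in[i] : 0.0f;
    __syncthreads();

    // Tree reduction inside the block, entirely in shared memory.
    for (int stride = blockDim.x / 2; stride > 0; stride >>= 1) {
        if (tid < stride) cache[tid] += cache[tid + stride];
        __syncthreads();
    }
    if (tid == 0) out[blockIdx.x] = cache[0];
}

int main() {
    const int n = 1 << 20, threads = 256, blocks = (n + threads - 1) / threads;
    float *hin = (float *)malloc(n * sizeof(float));
    for (int i = 0; i < n; ++i) hin[i] = 1.0f;

    float *din, *dout;
    cudaMalloc(&din, n * sizeof(float));
    cudaMalloc(&dout, blocks * sizeof(float));
    cudaMemcpy(din, hin, n * sizeof(float), cudaMemcpyHostToDevice);

    blockSum<<<blocks, threads>>>(din, dout, n);

    float *hout = (float *)malloc(blocks * sizeof(float));
    cudaMemcpy(hout, dout, blocks * sizeof(float), cudaMemcpyDeviceToHost);
    float total = 0.0f;
    for (int b = 0; b < blocks; ++b) total += hout[b];  // finish on the CPU
    printf("sum = %f (expect %d)\n", total, n);

    cudaFree(din); cudaFree(dout); free(hin); free(hout);
    return 0;
}
```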

@jimmys
AMD may have gotten there sooner if Intel hadn't broken the law, too.
It all comes down to timing, and I haven't bothered to look up exactly how bad it was, or when Intel went to HP et al. and said: cut out AMD completely or no good business for you.
If it was early on, before their fabs were actually running full out, they would obviously have done so even earlier.
If this were the case (this is all hypothetical), they may have started on their fabs sooner, and things could be completely different today.
A 1 billion dollar fine is a lot of money, and they must have really cost consumers a lot in losses for it to be so high.
If consumers were that affected, then AMD was even more so, and like I said, it all comes down to timing.
I'm just really tired of people shrugging off what Intel did, when it's been found guilty in 3 countries across the planet so far, and claiming it supposedly has never ever had an effect on AMD, or their plans, or profits, or market share, or possible future existence, etc.
You can overlook it and say whatever, anyone can, but we really don't know, and to shrug it off is comparable to ignoring the fines and the findings.
It all sounds good for those who are trying to write history, which is usually written by the winner, but it's only later on that we find out we gave those bad blankets out, and took the kids away from their parents and such things, not right away.
 


I'm not sure how LRB is late when Intel never put out an actual date beyond either late '09 or early 2010. Until there is an actual date put on paper that they miss, it's not late.

And think of it this way: ATI doubling its SPs from 800 to 1600 sounds like a pipe dream, but it's done. I mean, you literally double the SPs, increase the memory and core clocks, and get lower idle and load temps and power draw.

Nothing is ever a "pipe" dream. People back in the day of the 8086 never thought what we have now was even possible. My grandfather used computers with chips in them, and to this day he never saw what we have coming. Hell, in 15 years we went from a Pentium @ 75MHz to quad cores hitting 4GHz on air at 45-32nm, multiplying performance by an insane amount. Pipe dreams exist, but when it comes to technology, plenty of things that were "never going to happen" have happened. Even Moore himself said Moore's law would fail; then Intel introduced HK/MG and kept it alive.

As for RT, I think it's amazing what can be done, but you are right. I doubt we will see a fully RT game anytime soon since rasterization does some things better than RT. But mix both and you can have an even better-looking game.

Still, graphics don't mean squat, but that's what tends to drive most gamers.

OOOOOHHHHHH!!!! Pretty!!!!!
 
Decent RTRT performance is good as long as it can do it alongside rasterisation. We will probably never see 100% ray traced games, as ray tracing is far too inefficient for many components of a scene. Rasterisation + ray tracing can mimic a 100% ray traced scene pretty well, while taking a hell of a lot less time to render. You just need to use ray tracing for the right jobs (lighting, shadows, reflections), rather than everything.
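A toy sketch of what that hybrid split could look like, in CUDA: assume a rasterizer has already written per-pixel world positions into a G-buffer, and only the shadow query is ray traced (one ray per pixel against a single sphere occluder). The scene, the fake flat-floor G-buffer, and all the numbers here are made up purely for illustration; a real renderer would obviously be far more involved.

```cuda
#include <cstdio>
#include <cmath>
#include <cuda_runtime.h>

// Hybrid pass: the rasterizer handles primary visibility; this kernel only
// does the ray-tracing-friendly part -- one shadow ray per pixel toward the
// light, tested against a single sphere occluder.
struct Vec3 { float x, y, z; };

__device__ Vec3 sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
__device__ float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

__global__ void shadowPass(const Vec3 *gbufPos, unsigned char *shadowMask,
                           int w, int h, Vec3 light, Vec3 sphereC, float sphereR) {
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x >= w || y >= h) return;
    int idx = y * w + x;

    // Ray from the rasterized surface point toward the light.
    Vec3 o = gbufPos[idx];
    Vec3 d = sub(light, o);
    float len = sqrtf(dot(d, d));
    d = {d.x / len, d.y / len, d.z / len};

    // Standard ray/sphere intersection test for the occluder.
    Vec3 oc = sub(o, sphereC);
    float b = dot(oc, d);
    float c = dot(oc, oc) - sphereR * sphereR;
    float disc = b * b - c;
    float t = (disc > 0.0f) ? (-b - sqrtf(disc)) : -1.0f;

    shadowMask[idx] = (t > 0.001f && t < len) ? 1 : 0;  // 1 = in shadow
}

int main() {
    const int w = 64, h = 64;
    Vec3 *hpos = (Vec3 *)malloc(w * h * sizeof(Vec3));
    for (int y = 0; y < h; ++y)                // fake G-buffer: a flat floor
        for (int x = 0; x < w; ++x)
            hpos[y * w + x] = {(float)x, 0.0f, (float)y};

    Vec3 *dpos; unsigned char *dmask;
    cudaMalloc(&dpos, w * h * sizeof(Vec3));
    cudaMalloc(&dmask, w * h);
    cudaMemcpy(dpos, hpos, w * h * sizeof(Vec3), cudaMemcpyHostToDevice);

    Vec3 light = {32.0f, 50.0f, 32.0f};        // light above the floor
    Vec3 sphere = {32.0f, 10.0f, 32.0f};       // sphere occluder below it
    dim3 block(16, 16), grid((w + 15) / 16, (h + 15) / 16);
    shadowPass<<<grid, block>>>(dpos, dmask, w, h, light, sphere, 8.0f);

    unsigned char *hmask = (unsigned char *)malloc(w * h);
    cudaMemcpy(hmask, dmask, w * h, cudaMemcpyDeviceToHost);
    int shadowed = 0;
    for (int i = 0; i < w * h; ++i) shadowed += hmask[i];
    printf("%d of %d pixels are shadowed by the sphere\n", shadowed, w * h);

    cudaFree(dpos); cudaFree(dmask); free(hpos); free(hmask);
    return 0;
}
```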
 
Just as they are, rays for those lighting jobs; otherwise yes, far too costly, and it just ain't gonna happen.
The RT LRB thing was nothing more than a show, and besides, like my earlier link says, it'll never be used for games, but Hollywood will eat it up.
 


You mean like Intel with Larrabee? Larrabee isn't coming out in 2009, and given the lack of any real progress, I doubt we'll see it in 2010. Quite possibly never, because a flawed idea will only give a flawed product, no matter how much time and how many billions are thrown at it.
 
I think JDJ is on to something with Hollywood. Anything that cuts down render times is good, and currently rendering is done in expensive (and power-hungry) render farms, which could get a lot cheaper with Larrabee if it can deliver good render speeds. When render times are measured in months instead of hours, any decrease is welcome.

As for RT and the consumer, meh.
 
Dreamworks signed up with Larrabee years ago; I can only guess that was based on theory rather than any working prototype.

Intel isn't creating it for Hollywood, that's for sure; there's not nearly enough cash in that.
 
17X the power of an i7... with a 5870.

Just a pity that there isn't much software out there to make that kind of use of the power of a graphics card in general.

Now if the OS could directly make use of the GPU ... well c'mon ... that is where we should be heading.

Isn't it?
 


First of all, "Jeepie Jeepie You" sounds kinda ridiculous if you say it in public :). So there's one big problem right there! :kaola:

When was the last time you actually saw loads of applications taking advantage of GPGPU? The problem with GPGPU is that the cores are simple, too simple. That is why it can only do simple tasks that are programmed to run massively parallel, like rendering. Other than that, they fail miserably. When was the last time you saw a GPGPU running normal daily applications? You don't, because most applications are still being written sequentially, not in parallel.
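A tiny illustration of that sequential-vs-parallel divide, with entirely made-up example functions: the first kernel maps one thread to one independent element and is a natural fit for a GPU, while the second loop carries a dependency from one iteration to the next, so it can't simply be handed out to thousands of threads.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// (a) Data-parallel: each output depends only on its own input.
//     Trivial to hand to a GPU -- one thread per element.
__global__ void brighten(const float *in, float *out, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) out[i] = in[i] * 1.1f;
}

// (b) Sequential: iterating the logistic map. Step n+1 needs step n's
//     result (a loop-carried dependency), so the iterations cannot be
//     split across separate GPU threads.
float logisticMap(float x, float r, int steps) {
    for (int i = 0; i < steps; ++i)
        x = r * x * (1.0f - x);  // each step depends on the previous one
    return x;
}

int main() {
    // The sequential part has to run step by step, here on the CPU.
    printf("logistic map after 100 steps: %f\n", logisticMap(0.4f, 3.7f, 100));
    // (brighten() is shown for contrast only; launching it would need the
    //  usual cudaMalloc/cudaMemcpy plumbing from the earlier SAXPY sketch.)
    return 0;
}
```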

100% agree. If jeepie-jeepie-you was such a giant performance boost for everything, we'd be seeing MS and other software devs porting a ton of stuff to it already. Instead, it remains a niche market - the 'next big thing' for some years now, but never quite arriving. To me, it's just Jen-Hsun koolaid, although for some apps like F@H it does seem quite impressive.
 
Well, GPGPU is still very new. Consider that Intel had a couple of goes at hyper-threading before it was even considered useful; GPGPU is probably further along the road than hyper-threading was on the P4 years ago.
 


I seriously doubt this will be as easy or as quick as you imply. Unless Microsoft sticks GPGPU support into the OS, and the various software devs also support it, AMD will be stuck with doing the porting by itself (and of course doesn't have the $$ to get into that area).

OTOH, seeing as how most PCs don't even have a discrete GPU, and considering the marketshare that "Dragon" is likely to get (<1%?), why would anybody except AMD want to bother with it? You might wanna read up on the history of the separate FPU chip, the 8087, and see how it fared marketshare-wise. Short answer - not well. Even after Intel included the FPU on-die with the 486 IIRC, many apps did not take advantage of it when they could have used floating point instead of integer. It wasn't until Intel popularized MMX and later SSE that apps would commonly take advantage of specialized hardware & instructions.

Also, take a look at how well AMD did with 3DNow, their 'exclusive' accelerator instruction set. Why do you think AMD is including Intel's AVX in BD? In other words, until AMD has the marketshare clout to dictate standards, they have to meekly comply with those set by the industry leader! 😀

 


Hmm, nice find, JDJ. My thought is that native speech recognition, to replace the keyboard & maybe mouse, would be a big driver for specialized hardware, even GPGPU :). Being tied to a keyboard is the biggest single drawback to a true PDA that you could use anywhere, anytime. And lack of a HUD may be the 2nd :). And just like it took a bit of getting used to people jabbering apparently to themselves when bluetooth earpieces became common for cellphones, maybe by the time 2015 rolls around, people barking orders & such to themselves in elevators won't seem peculiar 😀.
 


When your i11 is getting turned over by a Phenom X8 with a Radeon 7890 doing the donkey work in a lot of apps, that will show up in the benchmarks. People will see some big results in favour of the Phenom and ask themselves what exactly the i11 is doing that makes it cost more. Increased marketshare will be a product of that.

Basically put, anything at all that requires video is hugely in favour of AMD, even now. When the Phenom starts offloading tasks to the Radeon, and the core i11 starts offloading tasks to the GMA...lol...well the end result is pretty apparent, even now.
 


IMO, Larrabee is a sorta hybrid between an ordinary GPU and that 80-core experimental CPU that Intel demoed a couple years ago. So I dunno how you can break down the R&D expenses between basic CPU, GPU and Larrabee. IOW Larrabee is not a completely separate effort on Intel's part.

And until BD and whatever Intel processor that includes Larrabee on-die (Sandy Bridge maybe?) can be compared, we won't know whether a combination of a complex general purpose CPU + highly specialized GPU is better than a complex CPU + bunch of simple CPUs with specialized vector engines. It could be that AMD has to spend more billions re-engineering some BD successor to copy Intel :).

 


Last count, TC had about half a dozen forum wives, including Baron the Domin-Matrix ! 😀

I don't think there's anybody on Tom's that TC wouldn't marry... 😗
 


IIRC Dreamworks signed up to use Nehalems for their render farms, giving AMD Opterons the boot up the arse, seeing as how Nehalem ES samples were out and performing well at the time. Larrabee is still experimental, not in engineering yet AFAIK.

Please try to get the occasional fact right, will ya?? 😀
 


Statement with personal sentiment? :sarcastic: What happened to the "wait and we'll find out" attitude?



But again, they will be mostly limited to scientific or high performance computing, where the code is mostly simple and parallel. GPGPU is not going to take over the regular consumer space, because it just doesn't make sense.
 


Aahh, jenny-jenny-jenny! Always with the "just wait, our future AMD stuff is gonna whomp Intel right & proper!" :).

Tell ya what - for a nice change of pace, let's argue the past instead, m'kay?? :) So I'll go first: Intel's 4004 whomped the turds outta any CPU AMD had out in 1971!! 😀
 
PS - here's a diagram of the new Bulldozer architecture, esp. for Jennyh 😀

[image: 312px4004archsvg.png]
 