Is Intel pulling a Fermi?

We all saw nVidia earlier showing off a card that wasn't there; there are even pictures of its CEO holding up a fake.
Earlier this week, Intel showed off what some believed to be LRB. Some even wrote about it as such:
http://www.theregister.co.uk/2009/11/17/sc09_rattner_keynote/
While others took the time to point out the "mistake":
http://www.computerworld.com/s/article/9140949/Intel_to_unveil_energy_efficient_many_core_research_chip?taxonomyId=1
It's not hard to see that someone was pulling something or other.
 

jennyh

Splendid
It's been shown that ATI's cards can reach close to their theoretical max. The problem was never the hardware - it was the software.

The Chinese figured that out recently too, and that's why they are using 5000+ 4870s in their HPC system.


Don't get me wrong though - if Larrabee is making 1 TFLOP at 150W then it's doing better than it was. A lot better. There are still far too many unknowns to make a prediction, but even if you take it at best case:

16 cores overclocked to 2GHz getting 1 TFLOP. Doubling either the core count or the clock would of course get to 2 TFLOPS under ideal conditions.

Double the cores, get to 3GHz, and we're almost at 3 teraflops SP - so just better than the 5870, assuming perfect scaling. Forgive my cynicism, but one of these GPUs actually exists while the other barely even exists on paper.
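For reference, here's a rough back-of-the-envelope sketch of where those numbers come from, assuming each Larrabee core has a 16-wide single-precision vector unit issuing one multiply-add (2 flops per lane) per cycle; the core counts and clocks are just the hypothetical ones from the paragraph above.

```python
# Rough theoretical peak SP throughput: cores x clock x SIMD lanes x flops per lane per cycle.
# Assumes a 16-wide SP vector unit doing one multiply-add (2 flops) per cycle per lane.
def peak_sp_gflops(cores, clock_ghz, simd_width=16, flops_per_lane=2):
    return cores * clock_ghz * simd_width * flops_per_lane

print(peak_sp_gflops(16, 2.0))   # ~1024 GFLOPS -> the demoed "1 TFLOP"
print(peak_sp_gflops(32, 2.0))   # ~2048 GFLOPS -> doubling the cores
print(peak_sp_gflops(32, 3.0))   # ~3072 GFLOPS -> 32 cores at 3GHz, best case
```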

Now I believe this Larrabee we are seeing is already at 32nm - we are seeing the 'best case' from Intel here - why wouldn't they show us their very best? You can be sure this SGEMM demo was fully optimised too. In DP, Larrabee might be onto something eventually, but in SP it's already at least a generation behind (on a superior process node).
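For anyone wondering what an SGEMM demo actually measures: it's a single-precision matrix multiply, and the quoted GFLOPS figure is just 2·N³ operations divided by wall-clock time. A minimal sketch of that bookkeeping, using numpy on the CPU purely for illustration:

```python
import time
import numpy as np

# SGEMM = single-precision general matrix multiply: C = A * B.
# The standard flop count for an N x N multiply is 2*N^3 (one multiply and one add per term).
N = 2048
A = np.random.rand(N, N).astype(np.float32)
B = np.random.rand(N, N).astype(np.float32)

start = time.time()
C = A @ B
elapsed = time.time() - start

gflops = 2 * N**3 / elapsed / 1e9
print(f"{gflops:.1f} GFLOPS sustained")  # compare against the theoretical peak
```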
 
This is something to consider as well. Why would LRB release on the 45nm node in late Q2 2010?
LRB is arguably the most expensive project Intel's ever had, unless Intel is waiting for a refresh to go up against the 28nm HKMG cards coming?
I know Intel's never been serious about gfx before, and always had their gfx solutions on old nodes, but LRB too?
 

jennyh

Splendid
There is no way Larrabee will debut at 45nm. Intel are long since past making chips on 45nm, and Larrabee, with all the cash that has been thrown at it, will have taken up a large chunk of 32nm R&D.

Anything you read about Larrabee at 45nm is old news; there just isn't any point. This Larrabee we are seeing making 1 teraflop here is one of the better 32nm test chips - and I would almost guarantee that.
 

jennyh

Splendid
Making use of old tech is fine for low-end stuff, but not when you are trying to make an immediate impression in a market you haven't really existed in until now.

Intel need to impress people. The name only carries them so far, especially when we are talking about extremely smart people who actually look at the numbers before making a purchase.

If you assume Intel have been in 32nm production for at least a year, that the old Larrabee people were kicked out and replaced, and that the stories about Intel ploughing all their best people and more money into Larrabee are true -

You can pretty much be certain that they have some working Larrabee 32nm test chips at the very least. This is Intel's one and only advantage right now, and one they must make the most of. To me, both Larrabee and Fermi are at the exact same stage - both are oversized, power-hungry and underperforming, and that is why neither is available now.

As an AMD fangirl, though, it's easy for me to think that's the case.
 

C4PSL0CK

Distinguished
Nov 23, 2009


Barely exists on paper? Since when is paper able to run a benchmark? Yet when there actually is a product that fits that description, your standards seem to shift radically. Obviously you know I'm referring to Bulldozer. I'm not saying BD will be crap, I'm looking forward to it as well, but you seem to have a double standard.

Somehow I find it difficult not to be impressed by a 16-core Larrabee running at 2GHz doing 1 TFLOPS, especially when there will be 32- and 48-core versions. Also, I find it weird you're comparing 16 cores to an HD5870 or HD5970. Single chip vs dual chip, low/mid-end card vs high-end card?

How about pricing? The HD4870 obviously couldn't match the GTX285, but it was the pricing that made it so successful. Bang for buck, the performance/price ratio. What makes you think that Intel, which is trying to break into the GPU market and knows it'll have to go up against two titans with a lot more experience, will price its GPU products nVidia-style?
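As a side note, "bang for buck" is just performance divided by price. A minimal sketch of that comparison, where the launch prices are approximate and the performance index is arbitrarily pegged to the GTX 285, so treat the exact numbers as illustrative only:

```python
# Performance/price ratio: higher is better value.
# Prices are approximate launch MSRPs; the performance index is illustrative, not measured.
cards = {
    "GTX 285": {"perf_index": 1.00, "price_usd": 400},
    "HD 4870": {"perf_index": 0.85, "price_usd": 300},
}

for name, c in cards.items():
    value = c["perf_index"] / c["price_usd"]
    print(f"{name}: {value * 1000:.2f} perf per $1000")
```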

As for jaydee, why do you keep saying Larrabee and Fermi fail because there's a refresh coming up in 7 months? Let's not forget the R9xxx is a new architecture; it was supposed to be the HD5xxx but was pushed back. How often have we NOT seen delays on new architectures? Bulldozer, Fermi, possibly Larrabee - the list goes on and on.

Recognize potential. Maintain balance between optimism and pessimism. Retain ability to think before posting. These are guidelines for a bright future in ForumLand.
 
If you remember, I've said many times that it only gets tougher for anything that's late, and a refresh makes it tougher still. The 280 vs LRB comparison was made how long ago? And now? It's twice as good as a 285? It's still not here, and neither is Fermi, though it's been said Fermi will be twice as good as the 285 in games alone, and many times better in GPGPU usage.
Seven months from now, ATI will at least have cards out that are most likely 15-20% faster, and that's if they don't have something else hiding in the wings.
My whole point with LRB is that its refresh rate has to keep up: every 9 months or so, give or take, then a whole new approach after two gens, much like tick-tock.
It's those in-between refreshes, plus the HKMG issue I also mentioned, which will dramatically improve both ATI's and nVidia's solutions. In the meantime, if you read my latest link, it appears Llano is coming much sooner than thought, which puts pressure on LRB and on Haswell, as that won't be here for quite a while; meanwhile the competition will have several refreshes/improvements made and selling.
For all the hoopla, LRB needs to be out, and soon, as no one just stands still in the gfx market, which, as I've pointed out, moves faster and further than what we've seen in CPUs.
I liken it to the early days of CPUs, where we did see huge improvements from one model to the next; GPUs have a few gens to mature, IMHO, before we see the more static increases in perf we've seen on CPUs.
That's what LRB is facing, and it isn't here, only demos, and no one really knows if it's Polaris or LRB, though I'm sure it's LRB, going by the YouTube link, as Intel said it was.
The numbers shown aren't that impressive if you've read the excellent article over at Tech Report by Rys on Fermi, which shows some major cache changes and its ECC abilities, and my other links showing Fermi may be doing tiling as well as LRB, so again, no biggie.
It appears that even without much effort, ATI, who doesn't even have a dog in this particular race, is producing larger numbers with its gfx cards, and that on older tech.
My link shows AMD's real direction, with release dates moved up and APU usage just around the corner.
LRB needs to be here now, not later, not five months from now, but today.
Carrying the Intel name means nothing in old, established markets such as gaming; for LRB to make an impact, it's all about perf.
And as to the new GPGPU area, it's a free-for-all, with no name recognition to help here either; again it comes down to perf, with others already getting the jump on LRB in some markets, like nVidia with their CUDA solutions and Llano with its APU approach.
And it grows at the GPU pace as far as perf goes, not the CPU pace.
As to the 9 series, it's easy to see that delaying it fits the BD roadmap, as they'll need to rebalance their whole lineup, since this will affect their GPU sales.
Also, I'm sure implementing their particular GPGPU approach in discrete cards is going at a more leisurely pace, as they have the entire platform and obviously want that out first.
 

yannifb

Distinguished
Jun 25, 2009
Just want to point something out to JDJ - Avira picked up some viruses on that link you posted, just a little warning.

Also, does anyone take into account that Bulldozer's transistors are going to be different (not just in size)? According to Global Foundries, the new gate-first approach and high-k metal gates are going to be something special (it's on their website if you want to check it out).

But what's so special about 'gate first' and high-k metal gates?
 

jennyh

Splendid
Barely exists on paper? Since when is paper able to run a benchmark? Yet when there actually is a product that fits that description, your standards seem to shift radically. Obviously you know I'm referring to Bulldozer. I'm not saying BD will be crap, I'm looking forward to it as well, but you seem to have a double standard.

OK, what I meant was, we aren't seeing any Larrabee papers with closed specs. It's all well and good having a few chips that can do wonderful things, but that doesn't make it a viable product.

We need to see some cold hard facts about Larrabee, and soon. As a further example, Evergreen was rumoured to have closed specs over 2 years ago. What we are seeing about Larrabee right now is a mish-mash of different ideas with conflicting reports on progress. That isn't good no matter how you look at it.

Somehow I find it difficult not to be impressed by a 16-core Larrabee running at 2GHz doing 1 TFLOPS, especially when there will be 32- and 48-core versions. Also, I find it weird you're comparing 16 cores to an HD5870 or HD5970. Single chip vs dual chip, low/mid-end card vs high-end card?

As I mentioned before, 1 teraflop is 18 months old now. The 4850 was capable of it - it just didn't have the software to make use of it, a failing on ATI's part tbh. Otherwise, the hardware is sound. If you want to talk about cores, then the 4870 has 800 of them. We could be here all night arguing the definition of a core, however, so I'd rather just assume that 'cores' are meaningless - whether 16 Larrabee or 1600 Evergreen.
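To put that in context, here's the same peak-throughput arithmetic for ATI's parts, assuming each stream processor does one multiply-add (2 flops) per clock; the clocks are the stock ones for those cards as far as I recall, so take the exact figures as approximate:

```python
# Theoretical peak SP GFLOPS = stream processors x 2 flops (multiply-add) x clock in GHz.
def ati_peak_sp_gflops(stream_processors, clock_ghz):
    return stream_processors * 2 * clock_ghz

print(ati_peak_sp_gflops(800, 0.625))   # HD 4850 @ 625MHz -> 1000 GFLOPS, the "18 month old" teraflop
print(ati_peak_sp_gflops(800, 0.750))   # HD 4870 @ 750MHz -> 1200 GFLOPS
print(ati_peak_sp_gflops(1600, 0.850))  # HD 5870 @ 850MHz -> 2720 GFLOPS
```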

How about pricing? The HD4870 obviously couldn't match the GTX285, but it was the pricing that made it so successful. Bang for buck, the performance/price ratio. What makes you think that Intel, which is trying to break into the GPU market and knows it'll have to go up against two titans with a lot more experience, will price its GPU products nVidia-style?

Can you point to any place in the market where Intel are operating on price/performance? Even in the low end, Intel are selling CPUs for more than AMD's superior ones. Intel have never operated on price/performance and never will. Larrabee will be no different - if it's a total flop hardware-wise, they will still recoup some cash selling it at ludicrous prices to people who lap it up without question.
 

jennyh

Splendid
The keyword here is 'envelope'.

There are a lot of people expecting miraculous things from Larrabee - but Intel cannot break the laws of physics. They, like AMD and Nvidia, are operating under certain constraints that can be pushed hard but not broken.

ATI are pushing the envelope as hard as it can currently go. That is why they managed to get a 300W part out in the 5970 - even though they had to deliberately underclock it in order to do so.

Nvidia cannot magically drum up a better 300W part - not *that* much better. If they can, it will be a tiny bit better. Intel are the same. Larrabee could end up being the best yet - but it will never be so much better as to obsolete the rest. Intel are not about to magically produce some wonder GPU that defies the regular restrictions, because if that were possible then both Nvidia and ATI would have been doing it long before now.

Forget about your wonder chip from any company because they are all operating close to the maximum already anyway.
 
It's just a different approach from how Intel does it. I can't remember if there were claims of better doping/temps/leakage or whatever, though I do know gate-first is harder to do, and I believe Intel didn't think it was worth the effort back when they decided how they'd implement HKMG.
It could also be that Intel knew there'd be no one else using HKMG for a while, and took the easier, less costly approach.
 

yannifb

Distinguished
Jun 25, 2009


So do you think the new approaches will increase IPC and in turn performance?
 
For both GPU and CPU, IPC won't change until architecture changes are made, but what it will do is allow for higher clocks and/or lower temps/power usage.
In gfx you know it'll be the clocks; for CPUs, which play in a different market where they want things to shut down and not run pedal to the metal at all times, it's a little of both.
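The relationship being assumed here is the usual one: performance is roughly IPC times clock frequency, so a process that only buys clock speed lifts performance without touching IPC. A trivial sketch with purely hypothetical numbers:

```python
# Rough single-thread performance model: instructions per second = IPC x clock.
# A process improvement (e.g. HKMG) that raises the clock lifts performance
# even if the architecture, and therefore IPC, stays the same.
def perf(ipc, clock_ghz):
    return ipc * clock_ghz  # billions of instructions per second

same_arch_old_node = perf(ipc=2.0, clock_ghz=3.0)   # hypothetical numbers
same_arch_new_node = perf(ipc=2.0, clock_ghz=3.6)   # +20% clock from the new process
print(f"{same_arch_new_node / same_arch_old_node:.2f}x")  # -> 1.20x, with no IPC change
```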
 

yannifb

Distinguished
Jun 25, 2009


Well, higher clocks and lower temps (in addition to the transistor shrink to 32nm) will be a great combination. The Bulldozer architecture looks completely new compared to prior archs. I have high expectations for it.
 


Don't underestimate nVidia. During the past few years they have easily pushed ATI, mainly because they utilize a separate shader clock. So while they may not have a core clock near 1GHz, their shaders run at ~1500MHz while ATI's sit at whatever the core clock is.

That in turn gives them better performance for near the same power at a much lower SP count than ATI.
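For a concrete illustration of the shader-clock point, here's the peak-throughput arithmetic for both designs, using the commonly quoted clocks and per-SP flop counts (nVidia counting a dual-issue MAD+MUL as 3 flops per shader, ATI a MAD as 2 flops per stream processor), so treat the exact figures as approximate:

```python
# Peak SP GFLOPS = shader/stream processor count x flops per SP per cycle x shader clock (GHz).
def peak_gflops(sp_count, flops_per_sp, shader_clock_ghz):
    return sp_count * flops_per_sp * shader_clock_ghz

# GTX 285: 240 SPs at a separate ~1.476GHz shader clock, MAD+MUL counted as 3 flops.
print(peak_gflops(240, 3, 1.476))   # ~1063 GFLOPS
# HD 4870: 800 SPs at the 750MHz core clock, MAD counted as 2 flops.
print(peak_gflops(800, 2, 0.750))   # ~1200 GFLOPS
```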

Sad to say, but ATI is pushing their limits pretty far and needs breathing room if they want to survive.

As for a refresh, if it's a true refresh then it will be just an update, like a stepping update. I don't see how it will magically change performance unless they plan a whole new GPU. It's still hard to make a profit when you change things that fast.

Oh, and BTW, Intel isn't following the restrictions that ATI/nVidia do because their approach is completely different. They are not going the same route, therefore the restrictions change. In fact, more than likely they have a bit looser belt than ATI/nV do because of it.



Probably. I would expect Intel to go that way first. It's a better way to get their hands on the tech first and gain experience with it, making their next step even better.

GF going straight for the harder approach is not the best way. Complications come in and you probably won't end up with the most optimized process. But I think Intel won't have the same troubles, since they have been using HKMG for a few years now.
 

jennyh

Splendid
I wouldn't say they have a looser belt, jimmy, based on what we actually know.

They are aiming for 150W, Fermi is going to be ~225W (Evergreen 188W). However you look at it, that gives a huge lead to Fermi in terms of 'per chip'. (Note this assumes Larrabee is actually 45nm - if it's 32nm and 150W, then this 1 TFLOP SP figure seems low, as I mentioned before.)

Also, in that demo they went from 417 GFLOPS with 8 cores to 825 GFLOPS with 16. If the scaling really doesn't drop off until 40 cores or so, that should have been 834 GFLOPS. It's a small drop, but it is there.
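A quick sketch of the scaling arithmetic behind that observation, using the GFLOPS figures quoted above:

```python
# Scaling efficiency when doubling the core count: measured vs. the ideal 2x.
gflops_8_cores = 417.0
gflops_16_cores = 825.0

ideal = gflops_8_cores * 2            # 834 GFLOPS if scaling were perfectly linear
efficiency = gflops_16_cores / ideal  # ~0.989, i.e. roughly a 1% shortfall
print(f"ideal: {ideal} GFLOPS, measured: {gflops_16_cores}, efficiency: {efficiency:.1%}")
```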

Any advantage Intel has is on process, and that is going to last a year at most. If Larrabee is to be successful at all, it needs to be out before Global Foundries or TSMC get their 28nm HKMG ready. Rumour has it glofo's is already at 50%+ yield, and I wouldn't be surprised if ATI were shrinking Evergreen at this very minute.
 
Reread those old posts and you'll find it was hyped by people who said it was needed, which it obviously wasn't, as we currently see the fastest CPU ever on a non-HKMG process, and people said AMD had to have it. Nuff said.
Funny how things change, eh?
At the time, I believed those same people, who took offense to my early talk about it before P2 was released, as I had no "proof" either way, but they didn't like to be wrong, even when they were.
I haven't mentioned this at all, nor rubbed it in when I did actually have "proof", and I find it interesting that you're bringing it up now.
I went with what AMD claimed - that they'd not need it until 32nm. Do you disagree with me and AMD?
If it weren't for a few, my message wouldn't have had to be pounded in at the time, and if you still think I said HKMG never had decent returns, go ahead and put it up there, because I've never said that; I was, and obviously still am, looking toward its implementation by nVidia, ATI and AMD as well.
Why so bitter now?
It wasn't HKMG I was pointing at, it was the misguided hype about it, and that's all.
Look, I've shown how tough it's going to be for LRB, and true, we don't know if LRB will hit it out of the park; I'm just pointing out how hard it'll actually be, and that puts Intel in an unfamiliar position. I believe it's that, and that alone, why I've been continuously discredited here.
It's just the way it is, nothing more, and if this is personal to some, then my only explanation is that they either own stock in or work for Intel.
I'm like everyone else here, trying to get the facts, nothing more. And I mean nothing more.
Presenting what I know and what we've seen, and how it relates to nVidia's current presentation of Fermi, there are many parallels, and those are facts too, even if some somehow manage to take it personally.
LRB is Intel's version of Fusion; it'll do fine in GPGPU scenarios, and it'll be interesting to see how it fares on-chip against AMD's solutions, and that's exciting to me.
But when it comes to gaming, I'm just showing how tough it's going to be, and it seems almost too much for some to grasp that Intel may have a huge struggle on their hands.
 

BadTrip

Distinguished
Mar 9, 2006
I am not bitter at all. It just seems that you flip-flop more than a politician.

(attached images: flipflop.jpg, mccain.jpg)