Is Intel pulling a Fermi?

We all saw nVidia earlier showing a card that wasn't there; we even have pictures of its CEO holding up a fake.
Earlier this week, Intel showed off what some believed to be LRB. Some even wrote about it as such:
http://www.theregister.co.uk/2009/11/17/sc09_rattner_keynote/
While others took the time to point out the "mistake":
http://www.computerworld.com/s/article/9140949/Intel_to_unveil_energy_efficient_many_core_research_chip?taxonomyId=1
It's not hard to see someone was pulling something or other.
 
What's really funny here is, I've heard about both being in the wild, and both rumors come from reputable sources, such as Phil Taylor.
So, again, just like Fermi.
The biggest difference I can see is, if Fermi and LRB both fail because the market they're trying to improve upon never gels, at least LRB will later be implemented in Intel's designs. That still leaves Fermi out in the cold, since AMD will have their Fusion solution as well.
 

jennyh

Splendid
I'm just trying to point out something that most people might be missing.

*If* Larrabee/Fermi is slower than ATI's current lineup, neither Intel nor Nvidia will release it. To be seen as behind the 'budget' builder? That cannot be allowed to happen.

Is it just a coincidence that neither Larrabee nor Fermi has been released yet, or is it because they are both behind ATI?

Maybe they are both too busy worrying about each other to notice ATI grabbing more and more supercomputer business, due to the FACT that ATI already has ~5 systems in the Top 500 list?
 

yomamafor1

Distinguished
Not surprised at all. OP has been touting the absolute demise of Larrabee for months now, and we don't even have any performance figures on it. On the other hand, he claims GPGPU will apparently be more important than the CPU for general consumers in the future, despite the fact that most industry insiders agree that GPGPU mostly caters to supercomputers.

For Jenny...
[image: beating-a-dead-horse.gif]
 

BadTrip

Distinguished



FACT: Bulldozer has severely missed its target window.

FACT: LRB is not due yet.

[image: demotivational-11-600x400giveup.jpg]
 
I'm trying to use what's known.
I also disagree with the GPGPU market as you see it. I want it to take off and do well, but that doesn't change what I've said: until it does, there'll be no ROI for LRB unless it games well, and it doesn't look like a gamer.
The same can be said somewhat of Fermi, though we do know how that GPU approaches gaming.
Besides, if you'd followed what I've said, the implementation of LRB combined with a CPU, allowing for an APU scenario, will work and doesn't need a new market. And ATI currently holds a few of the top HPCs on the planet, using the old gen.
Now, if that's the demise of LRB, it's because I simply don't think it's a gamer. The ROI will be extremely low, at least until the other two scenarios, if both happen, turn out.
But some here can't see Intel failing at anything they do, because they also think Intel was innocent, which goes against what everyone else and their brother thinks.
Some are that blind. I'm saying that in the gaming market, I think LRB will be a dud; its only saving grace will be its scalability, but not for what most people think, RT.
I've shown how poorly it scales past 32 cores, so RT or even RC isn't going to happen any time soon.
So yes, I can see LRB failing as a GPU; it'll need GPGPU and, later, a role as the accelerator in an APU/CPU combo.
But go ahead, drink the Kool-Aid and deny what I say as being something else because it speaks ill of your precious Intel. I couldn't give a rip about AMD, nVidia, or whomever, and the need to cheerlead for any of these entities doesn't reside within me, unlike others.
 


Why wouldn't they surpass AMD in a try or two? After all, they accomplished that easily enough when they went from P4 to C2. Oh wait - that's outdated tech! :D

At least you were right about one prediction so far - even Intel is now predicting brain chip implants, according to the front page here on Tom's. :)

Still no stories on magic sizzling microwave ovens however :D.
 


Sorry, but as much as I love ATI, even their current GPUs fall behind nVidia's old gen in some games. It's not like ATI is completely dominating them. In fact, if you look at the competition tree:

HD2K <-> G80
HD3K <-> G90
HD4K <-> G200
HD5K <-> G300

While that's how the competition lines up, the G200 series can still give the HD5K series a run for its money, and it's old tech. And as much as I think nVidia is just full of crap, don't underestimate them, or Intel.

People always know not to underestimate AMD or ATI, but they always underestimate the big companies, thinking they can't pull something out to compete. If G300 blows the HD5K out of the water, I am fine with that. It means the HD5K will be cheap as hell and I can move to DX11 sooner.



You should look at your own link. He clearly states a 2009/2010 timeframe. It's always been that way: they were trying for 2009, but if not, they would go for 2010. It was never a 100% guarantee.

BTW, most NDAs are pretty tight. Those who break one risk quite a bit, since they signed it. Some are willing to, others not. Maybe Intel has only given it out to those who don't leak info....



As do I.

But here's what I want to know: why in the living hell are we still judging gaming performance (the main purpose of the GPUs we use) by GPGPU performance?

ATI's HD4K series blows nVidia's G200 out of the water in terms of TFLOPS, yet it doesn't in games. So if LRB is set for at least 1 TFLOPS, does that mean it will suck in games? No.

I guess I should start doing the same with CPUs. Look, Intel's Core i7 975 gets nearly 20GB/s of memory bandwidth, double the old C2Qs and Phenoms, thus it must be the best!!!!!!
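
To show how little the paper numbers mean, here's the back-of-the-envelope math (a rough sketch using the commonly quoted shader counts and clocks; nothing here is an official figure):

```python
# Theoretical peak single-precision throughput: ALUs * flops per ALU per clock * clock.
# Specs below are the commonly quoted ones; treat the results as illustrative only.
def peak_gflops(alus, flops_per_clock, clock_ghz):
    return alus * flops_per_clock * clock_ghz

hd4870 = peak_gflops(800, 2, 0.750)   # 800 SPs, MAD = 2 flops, 750 MHz   -> ~1200 GFLOPS
gtx280 = peak_gflops(240, 3, 1.296)   # 240 SPs, MAD+MUL = 3 flops, 1296 MHz -> ~933 GFLOPS

print(f"HD 4870 ~{hd4870:.0f} GFLOPS vs GTX 280 ~{gtx280:.0f} GFLOPS")
# ~30% more paper FLOPS for the HD 4870, yet the GTX 280 still won plenty of games.
```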
 
Truth be told, when it was first announced, it was '09.
Since then, everyone (the writers) has said it's been pushed back, from their understanding.
It's there, look.
Not my fault, not a misunderstanding or anything.
There's been the speculation about Gelsinger and his leaving as well...
Now, this is off topic, but jennyh does have a point. Sounding like the AMD fans did about Barcy, saying this and that when it's the writers who've said different, that's OK, I've seen it all before heheh.
Facts are, they've got 4 months, then it's a fail on the release date.
I'm looking forward to its release, not for GPU usage, as I've already said what I expect there, but for GPGPU usage, and to see how and whether this market becomes more than it currently is.
Same for Fermi, though I'll be more interested in some ways, because with Fermi we may see more of the blending of CPU/GPU, whereas with LRB it'll only be about connections, which we have in some ways already.
The Fermi approach may show some clues as to AMD's future approach, and what a GPU can do against a CPU with high-throughput ability vs. a multi-CPU setup with high throughput, which we've seen since the IMC and the double cheeseburgers: two differing approaches, but still the same thing in the end, same/same.
The so-called QPI-with-IGP speedup and its subsequent "advantages" are nothing compared to what AMD is attempting, and that's exciting for me.
Leaving it all to the CPU is not only boring, but limiting as well.
 

jennyh

Splendid
It's the blind faith of the Intel fanboys that makes me shudder, tbh. On the basis of absolutely nothing worth talking about, nothing seen in demos or, hell, even in leaked figures, Larrabee is supposedly going to beat ATI and Nvidia at the game they have been playing for years longer.

Intel can fail, and they are failing with Larrabee. How obvious does it have to be? They are using an 80-core test chip that has cooling issues for demo purposes when they should be demoing Larrabee. If Larrabee were even close to the figures the test chip can do, they would be demoing it instead.

This could be Intel's entire future at stake. When that Chinese supercomputer entered the Top 500 at #5, powered by 5,000 ATI 4870s... a lot of people sat up and took note. Intel needs this to work, or a huge chunk of the supercomputer business is going to be lost to both ATI and Nvidia. Fermi is late, but Nvidia will get something decent out eventually. I'm not convinced Intel will.
 

jennyh

Splendid
Oh come on. If this 80-core wonder chip has such a low TDP and was capable of 1 teraflop almost 3 years ago, where the hell is Larrabee?

Feb 2007 - http://www.computerworld.com/s/article/9011079/Intel_tests_chip_design_with_80_core_processor

Running at 3.16 GHz, the new chip achieves 1.01TFLOPS of computation -- an efficiency of 16GFLOPS per watt. It can run even faster, but loses efficiency at higher speeds, performing at 1.63TFLOPS at 5.1 GHz and 1.81TFLOPS at 5.7 GHz.

November 2009 -
On the SGEMM single precision, dense matrix multiply test, Rattner showed Larrabee running at a peak of 417 gigaflops with half of its cores activated (presumably the 80-core processor the company was showing off last year); and with all of the cores turned on, it was able to hit 805 gigaflops. As the keynote was winding down, Rattner told the techies to overclock it, and was able to push a single Larrabee chip up to just over 1 teraflops, which is the design goal for the initial Larrabee co-processors.

3 years to go backwards? Makes Bulldozer look good. :p

Something isn't adding up the way Intel would like us to believe.
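
Putting rough numbers on the two quotes above (the 2007 chip's wattage is inferred from its stated 16 GFLOPS/W; nothing else here is new data):

```python
# Rough reading of the figures quoted above. The 80-core chip's power draw is inferred
# from its stated efficiency; the Larrabee numbers are the SC09 SGEMM results as quoted.
terascale_tflops = 1.01                           # Feb 2007, 3.16 GHz
terascale_watts = terascale_tflops * 1000 / 16    # 16 GFLOPS/W -> ~63 W

larrabee_sgemm_gflops = 805                       # SC09, all cores, stock clock
larrabee_oc_gflops = 1000                         # ~1 TFLOPS after the on-stage overclock

print(f"2007 research chip: {terascale_tflops} TFLOPS at ~{terascale_watts:.0f} W")
print(f"2009 Larrabee SGEMM: {larrabee_sgemm_gflops} GFLOPS stock, ~{larrabee_oc_gflops} overclocked")
# On paper, the 2009 part only catches the 2007 number once it's overclocked.
```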
 


Terascale != Larrabee...

Larrabee is a GPU that has 2 graphics vector units per CPU core for rendering, whereas Terascale was merely a many-core CPU experiment. Two different things, and each of them novel as well - geez, howzabout dat?? Intel didn't copy AMD for once :D

As for Bull-Snoozer(TM) looking good, I don't see any tests or demos whatsoever for this magical CPU that AMD was flogging on their vaporware roadmaps some 4 years ago, before they put it out to pasture and then figured they might as well try milking it again :D.

Intel has been on a roll for the last 3.5 years now - AMD has only crawled outta the bucket into being semi-competitive in the last year. I'd place my bets on Intel's racehorses continuing to win, place & show, not some bull from AMD :D.
 
Sounds like an IMC issue to me heheh.

Like I said, it comes down to approach, as Intel's had the HW, most of it, for some time.
It's the SW implementation that's holding them back, whereas it's the opposite for Fermi.
We've seen both companies successfully deliver nice HW, and nVidia delivers nice SW, whereas Intel's struggled here, especially for gfx. I'd say it's pointing towards SW issues.
 

jennyh

Splendid
To me it's hardware. Intel didn't realise the scaling would drop off so badly after 40 cores. That leaves them with the option of increasing the core speed, until they hit the GHz barrier again.

Sooner or later it's going to be stuck under the law of diminishing returns. Imo, that's already happened, and Larrabee was never good enough compared to what ATI and Nvidia could do.

So Intel gets to 32nm first and makes a 40-core Larrabee running at 3GHz per core. That is as good as it gets, and if it isn't already good enough to beat ATI and Nvidia, it never will be.

Imo Larrabee is a dead project, and Intel will try to save face by claiming they made numerous breakthroughs while developing it.
 
Well, there are the real problems.
Intel's touting its "just wait, it'll get even better" line when everything points towards bad scaling.
If they're at 3GHz and still meeting the TDP requirements, there again is the CPU brick wall, and again is why I'm saying we need better CPUs.
So, more cores becomes one limitation, core speed becomes the other, and still there's the emulation problem, and no one knows how well that is going to play out in games.
Understand, for GPGPU uses, I think LRB will be fine, and I'm extremely interested in this aspect of it.
But for gaming, LRB has to go the extra mile, as gaming per se isn't really directly an x86 type of usage in this instance. So it'll have to emulate against what's already been established and has legs/experience.
So, again, if it fails at gaming, it fails all around.
GPUs are reaching a brick wall as well, though this may change too. BW has always been the problem; GPUs don't need new solutions for multicore to work, they've existed that way since day 1, so it comes down to whether they can get communication and throughput handled quickly enough.
nVidia is stuck doing CPU/GPU communication over the PCIe connection, again limiting it to whatever PCIe limitations currently exist.

Intel and AMD have no such limitations, as they have options for CPU/GPU communications, which we won't truly see till AMD's Bobcat, for instance, shows up.
Other GPU limitations are GDDR speeds, but this applies across the board to an extent, and David Kanter has had a few good things to say about it, though there are other approaches/HW possibly available.
The newer GDDR5 coming, which is denser than anything we've seen and runs at an effective 6GHz quad-pumped, may move a few things in gaming in a different direction, meaning much higher-resolution textures than we've become accustomed to.
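
For the bandwidth point, the arithmetic is straightforward (bus widths below are just examples, not a claim about any particular upcoming card):

```python
# GB/s = per-pin data rate (Gbit/s) * bus width (bits) / 8 bits per byte.
# Bus widths are examples only; the 6 Gbps rate is the "6GHz quad-pumped" GDDR5 mentioned above.
def bandwidth_gb_s(rate_gbps_per_pin, bus_width_bits):
    return rate_gbps_per_pin * bus_width_bits / 8

print(bandwidth_gb_s(4.8, 256))   # today's 4.8 Gbps GDDR5 on a 256-bit bus -> 153.6 GB/s (HD 5870)
print(bandwidth_gb_s(6.0, 256))   # 6 Gbps parts on the same bus           -> 192.0 GB/s
print(bandwidth_gb_s(6.0, 384))   # 6 Gbps on a wider 384-bit bus          -> 288.0 GB/s
```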
What does this mean for GPUs and LRB?
LRB is limited to its internal programming, and seeing as Intel is trying to go as green as possible, this scenario just doesn't play well doing gfx, which is why we see such high power usage in GPUs.

I find it ironic that Intel props up LRB this way, knowing that the more intensive a game becomes, the more likely LRB will be completely strapped, with no power savings available to it.
This may or may not point towards LRB being made for mediocrity in gaming, or Intel not being fully honest.

If gaming isn't a win for LRB, then it'll struggle for existence, as the GPGPU market simply won't be enough for its expected ROI, especially if we see these walls coming soon.

PS: If the upcoming changeover to HKMG for GPUs finds ways to higher clocks, I'd just like to point out: imagine a 5970 running at even 1.6GHz on its core.
 

ElMoIsEviL

Distinguished

The only person relying on blind faith here is you, Jenny. There is no evidence that Larrabee is a failure or that it doesn't exist. Do you want to know on what basis some are stating that Larrabee could beat ATi and nVIDIA?

Computational performance. Nobody said anything about graphics (meaning games). We're talking about computational performance. When we quote things like GFLOPS and TFLOPS, we're talking about computational performance.

Larrabee is an x86-based architecture; therefore, unlike Fermi and RV870, it contains full cores. ATi calls its equivalent units SIMD Engines (to qualify as a full core, the core itself must be able to fetch its own instructions). Take RV870, for instance: it contains 20 SIMD Engines, while Fermi contains 16 full cores.

It makes for a far more efficient design in terms of threaded applications, but it does come at a cost: full cores are significantly larger.

Other benefits include full compatibility with the major programming languages (no workarounds needed and no need to rely on the host processor, the CPU). So from a theoretical standpoint, Larrabee is a superior computational device; only time will tell (when it is released) how well it performs.
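
A rough peak-compute sketch of the three layouts being compared (only the RV870 figures are shipping specs; the Fermi clock and the Larrabee core count/clock are placeholders, since neither was public at the time):

```python
# Paper peaks for the three organisations. RV870 uses its published specs; the Fermi
# shader clock and the Larrabee core count/clock are assumptions for illustration only.
def peak_tflops(units, lanes_per_unit, flops_per_lane_per_clock, clock_ghz):
    return units * lanes_per_unit * flops_per_lane_per_clock * clock_ghz / 1000

rv870    = peak_tflops(20, 80, 2, 0.850)   # 20 SIMD engines * 80 ALUs, MAD, 850 MHz -> 2.72
fermi    = peak_tflops(16, 32, 2, 1.5)     # 16 SMs * 32 cores, FMA, ~1.5 GHz (assumed) -> ~1.54
larrabee = peak_tflops(32, 16, 2, 2.0)     # 32 x86 cores * 16-wide vector, FMA, 2 GHz (assumed) -> ~2.05

print(rv870, fermi, larrabee)
# Paper peaks land in the same ballpark, but only Larrabee's lanes sit behind full x86
# cores that fetch their own instructions -- that's the flexibility/area trade-off above.
```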



Last time I checked, a Tera-Scale processor is not a Graphics Processing Unit. Your entire post fails. It's like saying: "That tree has been capable of giving us apples for 3 years now... WHERE ARE THE ORANGES!" A ridiculous statement if I ever saw one. (HINT: An apple tree will not bear oranges.)



Who the hell said anything about scaling dropping off? You are making this *** up as you go, and it is irritating. Keep it up and you'll be right up there with the BaronMatrixes and Sharikous of the world.
 


What everything actually points to is the fact that nobody knows anything concrete about Larrabee. As a result, all of the people who think it will fail are pulling baseless accusations out of their backsides, and it achieves absolutely nothing.

It isn't late until it is actually late - if there's still no news after the second quarter of 2010 or so, then it is truly late. Until then, nobody really has any grounds to say much of anything, unless they work for Intel (and they would probably be risking their job to do so).
 
Don't have the link handy, but LRB doing cloth scaled fine beyond 32 cores; doing rigid bodies, though, which is really what we're looking for in gfx (unlike nVidia's PhysX, which is mostly cloth, fog, etc.) since that's what's mainly used, scaling does drop off beyond 32 cores.
Some will guess it's the way it's compiled, but having better compilers won't help; it's the core makeup, not the compilers. GPUs suffer from the same thing, as doubling shaders doesn't mean doubling perf, but shaders have yet to top out, since core speeds haven't topped out yet.
It's a combination, with TDP being the villain holding both back. There are other limitations to both solutions, but these are the most glaring, as there's really nothing that can be done about them unless there's a totally new approach, which we may see in Fermi, though I doubt it.
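
To see why piling on cores stops paying off, here's a minimal Amdahl's-law sketch (the 5% non-parallel fraction is purely an illustrative assumption, not a measured Larrabee figure):

```python
# Minimal Amdahl's-law sketch: if even a small slice of a workload won't parallelise
# (or is bandwidth-bound), adding cores past ~32 buys very little.
def speedup(cores, serial_fraction):
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / cores)

for cores in (8, 16, 32, 64, 128):
    print(f"{cores:>3} cores -> {speedup(cores, 0.05):5.1f}x")
# 8 -> 5.9x, 16 -> 9.1x, 32 -> 12.5x, 64 -> 15.4x, 128 -> 17.4x
# Doubling from 32 to 64 cores gains only ~23%, and the curve keeps flattening.
```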
Including HKMG in GPUs will be a huge boon, but no soup for LRB, as it already comes in with that advantage.
Process will no longer matter here, so Intel's familiar lead goes away as well.
It's a huge uphill climb for LRB, at least for gaming, but GPGPU may be another way to its success. Though, again, I stress the importance of gaming, at least for us; otherwise, it'll cost too much money to ever see it in our rigs until the CPU/GPU marriage occurs.
HKMG means a lot to GPUs, and I can't express enough just how important it is, as it'll amount to a new process shrink, where we see on average 40-60% increases. So the next iteration of GPUs may see huge improvements, obstacles I don't believe LRB can overcome.
To explain why LRB may not come to DT, it's like this.
While nVidia can justify its $1,800 to $3,200 price for Fermi, its GPUs won't have that cost.
Since LRB is so flexible, either Intel sells them at a fixed price point, or someone's sleeping.
If they plan on GPGPU sales at the competitive GPU price point, then the GPU sales will have to carry it, not GPGPU sales.
If they use the higher GPGPU pricing, it will be too costly for the average Joe's rig.
I'm probably wrong thinking it'll be introduced this way, but if LRB is as programmable as they say, they simply have no choice but to be successful as a GPU first.