Is Intel pulling a Fermi?

We all saw nVidia earlier showing a card that wasn't there; we even have pictures of its CEO holding up a fake.
Earlier this week, Intel showed off what some believed to be LRB. Some even wrote about it as such:
http://www.theregister.co.uk/2009/11/17/sc09_rattner_keynote/
While others took the time to see them pulling a "mistake":
http://www.computerworld.com/s/article/9140949/Intel_to_unveil_energy_efficient_many_core_research_chip?taxonomyId=1
It's not hard to see someone was pulling something or other.
 
Well, if we hadn't talked this far, jimmy, I wouldn't have heard about its possible co-introduction alongside Gulftown.
When I first brought up LRB, everyone said it wasn't important, while putting me down at the same time.
I'm correcting those thoughts, and bringing up its importance.
We don't have a lot of info on it, but in the other thread where it was brought up there were links showing some performance as well, and I'm using that tidbit of knowledge compared to what we have now, along with other opinions on its arch/approach.
I've also had to re-correct statements about RT and LRB, as that's smoke and mirrors and not what LRB means to us on DT, but rather an alternative for those who make movies etc. using RT, where I do see nice returns, and that's where my excitement for LRB lies.
So when I say I'm not so certain LRB will be a good gamer, and then also say RT isn't for gaming anytime soon, I'm not putting LRB down so much as correcting wrong assumptions, while also, within reason, explaining why LRB has so much to do to be competitive as a gamer.
It is important, you know it now, but it wasn't long ago that even you were saying it was for games, and that's NOT the main future usage here, as we see with Windows 8 using 3D, for example.
Intel's IGPs won't cut it for that; they need LRB, or they won't even be able to do the web, same as with Aero, which early on Intel's IGPs couldn't even handle.
If this sounds harsh, it's only what's happened, what needs to happen, and what's not going to happen, as in RT anytime soon, and LRB's part in it for DT.
Let's face it, tech is moving on, and Intel's IGPs simply won't be good enough, even for business. Between a netbook that does 3D and one that doesn't, which do you think is going to sell? Yes, LRB is that important, and no, not just for gaming. But LRB needs to be a good gamer as well, or we will only see it in its smaller makeup, on die, for the reasons I just gave, as a GPGPU version alone doesn't have a large enough market to stand on its own.
 

spud

Distinguished
Feb 17, 2001
3,406
0
20,780

Why is it that a mod is attempting to start a flame war?
Word, Playa.
 
Don't think you'll quiet the facts.
Don't assume what I've said isn't true and is only here as flamebait; as you know, LRB isn't meant to create raytracing for games, it's meant to use raster.
You also know that Intel not having LRB would be disastrous.
You also know there's little info, and what we do have shows scaling diminishes after 32 cores.
So, who's making the flaming comments here?
As to my comments to fazers, we have that rapport, and they weren't addressed to anyone else.
 
It's nice that you accuse me of flamebaiting, but you also haven't even given a reply to the OP.
There's plenty of evidence that LRB is, and has been for who knows how long, a paper dragon.
If that's flamebait, then prove it isn't.

The problem here is, people still believe that LRB is primarily for raytracing on DT.
People also think that LRB isn't anything more than a gfx card, or that its GPGPU functionality isn't important.
The detractors on this thread, BD etc., weren't from me, and my comments on them were to put things in perspective; they didn't flame or insult anyone, and were appreciated by the one who brought BD into the conversation.
The last posts I made were about the i9 pre-previews, in response to jimmy: does this mean we can now expect something soon, or not?
Again, you'll notice the lack of flamebaiting going on.
If you have any other comments or knowledge to offer, we'd all be happy to hear it.
 
http://techresearch.intel.com/UserFiles/en-us/File/terascale/Mayo_IEEE_VIS2009_FINAL.PDF
"Results Summary: Our parallel implementation of ray-casting delivers
close to 5.8x performance improvement on quad-core Nehalem
over an optimized scalar baseline version running on a single core
Harpertown. This enables us to render a large 750x750x1000 dataset
in 2.5 seconds. In comparison, our optimized Nvidia GTX280 implementation
achieves from 5x to 8x speed-up over the scalar baseline.
In addition, we show, via detailed performance simulation, that
a 16-core Intel Larrabee [26] delivers around 10x speed-up over single
core Harpertown, which is on average 1.5x higher performance
than a GTX280 at half the flops. At higher core count, performance
is dominated by the overhead of data transfer, so we developed a lossless
SIMD-friendly compression algorithm that allows 32-core Intel
Larrabee to achieve a 24x speed-up over the scalar baseline."

"the Fermi GPU is running 6.2 times faster, whether you look at interactions per second, or frames per second. Nvidia says that Fermi can run double precision up to 8x faster."

http://www.vizworld.com/2009/11/nvidias-fermi-sc09/
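To put the two quotes side by side, here is a quick back-of-the-envelope using only the speed-ups cited above; the figures are the paper's, and the averaging of the GTX280 range and the arithmetic are just illustrative, not anything Intel published:

```python
# Speed-ups quoted in the Intel ray-casting paper, all relative to the
# optimized scalar baseline running on a single Harpertown core.
nehalem_quad = 5.8            # measured: quad-core Nehalem
gtx280_range = (5.0, 8.0)     # measured: "from 5x to 8x"
lrb16_sim = 10.0              # simulated: 16-core Larrabee
lrb32_sim = 24.0              # simulated: 32-core Larrabee + SIMD-friendly compression

gtx280_avg = sum(gtx280_range) / 2                              # ~6.5x
print(f"Quad Nehalem: {nehalem_quad}x, GTX280 avg: {gtx280_avg:.1f}x")
print(f"LRB16 vs GTX280 (avg): {lrb16_sim / gtx280_avg:.2f}x")  # ~1.5x, matching the paper's claim
# The paper adds this is "at half the flops", so per-flop the gap would be
# roughly 3x, if the simulation numbers are taken at face value.
print(f"LRB32 vs LRB16: {lrb32_sim / lrb16_sim:.1f}x for 2x cores")
# Superlinear only because the 32-core figure also includes the compression
# fix for the data-transfer bottleneck the paper describes.
```

Note that the 6.2x Fermi figure from the vizworld piece is measured against its own baseline in that article, so it can't be dropped straight into the same comparison.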

This is a discussion, not a flamewar, and though it's been diverted at times, the topic has remained.
I haven't led away from the topic, merely commented on those diversions.
I just went back to some facts we do know about LRB and Fermi, as was alluded to earlier in this thread.
There have been diversions claiming we supposedly don't have any information on the topic, which is itself part of the topic. And no, I don't think we'll find out why the quiet surrounds LRB, the biggest thing coming out of Intel within three months' time, unless this stops, we get real numbers, and the waiting ends. Until then, it's paper, with hints here and there, and whether that Polaris chip is meant to act like LRB or not is something a thread like this can speculate on, as well as the little info we have on LRB vs Fermi, and any other hints of release.
 

roofus

Distinguished
Jul 4, 2008
1,392
0
19,290


I would respectfully disagree there. I think one or both of Fermi and LRB would release even if slower than the ATI offering. "Performance crown" is a short-lived title in graphics. You cannot pull the plug entirely if you don't come in first, and it isn't like the 5xxx series is worlds away from what was already on the market. Intel has enough capital to come out of the gate in first place if they are serious about it. They can buy the best engineers money can buy. I am not sold on LRB or convinced it will be worth a crap for gaming, or even compete with midrange ATI/NV offerings, but Intel can buy their way into a situation of their liking. Whatever brings me the most choices and pushes the rest to bring us better products is what I want. My prediction is LRB will not rear its head until Q3 2010, and while it will be better than anything Intel currently offers, it will be nowhere near the hype.
 

jennyh

Splendid
We heard plenty about Evergreen months before release. There is a difference here, however, with Larrabee and with Fermi, and it's this -

We already know that ATI and Nvidia can make exceptional graphics. Evergreen is a simple enough 'upgrade' on rv770 with some bells and whistles. Why do you think it's out months before the competition?

Fermi is built from scratch, like rv770 was. The difference here is that Nvidia are having real issues with it, whereas rv770 was perfection from start to end. On the other hand, it's double the shaders etc... we know it should perform a bit like a 5870, possibly slightly faster. Nvidia will release something eventually, and it will at least be close to ATI on performance.

Larrabee? We know Intel can't do graphics to save themselves. We know their drivers are rubbish, and we know this is a completely new movement away from the established means of creating graphics.

We know that the head guy on the project lost his job, we know there are many rumblings of discontent, we've seen the remarkably unimpressive demos, we know that Intel cross-licensed new ATI tech in the antitrust deal, and we know that Intel are ploughing money into yet another company (Imagination Technologies) with graphics tech.

You choose to believe what is the most likely scenario here.

 


Once again, a promise to hand out samples next year != a demonstration of working silicon, or even some preliminary test results on working silicon.

In contrast, Intel did demonstrate working silicon with Larrabee.

So, BD is even later than Larrabee by a year, and still no working silicon.
 


Jay, even the title you chose for this thread - "Intel pulling a Fermi" - is inappropriate unless you can provide some links showing that Intel faked the Larrabee demo with a mockup and passed it off as working silicon, which is apparently what NV did with Fermi.

And yes, I know I've done the same thing in the past :), but then I'm a known "fanboy" and thus it's OK for me to stoop to that level :D.

A mod, however, is held to (or maybe hung from :D) a higher standard... After all, we regularly see Jenny spout off about AMD engineers being superior, when common sense would tell us otherwise (higher pay = higher ability).

That is, when she's not spouting off about microwave ovens instead :D.
 
My comparisons aren't about anyone holding up a card, and never have been.
They are about the claims that the GPU is dead, and about defusing the talk that LRB may be merely a competitive part.
They are about how important LRB is, as opposed to "it's just a GPU."
It's about Intel showing look-alikes, nothing concrete.
It's about how important Fermi is to nVidia as well, yet we've seen nothing concrete, only look-alikes.
It's about Fermi talking up GPGPU and hardly mentioning gaming.
It's about the comparisons showing LRB doing GPGPU, and little about gaming, or gaming in raytracing, which simply won't be done for years, period.
There are plenty of parallels here (pun intended), and it's an observation on how both companies have handled their new "puppies."
There's more as well, but this should be enough for most to see that the comparisons are there.

Also, going by your reasoning, shouldn't I also pull my Gulftown thread, as it too may upset someone?
I'll also remind you, I was the first one to post Anand's preview of Nehalem as well.
It's not always positive news for all sides here, no matter what, and if there are better proofs, you bet I want to see them; that'd end this thread now, and I'd be reading that info right along with the rest of us, probably wanting more.
 
Not strictly on topic here, but nonetheless:
http://www.driverheaven.net/news.php?newsid=345#ixzz0Xhun3tPA
http://hardocp.com/news/2009/11/23/more_fermi_rumors_good
Fermi was first slated for the last quarter of '09, then pushed to the first quarter of '10.
This marriage of cpu/gpu is taking longer than claimed, and it's extremely important.
If this thread were just about slamming LRB or whatever, it wouldn't matter, but this is the future of computing; it's that important.
I'm not about to slam LRB when it arrives, and will remind a few who can handle it, maybe, if it's not the bee's knees, but I want this progress to move forward, no goofing, no playing, none of that.
 


Then how else were we supposed to interpret your original post in this thread?

We all saw nVidia earlier showing a card that wasn't there; we even have pictures of its CEO holding up a fake.
Earlier this week, Intel showed off what some believed to be LRB. Some even wrote about it as such:
http://www.theregister.co.uk/2009/ [...] r_keynote/
While others took the time to see them pulling a "mistake":
http://www.computerworld.com/s/art [...] xonomyId=1
It's not hard to see someone was pulling something or other.

The point being, it's not so much what you say, it's how you are saying it, with all the innuendo and outright accusations of Intel misleading the public with their demo, etc.

Personally I'd believe Intel would be very careful about being honest and above-board at this particular point in time :)...
 
And so my questions about the Polaris exhibit and LRB's perf.
It's in LRB's arena, as the 80-core isn't on any roadmaps or in any mentions I've heard of, so just what was Intel showing here?
I think the connection is obvious, similar to nVidia's exhibit.
Thus my post, and it was enough to confuse a writer for a site as well.
I also cleared up that misunderstanding.
Tell me, what was the point behind their demo?
I think it's a step toward LRB, just as nVidia's showing was.
Do you have info regarding this 80-core chip that'll compete directly against LRB?
 
What nVidia did was ridiculous, make no mistake about this, and I respect Intel too much to think they'd do that, but look at the content: Intel's claims that "the gpu is dead," etc.
Content tells me it's about what's being claimed, when, and how.
The how is similar, minus the holding up of the card; it was all hype and wood screws.
The 80-core chip is keeping interest in Intel up, not for the 80-core chip itself, but for LRB.
The nVidia showing was the same: neither product is here, both are delayed depending on your interpretation, and the lack of info is deafening.
Maybe Fermi possibly being late will bring more info as to its release/perf/existence, and we won't need an 80-core experimental chip showing what LRB can do.
 
So let's look at it this way...

You think that just because Intel developed something that was not on a roadmap, but that seems to have potential in a big market, it means LRB is not really there? It means nothing of the sort.

Polaris is just something they found would be useful in a market as well as make them more money. It may be based on Terascale, as LRB has links to it, but just because they demo something new does not mean LRB is nonexistent.

If anything, nVidia does the dumbest things. Remember the image quality degradation to increase performance with the GeForce 5/6 series? They are just known to do stupid things.

Intel has shown LRB in a true demo, and even though jenny may think it's subpar, she obviously does not understand that ray tracing is not something that can be done easily. In fact, ray tracing uses more power than rasterization, which is why LRB will support both, because gaming is going to be a hybrid of the two.

While ray tracing can do amazing things for reflections and shininess, rasterization is better for other things. This is what we will see in the near future. ATI will go the same way, and I am sure they will do ray tracing very well. They have had tessellation support since the HD2K; tessellation was supposed to be in DX10, but nVidia wouldn't agree to everything, so MS pulled it from DX10.

Hmmm... if you look at it, nVidia does a lot of stupid things: delays awesome features for DX, holds up fake cards, and lowers image quality for better performance.

Here are my thoughts:

-LRB will be out in Q1 2010.

-It will be decent. Not amazing; it's new, and nothing new and out of the box is amazing at first.

-It will improve over time and bring more competition to the market.

-It will be a better product for Intel's IGPs and SoCs.

Speaking of IGPs and SoCs, Intel would be going in the right direction with this. It will make its way to their IGPs as well as their SoCs. If it is like Terascale, it will be a very low-power GPU that provides the performance JDJ is looking for. Possibly with 32nm they can use it for Atom and create even better netbooks that can play Crysis.
 

I saw that as more akin to a car manufacturer putting a prototype of next year's model on a revolving pedestal at a car show; they usually don't work and are there just to show prospective customers what the finished product will be like, but that doesn't mean there are no working models being thrashed around secret test tracks somewhere away from prying eyes.
 

C4PSL0CK

Distinguished
Nov 23, 2009
11
0
18,510
"Why hasn't Intel shown any concrete game demo?"

Ohh Ohh PICK ME! It makes sense to demonstrate products to the correct target. Why exactly would you want to show off games at IDF, an HPC event? The last big gaming event was TGS '09, and Larrabee B0 samples were far from ready by then. B0 silicon taped out the 15th of August. It was a close call that they were even able to show something at IDF. The next big gaming event is CES 2010, so I suggest you drop that argument until then.

"Larrabee has failed, only 1TFLOPS! HD5870 can do 2.3GFLOPS!!! FERMI IS 8X FASTER!!!!"

Sometimes I wonder whether people post before they think, then forget by the time they post again. That 2.3 TFLOPS is theoretically possible, yes, but usually they miss that target by a huge margin.

The 8800GTX managed to score 185 GFLOPS (single precision); it was, however, supposed to be able to achieve 518.4 GFLOPS. The Tesla C1060 (GTX280) promised 933 GFLOPS but only managed to score 370 GFLOPS.

http://www.idre.ucla.edu/events/2009/gpu-workshop/documents/David_Tesla.pdf

Same for ATI. The HD4870 promised 1.2 TFLOPS, yet only manages to score 200 GFLOPS using OpenCL, 540 GFLOPS using IL/Brook+, with one person claiming 880 GFLOPS but no confirmation that this method is usable in general. What is worse for ATI cards is that to achieve higher FLOP performance, developers have to revert to proprietary programming languages such as Brook+ and IL. (Hint: developers don't like that.)

http://forums.amd.com/devforum/textthread.cfm?catid=390&threadid=120413&
http://forums.amd.com/forum/messageview.cfm?catid=328&threadid=105221
http://cerberus.fileburst.net/showthread.php?t=54842
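For context on where those "promised" peaks come from: they're just shader count × clock × FLOPs issued per clock, and the efficiency being complained about is simply measured/peak. A rough sketch using the figures quoted above; the shader counts and clocks are the commonly published specs, so treat them as approximate:

```python
# Theoretical single-precision peak vs. the measured GFLOPS quoted above.
# Shader counts and clocks are the commonly published specs (approximate);
# the measured figures are the ones cited in this post.

def peak_gflops(shaders, clock_ghz, flops_per_clock):
    """Peak = shader count * shader clock * FLOPs issued per clock."""
    return shaders * clock_ghz * flops_per_clock

cards = {
    # name: (shaders, shader clock in GHz, FLOPs/clock, measured GFLOPS)
    "8800GTX":        (128, 1.350, 3, 185),  # MAD + MUL counted as 3 FLOPs/clock
    "GTX280 / C1060": (240, 1.296, 3, 370),
    "HD4870":         (800, 0.750, 2, 540),  # best IL/Brook+ figure quoted
}

for name, (shaders, clk, fpc, measured) in cards.items():
    peak = peak_gflops(shaders, clk, fpc)
    print(f"{name}: peak ~{peak:.0f} GFLOPS, measured {measured} GFLOPS "
          f"(~{100 * measured / peak:.0f}% efficiency)")
```

Which is exactly why a measured figure, like the 1 TFLOPS mentioned below, belongs next to these measured numbers rather than next to the theoretical peaks.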

Fermi is indeed 8x faster, but that's 8x faster in double precision than its predecessor. DP used to be a whopping 78 GFLOPS; however, even at 624 GFLOPS it is still much lower than an HD5870. The only reason it'll actually be used instead of an HD5xxx is IEEE and C++ compatibility. Yet none of this matters a single bit for games. Games benefit exactly 0% from double precision and IEEE compliance; in fact, they would actually slow things down.

Suddenly that real-world 1 TFLOPS Larrabee managed doesn't sound so bad anymore, huh?

In the end, that TFLOPS figure tends to mean relatively little when it comes to games. The HD4870X2 had 2.4 TFLOPS, while the GTX295 was packing a whopping 1.7 TFLOPS, yet it was showing the HD4870X2 every corner of the room (except the price corner).

"Whatever, it has 80 cores and they had to overclock it, LOL"

Question! How exactly is it possible to stuff 80 cores on a 650mm² die when Intel said that the step to 32nm would allow a 48-core variant? How exactly did they suddenly manage to get 80 in there... with 45nm? It surely wasn't more chips, because it was clearly stated that this was a SINGLE-chip solution. Maybe because that lousy journalist said so? Maybe someone should tell him Polaris != Larrabee.
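A back-of-the-envelope on die area makes the same point; this assumes ideal density scaling and takes the 48-cores-at-32nm figure quoted above at face value, so treat it as illustrative only:

```python
# Back-of-the-envelope: if the 32nm shrink is what it takes to fit 48
# Larrabee cores, then 80 cores at 45nm implies a much smaller, simpler core.
# Assumes ideal density scaling (linear dimension squared), which is generous.

density_gain_45_to_32 = (45 / 32) ** 2        # ~1.98x more logic per unit area
lrb_cores_at_32nm = 48
lrb_cores_at_45nm = lrb_cores_at_32nm / density_gain_45_to_32   # ~24 on the same area

print(f"45nm -> 32nm density gain: ~{density_gain_45_to_32:.2f}x")
print(f"Larrabee-class cores expected on the same die at 45nm: ~{lrb_cores_at_45nm:.0f}")
# ~24 Larrabee-class cores vs the 80 cores shown: the 80-core part has to use
# a far simpler core (Polaris/Terascale territory), i.e. it is not Larrabee.
```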

Aren't you wondering why not many websites have covered this particular benchmark? Anandtech, Tom's Hardware, SemiAccurate, BSN, Fudzilla? I'm pretty sure they're all under NDA. Read some more forums and you'll notice there are some peeps who know more but can't say. (Hint: Beyond3D)

There's quite some info that isn't clear. Was this a 32-core running at 1GHz, or a 16-core running at 2GHz? How much did they overclock it, 15-25%? I'm quite curious to say the least; we'll have to wait this one out.

"Dude, they can't beat ATI/nVidia at their own game, pass whatever you're smoking browski"

So they can't beat ATI/nVidia because of their immense experience, yet if they don't, they automatically fail. Rrrright. All Intel has to do is bring decent performance for a reasonable price to the table. The flexibility alone will draw a lot of developers to it.

"It fails because it's Intel!"

Yup, keep telling yourself that. Intel has been on a streak ever since Conroe. The Core family is whooping AMD's ass in performance. They're rocking the SSD market. I can't wait for Light Peak (can you say 10-100 Gbit/s?). Next-gen Xeons (Nehalem-EP/EX) are coming up! They're also working on a flexible cGPU, as you might know (LRB or something). They also said they managed to crack the CPU-GPU shared-RAM issue.

Frankly, I'm just more interested in what Intel has to offer than AMD. AMD has done nothing but blame Intel for its failures while delivering inferior (but very amazing price/performance, I must admit) products. I'm not going to say Intel didn't bribe here and there, but AMD went from the AMD64 age to where they are now. Nonetheless, I'm looking forward to Bulldozer, but they are nowhere near as innovative as Intel.

There are still quite a few things we don't know about Larrabee. Will they skip 45nm in favour of 32? At what GHz are they going to run it? How overclockable is it going to be (looks at Core i7/Gulftown *faints*)? Are they planning to release a chip variant as well? And with Cell development brought to a stop, is Sony considering Larrabee? Everything in time, I guess.

PS: I suggest you look up "Larrabee: A Many-Core x86 Architecture for Visual Computing". There's more information in it than meets the eye.
 

unclefester

Distinguished
Nov 8, 2008
685
0
19,010
50 points for opening an account, 4 for posting. This points system is skewed anyway. By my calculations you should have at least 19,000 points, Jimmy, which means you're an Addict. LOL

Welcome to Tom's, C4PSL0CK
 

jennyh

Splendid
Yes, and all that stuff about ATI's and Nvidia's flop counts... any reason to believe Intel aren't making the most of their claims too?

64W, 1 teraflop, yadda yadda... if it was that good it would be powering supercomputers, right?

Or do Intel not exaggerate? Or do Intel only run benches on their own software? Jeeeeez, who'd have thought?

Where is this 'real world' 1 TFLOPS from Larrabee, btw?