Is Intel pulling a Fermi?

Page 9 of a Tom's Hardware community thread.
We all saw nVidia earlier showing a card that wasn't there; we even have pictures of its CEO holding up a fake.
Earlier this week, Intel showed off what some believed to be LRB. Some even wrote about it as such:
http://www.theregister.co.uk/2009/11/17/sc09_rattner_keynote/
While others took the time to call it a "mistake":
http://www.computerworld.com/s/article/9140949/Intel_to_unveil_energy_efficient_many_core_research_chip?taxonomyId=1
It's not hard to see someone was pulling something or other.
 

yannifb

Distinguished
Jun 25, 2009
1,106
2
19,310


Wait a minute... so you're saying you're from the future and are 100% sure of how Larrabee will perform? Weird...
 


No. Hyperthreading was originally released on the Pentium 4s back in 2003. Then, from the Pentium D up through the C2Q, Intel didn't use it. They refined it, changed it around, and renamed it SMT for Core i7.

AMD is going to have its own version, but it's called CMT, I believe, and it works a bit differently. The original was HT, SMT is in Core i7, and CMT will come with AMD.



No, that's not what he is saying.

You need to read more. JDJ has been pretty adamant that Larrabee will fail, yet you don't think he can see the future. No one is 100% sure. I am pretty sure Intel knows more than people think they do. Just because they don't produce IGPs that perform better than ATI/nV's (sorry, but anything by ATI/nV isn't an IGP, it's a GPU on a northbridge) doesn't mean they are complete idiots.
 
What I'm waiting for is people confusing LRB's decent showing in GPGPU with its mediocre graphics. That's when the fun begins.
It's here, on record, as it was then as well. Some still have high hopes; I've given up, as there's been nothing but delays in explaining LRB for graphics. No one can show me anything at all, and what does exist doesn't apply and isn't impressive.
As the title of this thread implies, it's ironic that we know more about BD than we do about LRB, and yet some can't stay away from the Kool-Aid. Though, what we do know about LRB isn't compelling whatsoever.
 

yomamafor1

Distinguished
Jun 17, 2007
2,462
1
19,790


The only thing I'm 100% sure about is that it's too early to tell. We know more about the Bulldozer architecture than about Larrabee, yet our Johnny mod here knows Larrabee will fail horribly, while his crystal ball doesn't show him the success or failure of Bulldozer?



Show me that Larrabee is mediocre as a GPU. Oh, what? You can't?

How do you define mediocre? By that definition, the 3870s were mediocre as well, in comparison to the 9800s. Would you consider that a failed product? By that definition, the 4870s were mediocre as well, in comparison to the GT200s. Would you consider that a failed product?



Back in the day we also didn't know much about Conroe until six months before its launch. Did it turn out to be bad? Nope. We didn't know much about Nehalem until three or four months before its launch. Did it turn out to be bad? No. Intel tends to guard its new technology very close to its chest, unlike AMD. However, most of the time those "thought to be dead, thought to fail" technologies end up impressing people.
 
The 3870 was a great card, coming from where it started, but was crap compared to its competition.
Maybe we'll see the same with LRB? Coming from nowhere, it does great, but put it side by side with its competition and it fails.
I can't say this will happen for sure, but everything I've seen, every hint I've gotten, is either a dead end or nothing really positive, and that's over and over again.
If it were only my observations, I'd be more positive. But much like my BD links, what I've read on LRB isn't that positive, so I'm just passing it along. What I've read takes what little is known, removes the PR hype, and goes from there. So you can blame me for being negative, but I'm not alone.
 

yomamafor1

Distinguished
Jun 17, 2007
2,462
1
19,790
You said the same thing about the i5's overclockability... everything you'd seen pointed to it being unable to overclock, due to the PCI-E clock generator sitting on the die. Several months later, when the i5 actually launched, it ended up being more overclockable than the i7.

Anybody with decent knowledge of graphics cards would say that both the 3870 and the 4870 were great cards. Yes, neither could compete head to head with its direct counterpart from Nvidia, but their prices made them very attractive options. The same could be said of Larrabee.

Again, time will tell. Given your success with speculation in the past, I think I'll stick with the real news.
 
Good thing times change and people learn.
I learned long ago that even those with the inside track on Intel can be quite wrong, and I've already admitted I was wrong to listen to certain people.
The sourcing here is better though, much better. The source I was listening to is an engineer who'd done work for Intel, and at the time it seemed like a logical POV, but again, I've learned.
I also later had a certain thread going for i5 vs i7, as we all remember.
I agree, time will tell, but there are certain things I've pointed out here: LRB being pointed towards the latter part of Q2, not Q1; its narrow window for impact, as we see SoCs with various approaches affecting the overall graphics market; and the need for LRB to come in at the top, since the lower end will be eliminated by the differing uses of these other SoC solutions, which limits LRB's impact, timing, and performance position.
That goes with exactly what I'm saying: LRB needs to come in strong, early, and cheap. Later, it has to contend with the use of HKMG on those higher-end graphics cards, while we see it being pushed back, with no info and mostly hints at its GPGPU usage, which I've been saying all along it should do fine at. Again, I'm looking forward to its ability there, but also to knowing Intel will finally have a decent low-end solution, unlike now with their woeful IGPs, which is also a positive.
I've given LRB/Intel credit where it's due. I've tried to use the best sources/info I can find, and this is where I believe we currently stand.
I've learned to have better sourcing, outside of Intel this time. Again, much like Fermi, it just isn't reliable coming from the home team.
 

yomamafor1

Distinguished
Jun 17, 2007
2,462
1
19,790
I didn't know that confusing Larrabee with Tera-scale, and linking The Register (which is the same as the Inq), counted as "using better, believable sources". Perhaps you need to redefine for me what "better, believable sources" are.

And I still don't understand why Larrabee has to be "early, strong, and cheap" in order to succeed. Will Larrabee succeed if it achieves all three? A definite yes. Will it absolutely fail if it cannot achieve any of them? No. Like I pointed out earlier, the 3870 was not strong, and it was late, yet it succeeded. The 4870 was not strong, yet it again succeeded. The point is that Larrabee needs to show it has potential down the road for more powerful and cheaper variants.

The only thing I would agree with you on is that the first Larrabee rendition will likely be a GPGPU specialized for the HPC market. It has loads of potential there, and even a shot at the top of the TOP500 list (despite what the AMDiot JennyH claimed). But that doesn't mean it has no potential in the graphics market.

Will Larrabee succeed? It's too early to tell. Will Larrabee fail? It's too early to tell. Again, only time will tell. Speculating on it using fragmented and mostly unreliable news is about as sensible as your speculation on the i5's overclockability earlier in the year. And you thought you'd changed.
 
At the time of its release, the 4870 was 10+% weaker than the top card, at half its price.
That's more of a marketing mistake by nVidia, and a surprise by ATI, which made it a success.
The 3870 was a severe alteration of the 2900, which sported an expensive 512-bit bus; the ROPs still went unchanged, etc., combined with the DX10 debacle, which was also drastically changed, etc.
The 3870 was everything the 2900 wasn't, save for a few fixes which were later made on the 4870. Again, I point to the DX10 debacle.
Starting out as a midrange card isn't good for Intel, given they've said it'll be "the end of the GPU", as they said, or compete at the high end, as they said.
The window is closing on mid/low-end discrete solutions, leaving only the upper to high end. That's fact, as Haswell and BD will show.
Implementing ATI and LRB as SoCs is the reason for this.
These solutions arrive within a few years at most. That leaves a tight time window for a great LRB impact, which would do well to change the gaming market towards its strengths as a discrete card, and that means games we won't ever see by the time the SoC solutions are already here.
The research chip looks compelling, as its TDP may be 80-90 watts on 32nm, and it comes in a small package.
If LRB emulates this in positive ways, then LRB has a chance.
I mention the use of HKMG simply because of what it means in the larger picture. We would all see disastrous results from Intel without its use on their CPUs compared to their competitor, and adding it will greatly increase the ability of graphics cards in the future, which again goes towards the time window and performance here.
Intel has said Q1, but some things are pointing towards late Q2, which plays against all this.
The point of Tera-scale and LRB is easy to see, as they're set to do the same things, and that shows LRB in a positive light: Tera-scale isn't for sale, LRB will be, and it will do those same things. So yes, it's a look at "this isn't what we're selling, but it's what it can do". Much like Fermi, as nVidia says it'll be great at gaming, just don't ask, but it will do these things as well: a direct correlation of their abilities.
Easy to see, and neither is showing game usage, but they are heading their respective GPGPU solutions at each other, and it's a no-brainer to see this as well.
They're taking the same approach, talking to the same people, in the same forums/groups, and then being defended by some saying this isn't about gaming, which again is what I'm getting at here.
Anyway, time is the enemy of LRB here, and it's that important. Not for GPGPU, but for gaming. The longer it's delayed as a gaming solution, same for Fermi, the worse it'll be.
The difference is, Fermi will later get the use of HKMG, whereas there'll be no soup for LRB.
Current, past, and future games are done on DX solutions, and LRB has to break that trend under a tight window. IMO it needs a compelling reason for devs to change their past and present approaches, and a midrange card, however well it sells at a great price/perf ratio, is not a compelling argument for change.
Add in the drivers, the new driver teams, etc., and yes, it's uphill for LRB no matter the price. Its positioning in performance is pertinent to its future influence in gaming, which takes a hit down the road with HKMG coming.
So, you may not see it the way I do, and that's fine, but to me it's obvious: Intel wouldn't be dragging out Tera-scale if LRB weren't the vehicle to start this revolution, which is on Intel's roadmap; we just don't know when.
 
JDJ, sorry to disagree, but the 3870 was not a severe switch from the 2900. The biggest differences were a 256-bit bus and somewhat better thermals, but for anyone who had a 2900, a 3870 was not worth getting.

A 3870 performed barely better than a 2900. It was the same chip: RV670, just a revision, and not a great one.

The 4870, however, was a completely different GPU. Much better than a 3870 or 2900. Hell, it was worth it because it blows my 2900 Pro out of the water.

Oh, and you should know Intel by now, JDJ. Even when Nehalem was their main focus, they talked about Tera-scale and LRB. It's just their way.
 
I thought we were talking fixes, not perf, which is why I referred to the 4870 as having the last fixes put in place.
The fixes found on the 3870 were profound: it got thermals where they needed to be, it brought pricing down to a profitable level, and most importantly, all this led to the compute density seen on the 4xxx series. That's what surprised nVidia and led them to such high pricing on their 200 series, not seeing this coming at all. Just as Intel stepped back from going huge and monolithic, so too did ATI, and that's what's confounding nVidia to this day.
There'd been no demos of Tera-scale for how long? Then, at the same time ATI releases its 5xxx series and nVidia shows off Fermi and its GPGPU ability, from out of nowhere comes the Tera-scale chip.
Sure, it's a coincidence. Sure.
If you want to add more to what I was saying about the 3870, then let's talk about it, but don't assume that's what I'd meant and then add more to boot.
If you'd read the earlier link to BSN, Theo referred to this as well: sometimes failure brings out a better scenario, and it was nVidia's lack of foresight on both going smaller and on ability that put them where they are now.
Just as with my point about the ever-moving performance bar in graphics, this isn't the CPU field, and Intel has no wins here.
Here's a prime example of how huge a change the 3870 was from the 2900:
[image: diesizeah1.png]
 

C4PSL0CK

Distinguished
Nov 23, 2009
11
0
18,510


You do know that if Larrabee were able to output 1 TFLOPS of DP, it would be able to do 2 TFLOPS of SP, right? Rest assured, it's 1 TFLOPS of SP.

But yes, TFLOPS means little in games. So what other "facts" indicate that Intel's Larrabee will fail in games?

How about I show you it's going in the right direction, conceptually I mean?

You see, Larrabee won't only be different in hardware but also in software, more specifically: rendering. Currently both ATi and nVidia use IMR (Immediate Mode Rendering), and Intel will be using TBR (Tile-Based Rendering). You might recognize it from a previous GPU. Does the PowerVR Kyro ring a bell (Imagination Technologies)? It'll be slightly different, but in general you could say it's the same.

Here's the thread with a LOT of info. - http://forum.beyond3d.com/showthread.php?t=11554 - Here are some quotes.

"Oh yes, did I forget to mention that by using multi-sampling AA'ing (like NV's quincunx but without the ghey filter) can be done with virtually no performance cost on a Tiler. 4x MSAA increases your frame and z-buffer access requirements 4x, but if those buffers are on chip like in PowerVR you total extra bandwidth cost is.....zilch." - Dave B(TotalVR)

"Memory is slow AND its doesn't like random access patterns (I misplaced one of my favorite quotes which basically says "RAM is the worst named thing ever"). What TBR do is localise the memory access, by 'doing' a bit of the screen at each moment, they can use very expensive fast RAM." - DeanOC

Here's an article from AnandTech - http://www.anandtech.com/showdoc.aspx?i=1435&p=3 - And another one - http://www.extremetech.com/article2/0,2845,2327048,00.asp -

A lot of advantages. Yet both nVidia and ATi use IMR.
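To make the IMR vs. TBR distinction concrete, here's a minimal sketch (names and structure are my own for illustration, not any real driver or Larrabee API) of the tile-based idea: bin triangles into screen tiles first, then render one tile at a time, so that tile's color/z buffer can stay in fast on-chip memory and only gets written to DRAM once.

```python
# Minimal tile-based rendering (TBR) sketch: illustrative only.
# Pass 1 bins triangles into screen tiles; pass 2 renders one tile at a
# time so the tile's color/z buffer can live in fast on-chip memory.

TILE = 64  # tile edge in pixels (an assumption; real tilers vary)

def bin_triangles(tris, width, height):
    """Pass 1: bin each triangle into every tile its bounding box touches."""
    bins = {}
    for tri in tris:
        xs = [v[0] for v in tri]
        ys = [v[1] for v in tri]
        x0, x1 = max(0, min(xs)), min(width - 1, max(xs))
        y0, y1 = max(0, min(ys)), min(height - 1, max(ys))
        for ty in range(int(y0) // TILE, int(y1) // TILE + 1):
            for tx in range(int(x0) // TILE, int(x1) // TILE + 1):
                bins.setdefault((tx, ty), []).append(tri)
    return bins

def render(tris, width, height):
    """Pass 2: per-tile rendering; each tile is flushed to DRAM only once."""
    framebuffer = {}
    for key, tile_tris in bin_triangles(tris, width, height).items():
        tile_buf = {}  # stand-in for the on-chip color/z tile buffer
        for tri in tile_tris:
            pass  # rasterize/shade tri against this tile only (omitted)
        framebuffer[key] = tile_buf  # single write-out per tile
    return framebuffer
```

The binning pass is exactly why the quoted posters say MSAA gets cheap: the multisampled buffers exist per tile, on chip, so off-chip bandwidth doesn't scale with the sample count.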

Another thing is maximizing power. Current architectures still consist of a lot of fixed-function units. They are able to do what they do best very efficiently; however... yes, there's a but... it also has some drawbacks!

This picture shows you the problem. - http://news.cnet.com/i/bto/20080804/intel-larrabee-diagram-4-small-2.jpg

If you take a look at FEAR's workload, you see its rasterization workload is quite a bit larger than in the other games, so naturally the rasterizer could be a bottleneck for this particular game, in other words limit FPS. It's the exact opposite for the pixel shader: it could be just sitting there twiddling its thumbs, wasted power.

Something like this would never happen with software-rendered games on general-purpose hardware, since the workload would simply be balanced among all the different stages.
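That load-balancing point can be put in simple throughput terms. Below is a toy model (the workload fractions are my own made-up numbers, not FEAR measurements): with fixed-function stages, frame time is gated by the single busiest unit; with unified general-purpose cores, the same total work spreads across all available capacity.

```python
# Toy throughput model (made-up numbers, not measured data):
# per-frame workload fractions by pipeline stage, summing to 1.0.
workload = {"vertex": 0.15, "rasterize": 0.45, "pixel_shade": 0.40}

# Fixed-function pipeline: each stage gets its own hardware sized for an
# "average" game (here: equal thirds). The busiest stage gates the frame,
# while under-used stages sit idle.
capacity = {"vertex": 1 / 3, "rasterize": 1 / 3, "pixel_shade": 1 / 3}
fixed_time = max(workload[s] / capacity[s] for s in workload)

# Unified cores: all capacity works on whatever is pending, so frame time
# is just total work divided by total capacity.
unified_time = sum(workload.values()) / sum(capacity.values())

print(fixed_time)    # ~1.35: rasterizer-bound, shaders partly idle
print(unified_time)  # ~1.0: no single-stage bottleneck
```

The model ignores the efficiency edge that fixed-function hardware has per operation, which is the trade-off the whole Larrabee debate hinges on.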

Anyway, there's a lot more, but I have to go. I'll post some more later.

 
Looking at your link, it's funny they're calling it LRB. Hmmm, maybe it's Fermi? Or Polaris?

To all those who complained about my thread title or whatever, did you check this link, posted above?
http://www.youtube.com/watch?v=ynjYuS1J3jI

My original post in this thread referred to this event, as did one of my links, which also said it was LRB. Now click the link here: it seems Intel is claiming it as LRB also.
Am I wrong, or did Intel have two showings? And if so, links please?
 

jennyh

Splendid
Btw yomama - maybe you should have a closer look at that TOP500 supercomputer list before making a fool of me in your sig?

You've made a fool of yourself. Cray's Jaguar is more than 3x more powerful than the closest Intel system (which incidentally is half Intel, half ATI), so I am completely correct in saying that there is no chance whatsoever of Intel taking the top supercomputer spot for the foreseeable future... if ever.
 

C4PSL0CK

Distinguished
Nov 23, 2009
11
0
18,510
During the recently held SC09 conference in Portland, Oregon, Intel finally managed to reach its original performance goal for Larrabee. Back in 2006, when we got the first details about Larrabee, the performance goal was "1 TFLOPS @ 16 cores, 2.0 GHz clock, 150W TDP". During Justin Rattner's keynote, Intel demonstrated the performance of LRB as it stands today.

16 cores? o_O Imagine 32 or 48 (32nm). I wonder if the TDP is still as high as 150W; then again, Intel's TDP estimates tend to be pessimistic.

But as of SC09, the top five performing products for SGEMM 4K x 4K are as follows [do note that multi-GPU products are excluded as they don't run SGEMM]:
1. Intel Larrabee [LRB, 45nm] - 1006 GFLOPS
2. EVGA GeForce GTX 285 FTW - 425 GFLOPS
3. nVidia Tesla C1060 [GT200, 65nm] - 370 GFLOPS
4. AMD FireStream 9270 [RV770, 55nm] - 300 GFLOPS
5. IBM PowerXCell 8i [Cell, 65nm] - 164 GFLOPS


Source: http://www.brightsideofnews.com/news/2009/12/2/intel-larrabee-finally-hits-1tflops---27x-faster-than-nvidia-gt200!.aspx
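For reference, SGEMM GFLOPS figures like the ones above come from the standard 2·N³ operation count for an N×N single-precision matrix multiply. A quick sketch of the conversion (the timing value below is a hypothetical example I made up, not a measured result):

```python
# How an SGEMM GFLOPS figure is derived: an N x N matrix multiply does
# N^3 multiply-adds, counted as 2*N^3 floating-point operations.
def sgemm_gflops(n, seconds):
    flops = 2.0 * n**3
    return flops / seconds / 1e9

# Hypothetical example: a 4K x 4K SGEMM finishing in 0.137 s
# would work out to roughly 1003 GFLOPS.
print(sgemm_gflops(4096, 0.137))
```

This is also why multi-GPU products are excluded from such lists: a single SGEMM call runs on one device, so the benchmark measures one chip at a time.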

So it seems Fermi will get a run for its money.
 
Let's keep personal things out of this, please.
I respect your comments and links, and lo and behold, your links explain the title here. It was LRB shown to begin with.
Now, we're all trying to find out whether a slow 4870 is as fast as LRB or not. But what I want to know is, what about the 5850, the 5870, the 5970, or Fermi and its architectural changes in perf, since obviously the G200 series isn't anything like the G300 series?
What we've seen can't be compared, unfortunately. Again, the claims on graphics cards are greater, but they also can't be compared as yet, and that's what we all want.
If it's true, and a 4870 can match or exceed LRB, then it's in trouble from the beginning, since the projected perf of Fermi is much higher than the 285: double in graphics perf alone, leaving out all the DP/ECC/GPGPU changes.
 
If you choose to respond to the new links about the 4870 or the tiling approach which may be used on Fermi, that's fine; they're new links, new info. Frankly, your links were all already posted anyway, or had been previously read, so yes, I knew a few before you posted them. So am I to insult you also?
Let's keep it respectable, please.
This is the link I'm referring to:
http://forum.beyond3d.com/showthread.php?p=1364495#post1364495

And again, if this was LRB actually shown, as my OP referred to this event, was Polaris shown there as well? Are we seeing two demos here? Or is LRB the only one shown?