Intel Makes 22nm 3-D Tri-Gate Tech for Ivy Bridge

Sign me up! GPUs need this tech to overcome their horrible power consumption and up the ante with the GHz, maybe closer to 2 GHz graphics core clocks, if only I were god of the tech world lol
 


No, I have no illusions about Intel keeping the consumer's interests foremost in mind, and the same goes for AMD. Remember that when AMD had the performance crown with K8, they were charging upwards of $800 for their top-end desktop CPUs. Just ask Baron Matrix about his theory that Intel killed the world economy when they came out with Core2 and seriously undercharged for it, compared to AMD's then-current pricing. 😛.

What I do have faith in is that (1) Intel isn't stupid enough to kill the market for consumer PCs when people could just switch to something like an ARM tablet or nettop if it cost too much to buy an Intel-powered PC. Sure, they would be giving up performance, but what do you think motivates the economy car market? Just look at how many cars Rolls-Royce sells and their total profits vs. GM or Toyota (hint: the latter two do orders of magnitude more business, and make more profit, than RR ever got from building cars, which is one reason RR also got into aircraft engine manufacturing - go where the money is if you want to charge a lot).

And (2) I'm absolutely certain that if Intel were a true monopoly, and not merely a de facto one, quite a few governments, including that of the US, would be looking over Intel's shoulder for any evidence of gouging or overpricing, and the consequences would be extreme, as Intel is painfully aware.
 
[citation][nom]fazers_on_stun[/nom]IIRC that's discrete GPU chips, and they're not going to be on the market until next year most likely. AMD's CPUs are strictly SOI, not strained silicon which is what TSMC and GloFlo use on the half-nodes like 40nm and 28nm (and Intel on every node). Too expensive -- where did you come up with that?? The tri-gate FinFET is a process design, to be used on all 22nm production, so the economies of scale kick in (esp. in Intel's case as they have 80% of the x86 CPU market). Plus given the tiny size that an Atom SoC chip would use, Intel can pack a huge number on one 300mm wafer, or even more when Intel switches to 450mm wafer production. So the R&D and fab capex will be spread across probably a billion or so chips by the time Intel moves on to something else. IMO, Intel has been fairly ambivalent about the ultra-mobile market, not wanting to cut into their lowend laptop sales, up until a year or so ago. Now they seem quite serious about it, with Cedar Trail out by the end of the year, which will probably be very competitive with ARM. I'd say it is quite likely Intel will take a large portion of the cellphone/tablet CPU market in the next few years. Intel is leveraging their world leadership in process technology and just imagine the performance and power management of Cedar Trail, if on 22nm instead of 32nm.[/citation]

The 28nm node is for GPUs and Fusion chips (mainly for the mobile market), where they're already going to the half-node.

And yes, it will still be too expensive and consume way too much power to replace anything ARM-based. RISC architecture is still a requirement because batteries haven't improved significantly in years (and not many improvements are in the pipeline), and the RISC-y/CISC-y x86 chips that AMD and Intel make are too big, power hungry and expensive. A Tegra 2 costs ~$20-30, and that's with the licensing fee paid to ARM.

If Cedar Trail were on 22nm rather than 32nm?
http://www.anandtech.com/show/4295/intel-cedar-trail-platform
It's still crappy, but maybe it would be more suitable at 22nm. With the Atom, Intel tried to test the waters but failed to deliver on the graphics front (notably, the Atom couldn't play back HD video smoothly). On the CPU end, people still complain that it's too slow. The new Cedar Trail will probably be even slower in single-threaded tasks, but they're finally beefing up the graphics. Both AMD and Intel are still years away from getting x86 down to an appropriate power level. And even if they do, so what? x86 doesn't provide the same benefits it did when it was first introduced, especially on a mobile platform.
 


Heh, "shall we start looking back" at the '40% more performance in a wide variety of workloads' hype that AMD's VP Randy Allen was touting some 6 months before Barcelona's release? That was about 4 years ago, and I can easily dig up the Youtube vid of Allen's making those exact comments, as I have done several times previously here on THG for those with short-term memories 😛..

Shall we start looking back on other engineering samples and how they fared with respect to the official chips? Or do you get my point?

Hmm would that include the Barcie TLB bug, where the production chips underperformed the ES chips after the software fix killed perf. by 10%?? 😀

Also, it's no secret that AMD prefers the 'more cores' approach. And Bulldozer was, from the start, more focused on servers than the desktop. The "true" desktop Bulldozer is the Trinity APU, and that's early 2012.

This I agree with, although with the long pipes and high clock speeds, BD might still do very well in single-core IPC if they have improved their branch prediction as much as they say they have. Mispredictions, and having to flush all those pipeline stages, are what killed NetBurst compared to K8. I guess the gaming benchmarks will show the truth.
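
Just to put rough numbers on why deep pipes plus mispredictions hurt so much, here's a back-of-the-envelope sketch; the branch frequency, miss rate and pipeline depths below are made-up illustrative values, not measured figures for any real chip:

# Rough effective-CPI model: base CPI plus cycles lost refilling the
# pipeline after each mispredicted branch. All inputs are illustrative.
def effective_cpi(base_cpi, branch_freq, miss_rate, pipeline_depth):
    # Each miss costs roughly a full pipeline refill.
    return base_cpi + branch_freq * miss_rate * pipeline_depth

deep  = effective_cpi(1.0, branch_freq=0.20, miss_rate=0.05, pipeline_depth=30)  # long-pipe design
short = effective_cpi(1.0, branch_freq=0.20, miss_rate=0.05, pipeline_depth=12)  # shorter-pipe design
print(round(deep, 2), round(short, 2))  # 1.3 vs 1.12 -- the deep pipe pays far more per miss

Halve the miss rate with a better predictor and the deep design claws most of that back, which is presumably what AMD is counting on.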
 
[citation][nom]jezzarisky[/nom]So I'm not exactly a computer genius, but the 3D transistor sounds reminiscent of Hyperthreading(no, not the same I know that much), but all the information going through the transistor needs to be parallel for the application to see a true performance increase no?[/citation]

No. The code flowing through the logic has no bearing. All you need to know is that the little switches inside that make the 1s and 0s just got a lot faster and use much less power.
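
For a rough intuition on the power side: dynamic switching power goes roughly as activity × capacitance × V² × frequency, and a transistor that switches cleanly at a lower supply voltage saves power on the square term. A toy calculation with invented numbers, just to show the V² effect:

# Toy dynamic-power model: P ~ activity * C * V^2 * f. Values are invented
# purely to illustrate how a lower supply voltage cuts switching power.
def dynamic_power(activity, capacitance_f, voltage_v, freq_hz):
    return activity * capacitance_f * voltage_v**2 * freq_hz

p_high = dynamic_power(0.1, 1e-9, voltage_v=1.0, freq_hz=3e9)
p_low  = dynamic_power(0.1, 1e-9, voltage_v=0.8, freq_hz=3e9)
print(round(p_low / p_high, 2))  # 0.64 -> ~36% less switching power at the same clock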
 
Fazers, that was my point. Don't trust statements or claims made prior to benchmarks and real tests as they can be completely off. The same holds true for engineering samples. But this is AMD we're talking about and you and I both know we won't even get a whiff of what they've prepared until it's officially launched.

Will BD perform 30% faster than the Core i5 950? Who knows? I don't, and you don't, and I'd be willing to bet nobody on these forums does either. Predictions are useless when based on rumor and 'official statements', especially where CPUs are concerned.
 


Well, once you get past the decoders, modern x86 CPUs are all RISCy anyway. And Atom doesn't do out-of-order execution, which makes it less power-hungry as well as less powerful than Zacate.

If Cedar Trail were on 22nm rather than 32nm?
http://www.anandtech.com/show/4295/intel-cedar-trail-platform
It's still crappy, but maybe it would be more suitable at 22nm. With the Atom, Intel tried to test the waters but failed to deliver on the graphics front (notably, the Atom couldn't play back HD video smoothly). On the CPU end, people still complain that it's too slow. The new Cedar Trail will probably be even slower in single-threaded tasks, but they're finally beefing up the graphics. Both AMD and Intel are still years away from getting x86 down to an appropriate power level. And even if they do, so what? x86 doesn't provide the same benefits it did when it was first introduced, especially on a mobile platform.

Yes, I was just reading that AT article before I posted earlier in this thread. It compares current dual-core 45nm Pine Trail Atom performance to Zacate (Brazos), which uses about twice the power (18 watts IIRC vs. 10 watts), and then extrapolates to Cedar Trail. Not sure how valid that will turn out. And there was no comparison to ARM CPUs that I saw.

Anyway, I suspect that battery life will be something of a secondary concern to those iPhone wannabes who have demonstrated they will live with less than a day of active use as long as the device has decent standby and is able to play HD video and Angry Birds simultaneously 😛.
 
Yep, Intel is pushing really hard with their advances and they never stop impressing me, but this time they really outdid themselves. I hope AMD can start being a bit more competitive again with their chips; they have been at the bottom for a long while.
 
[citation][nom]fazers_on_stun[/nom]Well once you get past the decoders, modern x86 CPUs are all RISCy anyway. And Atom doesn't do OoO decoding anyway, which makes it less power-hungry as well as less powerful than Zacate.Yes, I was just reading that AT article before posting previously in this thread. And it compares current dual-core 45nm Pine Trail Atom performance to Zacate (Brazos), which uses about twice the power (18 watts IIRC vs. 10 watts), and then extrapolates to Cedar Trail. Not sure how valid that will turn out. And no comparison to ARM CPUs that I saw. Anyway, I suspect that battery life will be something of a secondary concern to those iPhone wannabees who have demonstrated they will live with less than a day active use as long as it has decent standby and able to play HD video and Angry Birds simultaneously ..[/citation]

Disagree with you. Some of these ARM-based SoCs draw less power under load than the old Atoms did at idle. TDP is only a rough number and doesn't mean much beyond a guesstimate. For instance, the E-350, though rated at 18W, actually drew less than comparable Atoms, and significantly less under load, even though some of them had a lower TDP. Real-world performance > *
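
If you want to see why TDP alone says so little about battery life, compare energy per task (measured watts × seconds to finish) rather than the label on the box. A sketch with completely invented wattages and runtimes:

# Energy per task = measured wall power * time the task takes.
# A higher-TDP chip can still use less energy if it draws less in practice
# or finishes sooner. Every number below is invented for illustration.
def energy_wh(measured_watts, task_seconds):
    return measured_watts * task_seconds / 3600.0

chip_a = energy_wh(measured_watts=12.0, task_seconds=100.0)  # nominal "18 W TDP" part
chip_b = energy_wh(measured_watts=9.0,  task_seconds=180.0)  # nominal "10 W TDP" part
print(round(chip_a, 3), round(chip_b, 3))  # 0.333 Wh vs 0.45 Wh -- the higher-TDP part wins here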

My thoughts about RISC/CISC and x86 going to phones/tablets is best described as Intel and AMD trying to climb up a mountain. But instead of going up the easy route (RISC), they decide to walk up the steep side. They have enough money and fame to rent a helicopter and simply jump out once at the peak, but they choose to climb that steep and jagged cliff with an old donkey who has 2 broken front legs who's carrying an incredibly cherished but heavy and completely unnecessary old ice cream maker (x86). The reason they're carrying it is because no one else up there has one so they can carry on with their monopoly on ice cream but now at 24 thousand feet. Of course, nobody else brought one because they realized it was completely unnecessary and idiotic, but anything for money, right? :)
 
Impressive technology. So it appears that Ivy Bridge isn't just a die shrink of Sandy Bridge to the 22nm process, but also adopts a new 3D transistor technology.
 
[citation][nom]GeekApproved[/nom]And for those of you fanboys that make fun of AMD, your gonna wish to god they were back when you see how much Intel is going to rape you for on a cpu when there is no competition from AMD.[/citation]

There has been no competition from AMD ever since Core 2 was released.

Now look what we get: unlocked i7-2xxxK CPUs that are non-EEs at 25% of the price.

If that's how Intel is raping me, I'd let it do me all decade long, baby.

And anyone who actually believes the Phenom II was a worthwhile adversary, even with its "NetBurst-like" life span, is kidding himself.

If you fool yourself into thinking the Phenom/Phenom II was worthwhile, you're also admitting NetBurst was good.
 
Ivy Bridge will be essentially a die shrink of Sandy Bridge, but the increased transistor budget will let Intel cram six-core chips into the same thermal envelope as four-core Sandy Bridge chips (see the rough scaling sketch after this post).
AMD has never been a performance competitor to Intel; their strategy is to release dark-horse, mid-performance CPUs very infrequently, at prices low enough to give them a price/performance advantage even when the raw performance can't hold a candle to Intel's.
People seriously talking about ARM in desktops don't understand the architecture. An ARM CPU uses little power because it is designed around a minimal instruction set and simply doesn't have the capabilities that desktop CPUs have.
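
On the transistor-budget point above, a rough sanity check: ideal area scaling between nodes goes with the square of the feature-size ratio, so a 32nm-to-22nm shrink gives roughly twice the transistors in the same area. Real layouts never scale that cleanly, so treat this as an optimistic upper bound:

# Ideal (optimistic) area scaling between process nodes: area ~ (new/old)^2.
old_node_nm, new_node_nm = 32.0, 22.0
area_ratio = (new_node_nm / old_node_nm) ** 2
print(round(area_ratio, 2))      # ~0.47 -> same logic in about half the area
print(round(1 / area_ratio, 1))  # ~2.1x the transistors in the same die area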
 


Yep, I now see that mentioned in AT's article - wonder why AMD set the TDP so high? I suppose they didn't care too much since under Dirk Meyer they weren't aiming at the ultra-mobile market anyway.

My thoughts about RISC/CISC and x86 going to phones/tablets is best described as Intel and AMD trying to climb up a mountain. But instead of going up the easy route (RISC), they decide to walk up the steep side. They have enough money and fame to rent a helicopter and simply jump out once at the peak, but they choose to climb that steep and jagged cliff with an old donkey who has 2 broken front legs who's carrying an incredibly cherished but heavy and completely unnecessary old ice cream maker (x86). The reason they're carrying it is because no one else up there has one so they can carry on with their monopoly on ice cream but now at 24 thousand feet. Of course, nobody else brought one because they realized it was completely unnecessary and idiotic, but anything for money, right? :)

I've read recent articles arguing that x86 is not as inefficient as you suggest, especially with SSE, AVX and other SIMD extensions, plus bolt-ons like Sandy Bridge's Quick Sync. I agree it is more than past time for Intel to ditch the legacy instructions (who cares about 16-bit real mode anymore?). Interestingly enough, most of the top 500 supercomputers in the world use x86 CPUs, at least as the control nodes, with dedicated vector processors (i.e., GPUs) as the computational nodes in hybrid machines.

With Microsoft announcing they will natively support ARM processors, I guess it'll be a completely level playing field, so we'll watch and see which architecture wins. There is some precedent, however: Apple dumped PowerPC (RISC) for Intel a few years ago, albeit at much higher performance and power levels than ultra-mobile uses. But as the performance-vs-power curve rises with each node, I think Intel will be in the lead, if not at 22nm then certainly by 14nm.
 
It's so funny that when bone-crushing blows are dealt, the resident AMD guy bobdozer is missing.

AMD is so far behind it's not even funny. I don't even need to see any benchmarks of Bulldozer.
 
Just a question for all the Intel fanboys and AMD haters: why waste your time commenting on things you don't like? Normal people stay away from things they dislike, yet you guys show up in droves to display your illogical emotions. Did AMD steal your girlfriend? Sleep with your mom? Wreck your car? It just seems weird, that's all. Not a fanboy, more of an observer of peculiar human behavior.
 
I knew this technology was in development at Intel, but I had no idea it would be production-ready so soon. Well, Intel wasn't lying a couple of weeks ago; this is pretty revolutionary. AMD and Global Foundries seem to be slipping further behind in process tech with every node shrink since 90nm.
 
[citation][nom]Twist3d1080[/nom]Normal people stay away from things they dislike, yet you guys show up in droves to display your illogical emotions. Did AMD steal your girlfriend? Sleep with your mom? Wreck your car?[/citation]
You're trying to criticize others for displaying irrational emotions?
[citation][nom]Twist3d1080[/nom]Not a fanboy, more of a observer of peculiar human behavior.[/citation]
Wow... great save.
 
[citation][nom]cobra5000[/nom]Man some of you guys are hittin the Intel Kool-Aid pretty hard!Please remember, this is a commercial.[/citation]
It's not just a baseless advert. The benefits of this technology have been theorized and discussed for as long as I can remember, and there's quite a bit of documentation to back up its potential. Anandtech goes a bit more in depth on the technology; you can read about it here:

http://www.anandtech.com/show/4313/intel-announces-first-22nm-3d-trigate-transistors-shipping-in-2h-2011
 
[citation][nom]pelov[/nom]sorry, where are you getting the performance numbers from exactly? There are still 0 benchmarks with regards to bulldozer and anything regarding performance is still speculation. 22nm is a big step for PC/server CPUs but it's been planned. As for the 'AMD is dead' remnarks: they're nearly done with 28nm and apparently the plant can produce even smaller. http://www.xbitlabs.com/news/other [...] arter.htmlThe issue with these low nm processors is that they're very costly to produce. They do provide lower power consumption, but bear in mind they can't be produced at high enough quantity and low enough price to ever make it into a phone or tablet. Basically they're still years behind ARM in terms of power consumption. Currently, x86 isn't something that can be scaled appropriately for either market and respective price range. Right now it's simply not feasible.[/citation]

I +1'd you without realizing that you had posted on two topics. As for the second part, you are dead wrong. Newer chips use less and less silicon all the time; they are, in fact, much cheaper to produce. Pricing has gone down over time, yet the revenue for these chip companies keeps increasing. They make more and more every year, and that's not just from increasing sales volume.
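
A crude way to see the cost side of that: shrinking the die means more candidate dies per wafer, so the (largely fixed) wafer cost is split more ways. The wafer cost, die sizes, yield and edge-loss factor below are all placeholders, not real Intel or AMD figures:

import math

# Crude gross-die and cost-per-die estimate for a round wafer, ignoring
# defect clustering and exact edge effects. Every number is a placeholder.
def dies_per_wafer(wafer_diameter_mm, die_area_mm2, usable_fraction=0.85):
    wafer_area = math.pi * (wafer_diameter_mm / 2.0) ** 2
    return int(wafer_area * usable_fraction / die_area_mm2)

def cost_per_good_die(wafer_cost, wafer_diameter_mm, die_area_mm2, yield_rate):
    good = dies_per_wafer(wafer_diameter_mm, die_area_mm2) * yield_rate
    return wafer_cost / good

big   = cost_per_good_die(5000.0, 300, die_area_mm2=216, yield_rate=0.8)
small = cost_per_good_die(5000.0, 300, die_area_mm2=160, yield_rate=0.8)
print(round(big), round(small))  # roughly $22 vs $17 per good die in this toy example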
 
AMD's joint venture with IBM paid off with SOI, but I guess that's not anywhere near as spectacular as Tri-Gate. I guess AMD should have made some kindergarten infomercial like this one.

@fazers_on_stun
The Athlon XP clocked lower than Intel's chips but was able to keep up with P4 performance, and that could only be achieved by architectural optimization; clock for clock, the Athlon XP was able to get more relevant instructions executed than the P4. But I do agree with you that AMD has fab issues (the capital required to fab silicon is ridiculous, which is the main reason there are only a handful of silicon fab plants out there; then again, ARM seems to be doing quite nicely without a fab plant to their name). The fact that AMD was able to keep their graphics cards' performance on par with Nvidia's while staying at 45nm without growing the die size has to be a testament to the kind of architectural optimization AMD can do. As for the 4-core thing, last I checked I don't see a decent i5 with fewer than 4 cores...
 
While tons of these innovations come out, they're not doing much outside of PR for now. Intel's 2nd-gen i3/5/7 barely have motherboards on par with the first gen and, at most, offer a modest improvement of roughly 10%, with less PCIe support.

I mean, yeah, you can improve the technology, but if the mobos for enthusiasts or models for stock laptops/desktops aren't there, seriously, what is the use other than saying, "this is what we can do, but you'll only find it fully on the market in two years"?
 