Intel Makes 22nm 3-D Tri-Gate Tech for Ivy Bridge

I seem to be the only one confused about how the cooling plate/cooling system mounts to these chips. Is it like Legos? Does the plate have microscopic bulges to conform to the chip's 3D shape?
 
I'm not sure that Intel could operate without AMD being around to compete. They'd have a complete monopoly, for a start, and what about x86-64? They're still going to need to license it (and they can't exactly buy AMD out).

AMD isn't going anywhere, but they'll still be manufacturing 32nm CPUs when Intel is already onto 22nm. GloFo needs to step up to the plate.

One thing I want to see is an end to the habit of making hungrier and hungrier CPUs. This new technology could be a big step in the right direction - great performance whilst turning the wick down.
 
[citation][nom]GeekApproved[/nom]Intel is going so fast that the latest thing only lasts 3 months now, what a joke. You're going to need 3 different motherboard sockets a year at this rate. Another reason why AMD couldn't care less about the high-end desktop market: it's very small, and they are moving in other directions.[/citation]
Very true. But I do hope they continue to push the envelope for high-end desktops, because it's a blast to upgrade. I left the AMD camp for the first time since my overclocked K6-2 when Core 2 came out. I still rock both AMD and Intel, though: a Phenom II 955 and a Core 2 Duo E6750. I do wonder if Intel will make socket 1155 last longer than the last one. After all, Ivy is supposed to be just a die shrink of Sandy.
 
ZOMG, a competitor is releasing an innovative product. There's no way in hell we too can work on some amazing project. Time to file for bankruptcy.

... seriously guys? Intel announces something and it's the end of AMD?
 
[citation][nom]legacy-za[/nom]I seriously hope nVidia and AMD are paying attention. Imagine this technology on your GFX cards?[/citation]

Won't matter when all of our games are crappy old tech console ports.
 
[citation][nom]mrmotion[/nom]There isn't competition now (Sandy Bridge, anyone?). Yet innovation is still moving us forward.[/citation]

Only because Intel doesn't know exactly what to expect from Bulldozer. If AMD dies or Bulldozer turns out to be a dud then you can bet that Intel is going to drop half its R&D budget and the successor to Ivy Bridge may be all you have for a while.

Part of why AMD keeping a lid on Bulldozer is a good thing is that it keeps Intel guessing. As soon as they feel secure, it will all come to a crashing halt. How do you think the Athlon beat the Pentium when Intel is a massively bigger company? The Athlon reinvigorated Intel's R&D, and that's why we even have Ivy Bridge coming.
 
The fanboy response to this news reminds me of when Intel's Terascale project was demonstrated to the world, and they showed an alleged 80-core, 100-million-transistor CPU that could supposedly overclock to 5 GHz and score 2+ teraflops. Obviously it was lies, exaggerations, etc., and it certainly didn't revolutionize anything. It was good for a few billion spent on research, and a chance for fanboys to wank over it; that's it.
 


I think Intel now stands a very good chance of being really competitive in the cellphone/tablet space, esp. if they get a 16 EU integrated GPU with stacked DDR2 memory on the chip itself. So maybe it's ARM that needs to look out for 'death from above' 😀.
 
That was one seriously impressive demonstration: not only can an Ivy Bridge laptop play back a movie and serve itself a simple webpage, it can play iRacing, which has these minimum requirements:

Windows XP, Windows Vista, or Windows 7
Hyperthreaded Intel CPU, AMD Athlon 64 CPU, or any dual-core CPU
128MB Pixel Shader 2.0 (ATI 9700Pro or nVidia 6600 or better); 256 MB Pixel Shader 3.0 (ATI X1600 or nVidia 6800 GT/GS or better) graphics adapter recommended
1 GB system RAM

/sarcasm


If Ivy Bridge were going to have revolutionary performance, I think they would've done a demo that shows it, much like AMD's demonstration of Llano wowed and amazed by playing a demanding DX11 game on an IGP.
 
[citation][nom]TA152H[/nom]AMD getting mounted by Intel is nothing new, but we should keep it in perspective. Bulldozer is inferior in pure performance, but should do well in performance per die size in highly threaded applications. It's better than where they are now. Also, don't underestimate IBM and their own research. They alone make faster microprocessors than Intel. AMD will have access to this technology. So, while there's no doubt this is an important announcement, it's not the end of the world for AMD. We don't know how easy this will be to make, we don't know what IBM has up its sleeve, and we already are in a situation where AMD can't compete in performance, but still is surviving.[/citation]

Sorry, where are you getting the performance numbers from, exactly? There are still zero benchmarks with regard to Bulldozer, and anything regarding performance is still speculation. 22nm is a big step for PC/server CPUs, but it's been planned. As for the 'AMD is dead' remarks: they're nearly done with 28nm, and apparently the plant can produce even smaller.

http://www.xbitlabs.com/news/other/display/20110422111145_AMD_Expects_Two_28nm_Tape_Outs_This_Quarter.html

[citation][nom]fazers_on_stun[/nom]I think Intel now stands a very good chance of being really competitive in the cellphone/tablet space, esp. if they get a 16 EU integrated GPU with stacked DDR2 memory on the chip itself. So maybe it's ARM that needs to look out for 'death from above' .[/citation]

The issue with these small-node processors is that they're very costly to produce. They do provide lower power consumption, but bear in mind they can't be produced in high enough quantity and at a low enough price to ever make it into a phone or tablet. Basically, x86 is still years behind ARM in terms of power consumption, and it can't currently be scaled appropriately for either market and its respective price range. Right now it's simply not feasible.
 
As impressive as this sounds, it just sounds like Intel found a good way to get to 22nm and below. Reading the title, I thought Intel had found a way for a single transistor to act as three transistors, but really this only allows for smaller, more efficient, or faster chips (hard to say for sure; the explanation was really kindergarten-level and gave no real details). It makes it sound like Ivy Bridge is just Sandy Bridge overclocked.

IIRC, FinFETs (3D tri-gate transistors) are still just one transistor each, built with a 3D tri-gate structure. When transistors shrink to really small dimensions like 22nm, you start getting large (relatively speaking) leakage between the gate and channel (which is what the HKMG technology Intel perfected at 45nm combats), and also source-drain (channel) leakage, which is what the tri-gate structure combats. Without it, a 22nm transistor cannot fully "turn off" at the low gate voltages used, which wastes power and generates unwanted heat.
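
To put rough numbers on the "can't fully turn off" part: subthreshold leakage drops by one decade for every SS millivolts of gate voltage below the threshold, where SS is the subthreshold swing. A quick sketch (the swing and threshold values are my own illustrative assumptions, not Intel's published figures):

```python
# Back-of-the-envelope subthreshold leakage comparison.
# Off-state current scales roughly as 10^(-Vth / SS), where SS is the
# subthreshold swing in mV/decade. All values below are illustrative
# assumptions, not published process numbers.

def off_current(vth_mv: float, ss_mv_per_decade: float) -> float:
    """Relative off-state current, normalized to the current at Vgs = Vth."""
    return 10 ** (-vth_mv / ss_mv_per_decade)

vth = 300.0                          # assumed threshold voltage, mV
planar = off_current(vth, 100.0)     # leaky planar device, ~100 mV/decade
trigate = off_current(vth, 70.0)     # steeper tri-gate device, ~70 mV/decade

print(f"planar leakage:   {planar:.1e}")
print(f"tri-gate leakage: {trigate:.1e}")
print(f"tri-gate leaks ~{planar / trigate:.0f}x less at the same Vth")
```

A steeper swing means the gate has better electrostatic control of the channel, so at the same threshold voltage the transistor leaks exponentially less when "off"; that's the whole point of wrapping the gate around three sides of the fin.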

And for those who think it's a nail in AMD's coffin: think again. AMD has nearly always clocked slower than Intel; they make up for it with optimization, which gives the bonus that when they do figure a die shrink out, they get even more bang.

Assuming you mean chip design by 'optimization', AMD is just now going to a 4-issue core with Bulldozer, which is what Intel 'optimized' with Core 2 back in 2006. Also, this would mean Global Foundries having to come up with some equivalent fab process and design, which is probably not easy, seeing as it took Intel some nine years to accomplish at the necessary mass-production yields, quantity, and economy. Plus I imagine Intel has lots of patents covering the process, and I don't think those would be included in the cross-licensing agreement Intel has with AMD, since GloFo is now a separate company with foreign majority ownership.

GloFo apparently has its hands full with 'gate first' HKMG, which is what IBM pushed down on the consortium and which you can thank for Llano getting pushed out half a year late. From what I've read, GloFo, Samsung, and the other consortium members are going to switch to Intel's 'gate last' method at 22nm. So basically they will be on their first-gen HKMG at 22nm when Intel is on its fourth or maybe fifth generation.

While GloFo can make 22nm SOI transistors sometime in the future without tri-gate tech, they will be at a serious disadvantage if what Intel says is true about the ~40% performance advantage tri-gate gives them compared to their own mature 32nm process.
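
Intel's pitch, roughly, is that the new transistor switches faster at a given voltage, or switches just as fast at a lower voltage. The power side of that trade follows from the classic dynamic-power relation P ≈ C·V²·f. A toy comparison (all numbers are assumptions picked for illustration):

```python
# Toy dynamic-power comparison using P ~ C * V^2 * f.
# The voltages and clock are assumptions picked purely to illustrate
# why running the same frequency at a lower supply voltage saves power.

def dynamic_power(c_rel: float, volts: float, freq_ghz: float) -> float:
    """Relative switching power for a given capacitance, voltage, clock."""
    return c_rel * volts**2 * freq_ghz

old = dynamic_power(1.0, 1.00, 3.0)   # mature 32nm part at 1.0 V (assumed)
new = dynamic_power(1.0, 0.80, 3.0)   # same clock at 0.8 V (assumed)

print(f"power at 0.8 V is {new / old:.0%} of power at 1.0 V")
# -> 64%: a ~36% saving from the voltage drop alone, before counting any
#    capacitance or leakage improvements from the new process.
```

That quadratic voltage term is why a process that holds frequency at a lower voltage is such a big deal for laptops and anything battery-powered.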

 


I see that argument all the time, and it really doesn't make any sense. Intel already has over 80% of the CPU market (and over 90% of the highly lucrative server CPU market, where Intel already charges a hefty premium over the equivalent AMD CPUs). Really, Joe Public doesn't even know AMD exists; all they know is what they see at Best Buy or Walmart, and they couldn't care less about what brand of CPU is inside. So if Intel jacked prices up double or so, they would simply buy a less powerful model at the same price they expected to pay.

Intel might raise prices a modest amount, but I'm sure they have done extensive marketing analysis that tells them where the sweet spot is: the point where they maximize profits by balancing price and volume. Intel already charges $1K for their top desktop CPU, and I doubt they would raise that price much if AMD went belly-up; they don't sell that many as it is, so why kill the remainder of the market?
 
It is really impressive, but I think it will be mainly revolutionary for the mobile market rather than the desktop market. Let's not forget that the number of cores in a CPU matters now, and that low-voltage efficiency has never been a big selling point for gamers. How much will this increase clock speed at higher voltages?

Sandy Bridge offered quite a big performance boost, and it seems every new generation will follow suit, so why wait?

In the mobile market, however, where size and efficiency matter most, these new chips will continue the path we have been on for decades and will open up a new realm of product designs never before conceived of. Forget smartphones and tablets: what comes next? That is what is exciting to me.
 
[citation][nom]fazers_on_stun[/nom]I see that argument all the time, and it really doesn't make any sense. Intel already has over 80% of the CPU market (and over 90% of the highly lucrative server CPU market, where Intel already charges a hefty premium over the equivalent AMD CPUs). Really, Joe Public doesn't even know AMD exists; all they know is what they see at Best Buy or Walmart, and they couldn't care less about what brand of CPU is inside. So if Intel jacked prices up double or so, they would simply buy a less powerful model at the same price they expected to pay. Intel might raise prices a modest amount, but I'm sure they have done extensive marketing analysis that tells them where the sweet spot is: the point where they maximize profits by balancing price and volume. Intel already charges $1K for their top desktop CPU, and I doubt they would raise that price much if AMD went belly-up; they don't sell that many as it is, so why kill the remainder of the market?[/citation]

Coming from a company that had to pay off OEMs so they wouldn't buy/use AMD products? A company that issues 1-2 chipsets per motherboard socket with zero backward compatibility? You're right in the sense that the average consumer is completely uneducated with regard to their choice of hardware, but Intel knew that, and that's why Intel paid off OEMs when they were getting spanked. AMD is still suffering for it. How you have faith that Intel keeps the consumer in mind is absolutely beyond me. But maybe you're just a fan of the Black Eyed Peas? http://www.tomshardware.com/news/Intel-Will.i.am-Creative-Director-Will.i.am-Music-black-eyed-peas-bep,12075.html

AMD isn't going anywhere, not for a while. But if AMD did go belly-up, Intel would almost certainly quit pushing the envelope as hard and fast as they are now, and would certainly charge more for their products.
 
[citation][nom]pelov[/nom]Sorry, where are you getting the performance numbers from, exactly? There are still zero benchmarks with regard to Bulldozer, and anything regarding performance is still speculation. 22nm is a big step for PC/server CPUs, but it's been planned. As for the 'AMD is dead' remarks: they're nearly done with 28nm, and apparently the plant can produce even smaller. http://www.xbitlabs.com/news/other [...] arter.html The issue with these small-node processors is that they're very costly to produce. They do provide lower power consumption, but bear in mind they can't be produced in high enough quantity and at a low enough price to ever make it into a phone or tablet. Basically, x86 is still years behind ARM in terms of power consumption, and it can't currently be scaled appropriately for either market and its respective price range. Right now it's simply not feasible.[/citation]

Actually, there are some numbers out there, but they don't really matter much anyway. The memory performance in those benchmarks was really bad, and is no doubt just a case of pre-release hardware not fully working.

But you can look at the design, and you can listen to what AMD is saying. The design is not going to come close to the single-threaded performance of Sandy Bridge, and not even AMD is saying it will. AMD chose to add a second set of integer units per module. This adds only a small amount of die area but, in applications well-suited to it, a lot of performance.

Will it be faster on integer than Intel's Sandy Bridge using Hyper-Threading, when two threads are running on each "module" (AMD) or core? I can't prove it, and I'm not 100% sure, but based on the execution resources available, there's a very reasonable expectation that Bulldozer should be better in those types of applications.

That's not a criticism of Sandy Bridge. Intel did a magnificent job on it, and it's going to be better at a lot of things than Bulldozer. But by the same token, Bulldozer is made to do one thing very well, and you can be fairly sure it will be very competitive, and I would guess even better, in that niche.

Also, you should expect Bulldozer to clock higher; it was designed with higher clock speeds in mind. Maybe not initially, because it's all new, but once it starts stretching its legs you're going to see high clock speeds. There are a lot of things you can tell from a design, and if you look at the design of this processor, these things seem very, very likely. But nothing is 100% until we actually see it.
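
Just to make the execution-resources argument concrete, here's a toy throughput model. Every scaling factor in it is an assumption pulled out of the air for illustration; nobody has real Bulldozer numbers yet:

```python
# Toy model of two threads sharing one SMT core vs. one Bulldozer-style
# module. The scaling factors are illustrative assumptions, not
# measurements: an SMT core shares one set of integer units between two
# threads, while a module gives each thread its own integer cluster.

SMT_SCALING = 1.25   # assumed: a 2nd SMT thread adds ~25% throughput
CMT_SCALING = 1.80   # assumed: a 2nd module thread adds ~80% throughput

def two_thread_throughput(single_thread_perf: float, scaling: float) -> float:
    """Combined throughput of two threads on one core/module."""
    return single_thread_perf * scaling

smt_core = two_thread_throughput(1.00, SMT_SCALING)    # normalized Intel core
cmt_module = two_thread_throughput(0.80, CMT_SCALING)  # assumed weaker single thread

print(f"SMT core, two threads:   {smt_core:.2f}")
print(f"CMT module, two threads: {cmt_module:.2f}")
# Even granting ~20% lower single-thread performance, the module wins on
# combined integer throughput under these assumptions.
```

Change the assumed scaling factors and the conclusion flips, which is exactly why nothing is 100% until we see real silicon.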
 
[citation][nom]TA152H[/nom]Actually, there are some numbers out there, but they don't really matter much anyway. The memory performance in those benchmarks was really bad, and is no doubt just an example of pre-release hardware not being fully working.But, you can look at the design, and you can hear what AMD is saying. The design is not going to come close to the single-threaded performance of Sandy Bridge, and not even AMD is saying it will. AMD chose to add a second set of integer units. This only added a small amount of size, but in applications that are well-suited for it, a lot of performance. Will it be faster on integer than Intel's Sandy Bridge using hyperthreading when two threads are running on each "module"(AMD) or core? I can't prove it, and I'm not 100%, but based on the execution resources available, there's a very reasonable expectation that Bulldozer should be better on these types of applications. That's not a criticism of Sandy Bridge. Intel did a magnificent job on it. It's going to be better at a lot of things than Bulldozer. But, by the same token, Bulldozer is made to do one thing very well, and you can be fairly sure Bulldozer will be very competitive, and I would guess even better, at that niche. Also, you should expect Bulldozer to clock higher. It was designed with higher clock speeds in mind. Maybe not initially, because it's all new, but when it starts stretching its legs, you're going to see high clock speeds. There are a lot of things one can tell from designs, and if you look at the design of this processor, these things seem very, very likely. But, nothing is 100% until we actually see it.[/citation]

That was an engineering sample of a server CPU, and it was clocked at 1.8 GHz... AMD doesn't ever release benchmarks prior to a big launch, so those looking for benchmarks beforehand are out of luck.

Shall we start looking back at other engineering samples and how they fared with respect to the official chips? Or do you get my point?

Also, it's no secret that AMD prefers the 'more cores' approach. And Bulldozer was, from the start, focused more on the server than the desktop. The 'true' desktop Bulldozer is the Trinity APU, and that's early 2012.
 


IIRC, those are discrete GPU chips, and they're most likely not going to be on the market until next year. AMD's CPUs are strictly SOI, not the strained silicon that TSMC and GloFo use on the half-nodes like 40nm and 28nm (and that Intel uses on every node).

[citation][nom]pelov[/nom]The issue with these small-node processors is that they're very costly to produce. They do provide lower power consumption, but bear in mind they can't be produced in high enough quantity and at a low enough price to ever make it into a phone or tablet. Basically, x86 is still years behind ARM in terms of power consumption, and it can't currently be scaled appropriately for either market and its respective price range. Right now it's simply not feasible.[/citation]

Too expensive? Where did you come up with that?? The tri-gate FinFET is a process design, to be used on all 22nm production, so economies of scale kick in (especially in Intel's case, as they have 80% of the x86 CPU market). Plus, given the tiny die size an Atom SoC would use, Intel can pack a huge number onto one 300mm wafer, and even more when Intel switches to 450mm wafer production. So the R&D and fab capex will be spread across probably a billion or so chips by the time Intel moves on to something else.
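
For a sense of scale, here's a rough dies-per-wafer estimate using the standard area-minus-edge-loss approximation. The die size is my own assumption for a hypothetical shrunk Atom SoC, purely for illustration:

```python
# Rough dies-per-wafer estimate: usable wafer area divided by die area,
# minus the standard correction for partial dies lost at the wafer edge.
# The die area below is an assumed figure for illustration only.
import math

def dies_per_wafer(wafer_diameter_mm: float, die_area_mm2: float) -> int:
    wafer_area = math.pi * (wafer_diameter_mm / 2) ** 2
    edge_loss = math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2)
    return int(wafer_area / die_area_mm2 - edge_loss)

atom_die = 40.0  # assumed die area for a small 22nm Atom SoC, in mm^2
print(dies_per_wafer(300, atom_die))  # -> ~1600 candidate dies per 300mm wafer
print(dies_per_wafer(450, atom_die))  # -> ~3800 on a 450mm wafer
```

Even before yield, that's thousands of potential chips per wafer, which is why small dies amortize a fab's fixed costs so quickly.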

IMO, Intel had been fairly ambivalent about the ultra-mobile market up until a year or so ago, not wanting to cut into their low-end laptop sales. Now they seem quite serious about it, with Cedar Trail out by the end of the year, which will probably be very competitive with ARM. I'd say it's quite likely Intel will take a large portion of the cellphone/tablet CPU market in the next few years. Intel is leveraging their world leadership in process technology; just imagine the performance and power management of Cedar Trail on 22nm instead of 32nm.
 
So I'm not exactly a computer genius, but the 3D transistor sounds reminiscent of Hyper-Threading (no, not the same thing, I know that much). But doesn't all the information going through the transistors need to be parallel for an application to see a true performance increase?
 