AMD and Intel General Discussion (not for getting help)

I was just trying to bring some reality to the forums is all, heheh.
I'm sure we'll see some other form of LRB in the future, for sure in SoCs, maybe as a discrete part, but it has to show better than the first one did.
 
I'm disappointed too, but I did my homework.
I saw it slipping more and more.

Maybe their next iteration will be a home run, but static, low-quality RT demos had better not be in the picture; show us some decent raster shots with good fps instead.
 

Ray tracing has been a standard technique for offline rendering since at least the 1980s.
 
Gaming isn't everything. People seem to think that LRB's future depends on gaming (heck, they think everything's future depends on gaming). I wouldn't buy a highly parallel x86 chip for gaming. I'd buy it for productivity software.
 
Some people say a hybrid RT/raster engine is the better approach while others say it isn't. It would have advantages, though. Rasterisation can produce effects similar to ray tracing at a fraction of the computational cost, especially with tessellation. Its only real downfall is lighting and how light interacts with different materials.
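For a sense of where ray tracing's per-pixel cost comes from, here is a minimal ray/sphere intersection test in Python (my own illustrative sketch; the function name and scene are made up, not from any real engine). A ray tracer evaluates something like this for every pixel, and again for every shadow or reflection bounce, while a rasteriser amortises scene traversal across the whole frame:

import math

def ray_sphere_hit(origin, direction, center, radius):
    """Return distance t to the nearest ray/sphere hit, or None on a miss.

    origin, direction, center are (x, y, z) tuples; direction is unit length.
    Solves |origin + t*direction - center|^2 = radius^2, a quadratic in t.
    """
    oc = tuple(o - c for o, c in zip(origin, center))
    b = 2.0 * sum(d * o for d, o in zip(direction, oc))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - 4.0 * c              # quadratic 'a' is 1 for a unit direction
    if disc < 0.0:
        return None                     # ray misses the sphere
    t = (-b - math.sqrt(disc)) / 2.0    # nearer of the two roots
    return t if t >= 0.0 else None

# One primary ray per pixel, plus secondary rays per bounce, is the
# computational gap described above.
print(ray_sphere_hit((0, 0, 0), (0, 0, 1), (0, 0, 5), 1.0))  # -> 4.0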
 
The Federal Trade Commission is investigating Intel for breaking antitrust law, with three of the four commissioners prepared to vote to file a complaint, according to sources who asked not to be named because they were not authorized to speak about the matter. A fourth recused himself, the sources said.

Intel's rivals had criticized pricing strategies like retroactive discounts, end-user discounts and so-called bid buckets.

Intel's competitors in the $280 billion chip market have said that bid buckets, discounts offered to Intel customers facing tough competition from rivals using AMD processors, and end-user discounts, given to customers in hopes of making future sales, are used to push chip prices below the price of production. Retroactive discounts are granted once a company's purchases hit a certain volume.
http://uk.reuters.com/article/idUKN0821666620091208?pageNumber=2&virtualBrandChannel=0&sp=true
 
And we wonder why AMD never made money?
Say I have a product for sale at $100. I go to my buyer and say: I'll sell you my $100 item for $95 if you don't buy from the competition, then rebate you back $2, so the competition can't sell you any items, because at, say, $90 there's no profit for them, only losses.
So, for a $7 cut in profit, I control the market, which in volume I more than make up for, since there's no real competition; if there were, I might have to sell for $92.
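To put numbers on that (a toy calculation using the figures from the post above; nothing here comes from the Reuters article):

# Figures from the post above, illustration only.
list_price = 100        # normal asking price
conditional_price = 95  # offered only if the buyer skips the competition
rebate = 2              # paid back afterwards, still conditional
open_market_price = 92  # roughly what real competition would force

effective_price = conditional_price - rebate          # 93
cost_of_exclusivity = list_price - effective_price    # 7 per unit
cost_of_price_war = list_price - open_market_price    # 8 per unit

# The conditional discount locks the competitor out entirely, and in this
# example it even costs less per unit than fighting an open price war.
print(effective_price, cost_of_exclusivity, cost_of_price_war)

So the exclusivity deal costs $7 a unit against $8 for open competition, and it buys 100% of the business.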
 
WOAH!!!! WOW!!! OMG!! GOOD GOD!............ Intel.... Those b'stards!

Just another reason I almost always choose AMD!!!!!!!!!!!!!!!!!!!!!!!!!!!!
 
Pressure Builds on Gate First High-k

Concerns about threshold voltage shifts and other performance problems with the gate-first approach to high-k/metal gate creation may cause GlobalFoundries (Sunnyvale, Calif.) and other members of the IBM-led Fishkill Alliance to shift to a gate-last technique, sources said at the International Electron Devices Meeting (IEDM), going on this week in Baltimore.

"My understanding is that the subsequent thermal steps are causing problems with the gate-first approach," said a senior vice president at Qualcomm Corp. (San Diego) who was attending IEDM. "GlobalFoundries seeks a gate-last approach, and if necessary they could drop in a gate-last module independent of IBM," the Qualcomm executive said.

Asked about the potential switch, a senior IBM technology manager said continuance of the gate-first approach after the current 32/28 nm generation is under review. Any shift to a gate-last approach, if it occurs, would come at the 22 nm node or later. "Both of the gate formation approaches have their problems, and there is no doubt that the gate-first approach is significantly simpler," he said, asking not to be identified. "For IBM, gate first will work well at the 32 nm generation, and I would not underestimate the power of incumbency, which could take it to the next (22 nm) generation. After that, we'll have to see what happens."

At IEDM, a knowledgeable source said GlobalFoundries and nearly all the other members of the Fishkill Alliance will force a shift by IBM to the gate-last approach at the 22 nm node. GlobalFoundries is mulling a switch even earlier, at the 28 nm node coming to market in about a year, he added.
...
A Toshiba technologist assigned to the Fishkill Alliance said the gate-last approach delivers a lower threshold voltage and higher mobilities, particularly on the PMOS transistor. Channel strain is induced when the dummy gates are removed, providing another significant increase in performance.

At the 45 nm node, when Intel introduced its gate-last process flow, the argument was that the gate-last approach required more restricted design rules (RDRs), the Toshiba manager said. Intel was able to restrict the layout of its poly gate lines to one dimension because of its in-house coordination of the process and the design rules. For foundries, the argument went, too many RDRs would inhibit fabless companies from porting chips from a SiON/poly process to a high-k process.

Intel developed a gate-last approach, announcing it in December 2006 for its 45 nm technology. In that iteration, the hafnium dielectric was deposited by atomic layer deposition (ALD), and a sacrificial polysilicon gate was created. After the high-temperature source/drain (S-D) and silicide annealing cycles, the dummy gate was removed and the metal gate electrodes were deposited last.

In the second-generation 32 nm gate-last approach, Intel deposits both the dielectric and the metal electrodes last, further avoiding thermal stress to the gate stack. The Intel approach requires careful control of the etching and CMP steps, among others, but delivers a better work function on the PMOS device in particular. "We have it working, as demonstrated by our 22 nm SRAM announcement a few months ago," said Mark Bohr, a senior fellow at Intel.

Although the gate-first approach more closely resembled the process flow of the pre-high-k era, problems have cropped up, Hoffmann said in a Sunday short course presentation. At various technology conferences this year, researchers have discussed a rolloff of the flatband voltage, shifts in the PMOS threshold voltage, and interface layer regrowth. "When the metal sees a high thermal budget, it has an impact on the work function," Hoffmann said. Importantly, the problems created "fundamental issues for mobility, probably due to remote Coulomb scattering. It takes a fair amount of work to improve the quality of the layers to reduce these changes."
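(Aside: the reason a work-function change matters so much is visible in the standard long-channel MOS threshold-voltage relation; this is a textbook expression, not something from the article:

V_T = \Phi_{ms} + 2\phi_F + \frac{Q_{dep}}{C_{ox}} - \frac{Q_{ox}}{C_{ox}}

where \Phi_{ms} is the metal/semiconductor work-function difference, \phi_F the bulk Fermi potential, Q_{dep} the depletion charge and C_{ox} the gate-dielectric capacitance per unit area. Any thermally induced shift in the gate metal's work function moves \Phi_{ms}, and hence V_T, one for one.)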

The gate-last approach gains a further performance boost from strain induced when the dummy gate is removed, he added.

Another source said the gate-first approach has yield issues. The capping layer is only ~5 Å. Defects are created from debris generated from the capping layers. Those particles impact yields "and can be the difference between profit and loss for a foundry," he said.

 
RDRs are a fact of life, and always have been; designing within them is paramount, or you take certain risks.
IBM may be forced to go gate-last, but time will tell, since GF has to pick one or the other and sees a path with either.
 
If you carefully read the IEDM article, it seems GF/AMD's 32nm is not going to be as good as it might have been had they gone with the Intel-pioneered gate-last approach. Besides avoiding trouble with the hafnium during high-temp processing, and the consequent yield problems, there's a strain advantage to removing the dummy gate prior to depositing the high-k gate.
 
"The gate-last approach gains a further performance boost from strain induced when the dummy gate is removed, he added. " From your link.
I'm thinking it's much ado about nothing; it's as if you design your chip for one fab and then complain you can't use another fab's materials. That's just being stupid, and it overstates Intel's advantage: Intel's products were already designed for its process, and GF's partners' products will be designed for GF's, so I'm not sure what they're trying to say here. And guess what? If they go to TSMC, their designs have to fit that fab as well, duh!
Other than that, I really don't see any great problems here.
What's not being talked about here are the problems Intel has run into, which could be similar at some point in development, but Intel's just not saying.
When it comes out we'll know. Any time you move to a new process, even without adding different doping techniques, you can have yield issues; it's just part of the business. Much as we've seen both Intel and GF doing quite well at 45nm, whereas TSMC has struggled with its straightforward shrink to 40nm.

 
Nobody said there were "great problems" - just a bunch of inconvenient ones, as outlined in the article. And yes, I'm sure Intel investigated both approaches before deciding gate-last was the better solution, particularly at 22nm and below, where Intel has already demoed working SRAM chips, as opposed to IBM's cell or two.

Reason I mentioned this is because of all the fanfare about how GF & AMD were going to realize at 32nm the high-k gains Intel got at 45nm... That may not necessarily be so. I guess GF is finding out that high-k ain't so easy to do, much like Intel with GPUs 😀.

@Badtrip - yes I expect we'll see lots of tri-core and dual-core blue light specials with GF's 32nm, at least the first few steppings 😀.
 