AMD's 65nm is perfect!


K8MAN

I know there are always time and money limitations, but sometimes they are taken too seriously. When developers are too conscious of deadlines, rushed games like the Battlefield series arise, where the game is buggy and the patches only make it worse. Besides, in the case of SSE optimizations, all a developer needs to do is tick a checkbox before pressing compile. I can understand a game developer being hesitant about the latest instruction set, like SSE3, breaking his code, however unlikely that is, but older instruction sets like SSE or even SSE2 have long been ingrained in compilers.

"In discussions with game developers over the past few years, I've learned that they tend to be pretty wary of automatic optimizations generated by simple use of compiler switches. Sometimes a large software build will break when certain automatic optimizations are turned on. Some of this is likely institutional memory, as compilers have improved over the years."

There really isn't any reason why developers shouldn't at least activate the original SSE instruction set, which has been around since 1999. Even AMD processors would benefit from that.
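For what it's worth, flipping that checkbox amounts to a couple of compiler switches. A rough sketch (hypothetical file names, and the exact spelling of the switches depends on the compiler version):

    gcc -O2 -msse -mfpmath=sse game.c -o game    (GCC: generate SSE code for scalar floating point)
    cl /O2 /arch:SSE game.c                      (Visual C++: allow SSE code generation)

Anything that meets the system requirements of a current game should be able to run the result.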

Sorry, I was just generalizing, but endyen summed it up very well.
 

mpjesse

Yeah... a smaller process doesn't necessarily translate to less heat. Prescott was a big deal to Intel though: they saved a ton of money on silicon wafers.

A lot of people forget the monetary benefits of a smaller process. Those wafers cost a fortune... the more chips they can fit on one, the less they have to spend on silicon.

Of course, the equipment change to 65nm costs a fortune too. The latest number I heard was $4 billion to switch all of Intel's logic fabs to 65nm. Ouch!

-mpjesse
 

slvr_phoenix

Note of warning: This reply is to a number of people to conserve space and save time.

That is funny. A Prescott on any process is still a big leak. Well, maybe not on FD-SOI, but even then, at 65nm it would still leak badly. Just too many interconnects, with too many dissimilar charges next to each other.
I don't think I ever argued that. Scotty itself was such a bad design for other reasons as well; a Scotty on any process is still a Scotty. :lol: But my point is that in switching processes, Intel has to redesign the core. Maybe they'll take that opportunity to fix a few things.

I have also heard that AMD has an automation program that allows them to transition seamlessly between 90 and 65nm. This also suggests that AMD is already 65nm capable. Since they have offered the tech to SMC, or one of the other chip giants, I'd guess it must work.
Just because they have an automation program that can 'transition seamlessly' doesn't mean that when they do transition, it will be seamless. And it also doesn't mean that their cores are redesigned for 65nm yet. Time will tell. That's all I'm saying on that.

I remember when Prescott was supposed to be king shit, and Spud at the time had listed an entire page of improvements off the top of his head (he had really high hopes). Then a few days before release the pipeline-increase info was leaked, and then the final product arrived, only for reviewers to find that it was actually slower overall and especially bad in games. :lol:
I remember that too. Hell, we all thought that Scotty would do better than it did. I mean there were tons of improvements. Intel just unfortunately squashed that advantage with tons of bad design. :? It was a sad day indeed. Scotty could have been soooooo much better if Intel had just stuck to fixing the problems with Northwood instead of screwing around with ... everything.

slvr_phoenix may have blamed the 90nm process, but the failure of the P4 was just due to the Prescott architecture
Umm ... I didn't blame the process. I'm just using the process as a marker for the time period. Though the process itself does have its problems, it's the bloody crap redesign done to the Scotty core that made it really suck. Never let it be said otherwise. :mrgreen:

specifically the pipeline increase
Actually, that had a pretty minor effect. I'm not even sure if I'd call that a bad decision. It was really what Intel did to Scotty's cache latency and misprediction handling that screwed Scotty badly. The pipeline increase didn't help, but it's far from the major contributor to Scotty's suckiness.

There really isn't anything wrong with the 90nm process when compared to the 130nm or any other Intel process.
That's actually not true. 90nm was the point when leakage became a serious problem. It was much worse than predicted. AMD was smart by starting to implement SoI. Intel ... not so smart. So there really was something wrong with the 90nm process itself. That something is leakage. And that something will get worse and worse as each process gets smaller. This however is balanced by adding new things to the process such as strained silicon, low/high-K dielectrics, SoI, internal carbon nanotube heat channels, etc. .13 was the magic number. Everything is downhill from there.

It seems that game developers, in a rush to get the game out of the door, usually fail to optimize the code for the latest instruction sets.
It has nothing to do with being in a rush to get code out the door. It has everything to do with compatibility and debugging. You don't want to alienate a giant market segment by requiring something like SSE3. So you either don't compile with it at all, or you produce multiple branches that all theoretically do the same thing, but with different feature sets. And because there are different branches, you then have to test them all on different hardware, and when a bug pops up, you first have to track down which branch it's even in. It's a royal pain in the butt. Which is why most people won't optimize nearly as much as they could. It just costs too much to be worth it.
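Just to make the branching headache concrete, here's a bare-bones sketch of the kind of runtime dispatch you end up writing (my own made-up illustration, not anyone's actual engine code; the function names are invented and the CPU check is stubbed out):

#include <stdio.h>

/* Hypothetical per-instruction-set paths. In a real game each branch would be
   compiled and tested separately, which is exactly the maintenance burden I'm
   talking about. Both do the same plain C work here just to keep it runnable. */
static void blend_fpu(float *dst, const float *src, int n)
{
    for (int i = 0; i < n; i++)
        dst[i] += src[i];          /* generic path, safe on anything */
}

static void blend_sse(float *dst, const float *src, int n)
{
    for (int i = 0; i < n; i++)
        dst[i] += src[i];          /* a real SSE path would use intrinsics here */
}

static int cpu_has_sse(void)
{
    /* Stand-in: a real build would do a CPUID check here, e.g.
       __builtin_cpu_supports("sse") on GCC or __cpuid() from <intrin.h> on MSVC. */
    return 1;
}

int main(void)
{
    float a[4] = {1, 2, 3, 4};
    float b[4] = {5, 6, 7, 8};

    /* Pick a code path at runtime based on what the CPU reports. */
    if (cpu_has_sse())
        blend_sse(a, b, 4);
    else
        blend_fpu(a, b, 4);

    printf("%.0f %.0f %.0f %.0f\n", a[0], a[1], a[2], a[3]);
    return 0;
}

Now multiply that by every instruction set you want to support and every piece of hardware you have to test it on, and the "royal pain" part should be obvious.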

Now people may feel that optimizing code for the Pentium IV would penalize AMD, but that may not be the case.
Actually, it is. If you're talking about just optimizing for instruction sets, then not so much. But if you're talking about optimizing for actual architectural differences in a P4, then it sure will penalize AMD. One of the most notorious examples is the bitshift optimization. It is (was?) a commonly used cheat to do certain multiplications and divisions with bitshifts, because on a P3 a bitshift operation was blindingly fast compared to a multiplication. But Intel made bitshifting damn slow in the P4. Suddenly all of these 'optimizations' that were killer on a P3 or Athlon were slow as sin on a P4. :eek: So fixing that for the P4 meant ditching the very optimizations that made the P3 and Athlon code fast.
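For anyone who hasn't seen the trick, here's a tiny made-up example of the shift-for-multiply cheat (arbitrary numbers, nothing from any real engine):

#include <stdio.h>

int main(void)
{
    unsigned int x = 320;

    /* Multiplying by a power of two can be done as a left shift.
       On a P3 or Athlon the shift was much cheaper than a multiply;
       the P4 made shifts comparatively slow, so this 'optimization'
       could end up hurting, as described above. */
    unsigned int by_mul   = x * 8u;     /* plain multiply          */
    unsigned int by_shift = x << 3;     /* same value via bitshift */

    printf("%u %u\n", by_mul, by_shift);   /* prints: 2560 2560 */
    return 0;
}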

And that's not even counting the simple optimizations you get by reorganizing your code with a profiler to keep a CPU's execution units busy, which can really speed up a program. But the differences between the P4 and, well, anything else are so dramatic that optimizing in this manner for a P4 makes the code slower on everything else. Whereas AMD, and even VIA, have kept their CPUs so much like the P3 that optimizing this way for those chips works out quite well for all chips ... except the P4.

And again, the only way to get these optimizations in for everyone is to branch the code and create a maintenance nightmare. Which is why it's typically just not done.

So sorry, ltcommander_data, but you really just don't know what you're talking about here.
 

ltcommander_data

ltcommander_data wrote:
specifically the pipeline increase
Actually, that had a pretty minor effect. I'm not even sure if I'd call that a bad decision. It was really what Intel did to Scotty's cache latency and misprediction handling that screwed Scotty badly. The pipeline increase didn't help, but it's far from the major contributor to Scotty's suckiness.
Well I was mainly referring to Prescott's heat and scaling problems being more due to its architecture than the 90nm process.

So there really was something wrong with the 90nm process itself. That something is leakage. And that something will get worse and worse as each process gets smaller. This however is balanced by adding new things to the process such as strained silicon, low/high-K dielectrics, SoI, internal carbon nanotube heat channels, etc. .13 was the magic number. Everything is downhill from there.

I'm aware that leakage increases with process shrinks, but as you mentioned, I'm comparing on balance. As long as the processor architecture can take advantage of the features in the 90nm process that reduce leakage, it isn't a disaster compared to 130nm. After all, unlike Prescott, which uses strained silicon to increase transistor performance at the expense of leakage, Dothan uses strained silicon to maintain transistor performance while decreasing leakage.

If you're talking about just optimizing for instruction sets, then not so much.

What I'm talking about is instruction sets. As you mentioned, implementing SSE3 support is obviously a waste of time, since the Pentium IV has supported it for less than two years, AMD for less than a year, and the Pentium M won't support it until Yonah is released on New Year's Day.

However, I'm mainly just referring to the original SSE. Many games still use only FPU code even though even AMD wants the industry to adopt SSE. It's been around since 1999, so support is not a concern: every processor that meets the system requirements of current games has it. If all processors support SSE, there isn't a need to create the multiple branches you mentioned for multiple instruction sets. AMD processors have an efficient SSE implementation, and Intel processors seem to handle SSE instructions better than FPU instructions, so incorporating SSE in addition to the FPU seems to offer free performance benefits to everyone.
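Just to show what that looks like, here's a minimal hand-written SSE sketch (my own illustration; in practice a developer would more likely just let the compiler generate SSE with a switch, as discussed above):

#include <stdio.h>
#include <xmmintrin.h>   /* the original SSE intrinsics, around since 1999 */

int main(void)
{
    float a[4] = {1.0f, 2.0f, 3.0f, 4.0f};
    float b[4] = {0.5f, 0.5f, 0.5f, 0.5f};
    float r[4];

    /* One packed SSE add handles four floats at once, where x87 FPU code
       would work through them one at a time. */
    __m128 va = _mm_loadu_ps(a);
    __m128 vb = _mm_loadu_ps(b);
    _mm_storeu_ps(r, _mm_add_ps(va, vb));

    printf("%.1f %.1f %.1f %.1f\n", r[0], r[1], r[2], r[3]);   /* 1.5 2.5 3.5 4.5 */
    return 0;
}

Built with -msse (or /arch:SSE), this runs on any SSE-capable chip, Athlon XP and Athlon 64 included, which is the "free performance for everyone" point.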
 

Redspade

I think AMD is going in the right direction. Intel thinks that going to lower wattage and adding a ton more cores will help them sell more CPUs, but I still think AMD's new CPUs will dominate Intel's new line-up. I think AMD is letting Intel release its news and get going on all its new chips before they bushwhack them again with their new CPUs. All I have to say is FX-60 and Athlon X2 5000+.

More cores is the way to go, for sure; more can simply be done at once. There is only so much you can do with a single core, going forward anyway. The more cores the better. Intel is choosing the right track this time, and AMD will have to follow suit.
 

Atolsammeek

Well, two-CPU computers have been around for years, just not on the same die. I'm not saying two cores will be bad, but I would suggest people wait and see what happens, like with operating systems going from 16-bit to 32-bit. Now they're changing over from 32-bit to 64-bit.

1. This will save us money on buying CPUs.
2. You'll have a faster CPU when dual-core and quad-core CPUs are used more.
3. They will be cheaper.
 

sulphurious

AMD designed the A64 for multi-core

This is absolutely true. Why do you think AMD pushed for the 940/939 sockets? Mostly for the on-die memory controller, but they could already do that with the 754s.

I really wish I had the link, but I once read an article on the (at the time) new A64 architecture, and the authors were intrigued by a "hole" they saw, which they said might be used in the future for more physical cores.

Also, I think it's worth mentioning that neither Intel nor AMD has actually hit a hard clock speed ceiling. There is a ceiling right now because of the amount of leakage and heat both companies are facing, but I may be wrong.

In the old days, CPUs never had heatsinks; they never needed them. Once clock speeds got bumped up, that's when heat became an issue. Die shrinks always help, but leakage gets worse every time. Back then, die shrinks were able to keep up with heat. Now it's a bit different, because each shrink removes a smaller absolute amount than it used to: 180nm to 130nm and 90nm to 65nm are the same percentage shrink, but very different actual shrinks (see the numbers at the end of this post). That may be why heat was able to catch up.

I think that as these companies get heat and power consumption under control, they'll find the clock ceiling jumps once again. I believe that by the 22nm process (three shrinks after 65nm), clock speeds will be at or around 6 GHz.
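To put rough numbers on the shrink comparison above: 130/180 ≈ 0.72 and 65/90 ≈ 0.72, so each step is roughly the same ~28% linear reduction, but the absolute step falls from 50nm to 25nm.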
 

julius

AMD could have kept pushing clock speeds up if it wanted to; it just saw that Intel couldn't do anything with Prescott and instead designed chips that ran cooler on its very good 90nm process, which made Intel look even worse. A64 Venice and San Diego cores actually run cool, even when you overclock them. These cores just don't draw a lot of amps like the Prescotts do, which is why you see 7GHz overclocks on Prescotts under extreme cooling, but you can't really push the A64 that far.

Socket M2 is not at all about DDR2; that's basically a smokescreen to shut up people like porkster who bitch about how AMD doesn't use the latest tech. Socket M2 is going to have a higher TDP, as an earlier AnandTech article stated, and that's where Revision F is really going to show its capabilities. I also heard on AMDZone that the tech they're going to use for 65nm will first be used at 90nm, and now it also makes perfect sense why Fab 36 isn't starting with 65nm: it will first practice with a more mature process.
 

wolverinero79

Intel is a fab powerhouse; unfortunately this is a problem with declining CPU sales, both from the market and from AMD taking a chunk. I think Intel should sell manufacturing capacity to other companies, since they can supply more than their own chips.

Why would they do that? If you haven't been reading the news, I'll sum it up: Intel is currently fab-constrained, especially in the area of chipsets. :eek: They have had to tell customers, "I'm sorry, but that's all the chips we have; there aren't any more." A chip Intel can make is a chip Intel can sell, and their profit margins are definitely trending up. I doubt they could make nearly as much on another company's chips, and without any excess capacity to sell, this is pretty much a non-issue.