I'm not sure why you're all picking on ltcommander_data. He makes some fair points. I think you folks are just trying to pick a fight with him or something, because so far I haven't seen anything that has deserved the treatment he's gotten.
1) For example, he wonders, and quite fairly I might add, if the on-die memory controller adds latency to things like onboard graphics. The reasoning is rather obvious: the ODMC adds extra steps into the process.
NB: PCI DMA -> NB -> MEM -> NB -> PCI DMA
OD: PCI DMA -> NB -> CPU -> MEM -> CPU -> NB -> PCI DMA
The question is not whether the ODMC adds more steps, because it clearly does. The question is whether these extra steps actually cost anything in terms of latency. Given the PCI bus speed, I doubt it will matter there for DMA access. For faster busses however (such as graphics) it <i>may</i>. Then again, it may not. Some tests, something like the sketch below, would be nice to prove this one way or the other.
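Here's a rough sketch (my own, not from any benchmark suite) of the kind of test I mean: a plain C pointer-chase that walks a buffer much bigger than the caches in a random order, so every load depends on the previous one and pays the full trip to memory. Run it on a northbridge-MC board and an ODMC board with the same RAM and you'd get a crude apples-to-apples latency number. The buffer size and prefetch-defeating shuffle are just placeholder choices.
<pre>
/* Crude pointer-chasing latency test (my own sketch).  Walks a
 * single random cycle through a buffer far larger than the caches,
 * so each load depends on the previous one. */
#include &lt;stdio.h&gt;
#include &lt;stdlib.h&gt;
#include &lt;time.h&gt;

#define NODES (16 * 1024 * 1024)   /* adjust to comfortably exceed cache */

static unsigned long long rng_state = 88172645463325252ULL;

/* small xorshift so we aren't limited by a 15-bit RAND_MAX */
static size_t rng_below(size_t bound)
{
    rng_state ^= rng_state << 13;
    rng_state ^= rng_state >> 7;
    rng_state ^= rng_state << 17;
    return (size_t)(rng_state % bound);
}

int main(void)
{
    size_t *next = malloc(NODES * sizeof *next);
    size_t i, j, tmp, idx;

    if (!next)
        return 1;

    /* Sattolo's shuffle: always yields one big cycle, so the chase
     * can't fall into a short loop that fits in cache. */
    for (i = 0; i < NODES; i++)
        next[i] = i;
    for (i = NODES - 1; i > 0; i--) {
        j = rng_below(i);            /* note: below i, not i + 1 */
        tmp = next[i]; next[i] = next[j]; next[j] = tmp;
    }

    clock_t start = clock();
    idx = 0;
    for (i = 0; i < NODES; i++)
        idx = next[idx];             /* serially dependent loads */
    clock_t end = clock();

    double secs = (double)(end - start) / CLOCKS_PER_SEC;
    printf("avg load-to-load latency: %.1f ns (idx=%lu)\n",
           secs * 1e9 / NODES, (unsigned long)idx);
    free(next);
    return 0;
}
</pre>
Printing idx at the end just keeps the compiler from optimizing the chase away.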
He hasn't said that AMD suxors. He hasn't said that Intel rules. He's merely observed the extra steps and wonders what impact they may actually have. As we've seen with how early AMD HTT tests affected AGP, sometimes these little extra steps that seem harmless aren't so insignificant after all. Even the two versions of PCI busses running on the same system have caused some weird latency problems. Not that these kinks can't be worked out in the end, but they're design considerations that should be looked at closely.
Though, in truth, this is all rather a moot point IMHO, since people with onboard graphics aren't exactly concerned with performance anyway. Still, it's an interesting technical query.
2) As for his talk about memory bandwidth and latency issues, while I don't argue that ODMC has lower latency and thus is better, I think it's hard to argue that Intel will actually suffer if what he says is true about Intel moving to a quad-channel architecture. This would not only dramatically increase the bandwidth available for the extra cores, but also decrease the effective memory latency in the same way that dual-channel memory did; rough math below. Sure it'll cost. Sure, it'll be a pain to implement on a motherboard. (Which will in turn cost yet more.) But Intel is hardly known for being cheap. **LOL**
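For rough numbers (my own back-of-envelope math, not figures for any specific Intel part), peak bandwidth is just channels x transfer rate x bus width, so going from two channels to four doubles the ceiling. DDR2-667 here is only an example data rate:
<pre>
/* Back-of-envelope peak bandwidth: channels x transfers/s x bytes per
 * transfer.  DDR2-667 is an example rate, not a claim about what Intel
 * would actually ship. */
#include &lt;stdio.h&gt;

int main(void)
{
    const double transfers_per_sec = 667e6;  /* DDR2-667: 667 MT/s per channel */
    const double bytes_per_transfer = 8.0;   /* 64-bit channel = 8 bytes */
    const int channels[] = { 1, 2, 4 };

    for (int i = 0; i < 3; i++) {
        double gb_per_sec = channels[i] * transfers_per_sec
                            * bytes_per_transfer / 1e9;
        printf("%d channel(s): %4.1f GB/s peak\n", channels[i], gb_per_sec);
    }
    return 0;
}
</pre>
That works out to roughly 5.3, 10.7, and 21.3 GB/s peak; sustained numbers will obviously be lower, but the scaling is the point.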
Anywho, aside from the possibly unfair ridicule of ltcommander_data...
Personally, I think Xeon's biggest problem in competition will be that it'll still probably use a slower FSB than an equal P4, and thus be crushed by Opteron there ... same as always really. AMD concentrates on making their server CPUs their best and their desktop CPUs their second best. Intel concentrates on making their server CPUs their second best and their desktop CPUs their best. Intel can't compete soundly against AMD in the server market until they change that one simple strategy to match AMD's.
I also think the lower latency from the ODMC suits AMD's core well because their prefetch isn't stunning. Intel's prefetch, being fairly good, means that Intel would gain less from an ODMC (see the toy example below). That doesn't mean Intel wouldn't benefit, but so long as Intel sticks to the Netburst architecture, the gain may just not be worth the resources to implement. Either way, Intel certainly won't gain as much as AMD did when (if) they add an ODMC to Netburst, so it's not really fair to say that this is holding Intel back all that much.
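To illustrate what I mean by prefetch hiding latency, here's a toy sketch using GCC's __builtin_prefetch (my own example, not anything from either company's docs). The idea is simply to request data some distance ahead of where you're reading, so the memory trip overlaps with useful work. The better the hardware prefetcher already does this on its own, the less a lower-latency memory path buys you.
<pre>
/* Toy software-prefetch example (GCC builtin).  Requests data a fixed
 * distance ahead so the memory access overlaps with the summing work.
 * The prefetch distance of 16 is an arbitrary placeholder. */
#include &lt;stddef.h&gt;

double sum_with_prefetch(const double *data, size_t n)
{
    double sum = 0.0;
    for (size_t i = 0; i < n; i++) {
        if (i + 16 < n)
            __builtin_prefetch(&data[i + 16], 0, 0); /* read, low temporal locality */
        sum += data[i];
    }
    return sum;
}
</pre>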
I think it's a shame that Intel's replacement of Netburst isn't going better. Perhaps they should have just stuck with the P3 all along, but even then, Netburst was an interesting attempt. It will likely fail in the long run, but even so, I have my doubts that the failure is in Netburst itself. If you look at all of the hacks Intel put into Scotty to allow it to scale higher (which, as we can see from Scotty's thermal problems, was a stupid decision to make), then you can see the death of the P4 coming. However, had Intel actually worked on improving Northwood and just used SOI like AMD did, we'd probably see Netburst thriving for years to come.
Netburst was about redesigning the CPU to be more virtual. It had a lot of possibility. But the architecture required such a shat load of logic and transistors to overcome the performance losses from its complexities that it was barely better than a simple design, because of the power usage and thermal output. Sure, one could just stick to that simple design, like AMD did. (Hell, like Intel even did for their mobile segment.) But there were theoretical design benefits that even Northwood never got to take advantage of, because Intel just never took the architecture far enough. If you look at the original specs for Netburst before Willy's cut-down version, you'll see that Intel could have taken it a lot further. Though I think Transmeta was actually heading in a better direction. Had they had Intel's resources and ambition, I doubt that either AMD or Intel would even exist in the CPU market today.
But anyway, while I think Intel has seen better days, I'm really not seeing anything here that will change things in any significant way. It'll still remain the same situation as always, as far as I can tell. AMD won't trounce Intel, and Intel won't trounce AMD. It's the same old stalemate. Technology improves, but still, there's really nothing new. It's kind of boring actually. :\
Of course, Intel's biggest downfall is their idiot managers making all manner of stupid decisions. And AMD's biggest downfall is their fear to succeed. **ROFL** Neither company will get anywhere with what they have unless they can overcome themselves first. The race to crush the other won't be won by whose chips are better, but by which company can overcome its own internal problems first.