Second-class Intel to trail AMD for years

rettihSlluB

Distinguished
Jun 5, 2005
296
0
18,780
0
So says The Register. :)

Read it here: http://www.theregister.co.uk/2005/10/29/intel_xeon_2009/

I'm really sorry for Intel, but they deserve it for being arrogant and foolish...

My Beloved Rig:

ATHLON 64 FX 55 (will be changed for an X2 3800+)
2X1024 CORSAIR XMX XPERT MODULES
MSI K8N DIAMOND (SLI)
2 MSI 6800 ULTRA (SLI MODE)
OCZ POWERSTREAM 600W PSU
 

dhlucke

Polypheme
I think Intel would do much better if they simply came up with a naming scheme that wasn't in code.

Long live Dhanity and the minions scouring the depths of Wingdingium!

XxxxX
(='.'=)
(")_(") Bow down before King Bunny
 

dhlucke
I disagree. There's nothing particularly wrong with most of their processors. It's just that the competition is better for most people.

 
The P4 was a huge step backwards in IPC. Intel remade the PIII to about the same degree that AMD remade the Athlon; now we have Athlon 64s and Intel Pentium Ms with high IPC, and the P4 looks worse than ever.

Only a place as big as the internet could be home to a hero as big as Crashman!
Only a place as big as the internet could be home to an ego as large as Crashman's!
 

dhlucke
I'm not buying a P4 either. But I'm also not even looking at the P4 articles, since their naming scheme makes them very annoying to read.

Eventually the OEMs will figure out what's going on and will start switching over to AMD.

You have to realize, though, that someone who keeps upgrading within the Intel family of processors is probably happy, since to them they're still getting an increase in performance. The problem is for those who go from Intel to AMD and then try to go back to Intel. That's not fun.

 
Yes, I have an AMD system for benchmarking, and I can't even use it because I need the parts for that purpose.

 

rettihSlluB
if by second class you mean making a ton more money then yes you are correct.
It doesn't bother me if Intel makes zillions of dollars, if they don't know how to use it the right way.

Just look at their roadmap: delays, cancelled processors, and the list keeps growing (don't even mention their actual offerings, which are no competition for AMD's processors).

AMD, being a smaller company, knows how to spend its money wisely.

What is Intel doing with all that money?
One has to wonder ;)

Edited by Bullshitter on 10/30/05 02:33 PM.
 

Action_Man

Splendid
Jan 7, 2004
3,857
0
22,780
0
Just look at their roadmap: Delays, cancelled processors and the list keeps growing (don't even mention their actual offerings which are no competition to AMD's processors).
Isn't their roadmap Merom, Conroe and Woodcrest? The only delays and whatnot are the crappy Xeons and Itaniums.

Some people are like slinkies....
Not really good for anything, but you can't help but smile when you see one tumble down the stairs.
 

Era

Distinguished
Apr 27, 2001
505
0
18,980
0
Well, Intel can blunder a lot without getting really hurt. They are that rich.
They have a monopoly in the CPU/chipset business globally, and they think
they can fuck up just about anything without getting caught.

If in trouble, Intel drops a few gazillion bucks more on the marketing department, pisses on the R&D department, and orders more champagne for its owners for making such a brilliant decision.
 

ltcommander_data

Distinguished
Dec 16, 2004
997
0
18,980
0
-Warning Long Post-

Intel's roadmap actually isn't too bad. While the delay of the integrated memory controller is a setback, it probably isn't as catastrophic as it appears.

Intel has been taking a lot of flak lately over its new Paxville DP. In this case, it is deserved. They decided to place two Prescott 620s together. The crazy heat production is directly due to the presence of not only HT in each processor but also an extra 1MB of L2 cache. Intel probably felt that the 1MB of extra cache per core was more worthwhile in a server environment than a 400MHz increase in clock rate, which is why they didn't just use an 840EE. In the end, the 90nm process simply couldn't handle dual-core, HT-enabled processors with 2MB of cache per core. The lower performance compared to Opteron is due to the low clock speed of 2.8GHz and the bottleneck of 4 cores sharing an 800MHz FSB.

These problems will be greatly reduced once Dempsey and Bensley arrive. Dempsey will probably be closely related to Presler, meaning speeds of up to 3.46GHz, HT enabled, with 2MB of L2 cache per core. Higher clock speeds are likely possible, as a 3.4GHz 950 was shown to fit within the thermal and power envelope of a 2.8GHz 820. The higher clock speed will help, but the main benefit is the 1066MHz FSB. The 33.3% increase in bandwidth will satisfy core-to-core cache transfers while opening up more throughput to the RAM. Even more important is the addition of individual 1066MHz FSB pipes, like AMD has, to ensure the processors don't compete. In addition, the RAM speed has increased from 400MHz to 533MHz and is now quad-channelled. This means that total FSB bandwidth has nearly tripled, from the 6.4GB/s shared between 4 cores in Paxville to 17GB/s. Memory bandwidth has likewise nearly tripled, from 6.4GB/s to 17GB/s. Even a dual-processor Opteron system only has 12.8GB/s of total memory bandwidth available. Dempsey and Bensley should certainly make Intel highly competitive with AMD.
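The arithmetic behind those figures is easy to check: every FSB and DDR channel here is 64 bits (8 bytes) wide, so peak bandwidth is just transfer rate × width × channels. A quick sketch (the bus widths and transfer rates are the standard platform figures, not something stated in this thread):

```python
# Back-of-the-envelope bandwidth check for the Paxville -> Bensley comparison.
# All buses here are 64 bits (8 bytes) wide per channel.

def bandwidth_gbs(mt_per_s, bytes_wide=8, channels=1):
    """Peak bandwidth in GB/s for a bus running at mt_per_s megatransfers/s."""
    return mt_per_s * bytes_wide * channels / 1000

# Paxville: one 800MT/s FSB shared by all 4 cores.
paxville_fsb = bandwidth_gbs(800)                  # 6.4 GB/s total

# Bensley: two independent 1066MT/s FSBs (one per socket).
bensley_fsb = bandwidth_gbs(1066, channels=2)      # ~17 GB/s total

# Bensley memory: quad-channel DDR2-533.
bensley_ram = bandwidth_gbs(533, channels=4)       # ~17 GB/s

# Dual Opteron: dual-channel DDR-400 per socket, two sockets.
opteron_ram = bandwidth_gbs(400, channels=2) * 2   # 12.8 GB/s

print(paxville_fsb, bensley_fsb, bensley_ram, opteron_ram)
```

The per-FSB jump from 800 to 1066MT/s is also where the 33.3% figure comes from (1066/800 ≈ 1.33).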

Now to address the 4-way server market. While an integrated memory controller would provide better memory bandwidth scaling with additional processors, Intel’s current FSB architecture could easily be expanded to provide much of what’s required. Currently Intel’s Xeon MPs use a 667MHz FSB. Intel is already working on a 1333MHz FSB for Woodcrest, and the application of such a bus would double the available bandwidth. Of course, on the motherboard side, each processor would have an independent FSB to reduce congestion. Memory bandwidth would likewise see an increase from the current 400MHz to 667MHz in a quad-channel configuration. These improvements are easily made and will keep Intel competitive in the near-term.

One of the major improvements from an integrated memory controller is the reduction in latency. The high latencies of Intel's current systems are partially due to the memory running asynchronously with the FSB. This is corrected in Bensley, where 533MHz RAM is matched with a 1066MHz FSB. By working synchronously, some of the latency issues will be reduced. Similarly, Xeon MPs working with a 1333MHz FSB will run synchronously with 667MHz RAM. In addition, an advantage Intel has over AMD is that they design their own chipsets. If they put in the effort, they could easily streamline the CPU-northbridge-RAM interconnects to reduce latency.

All these are just simple improvements to the buses that will help improve Intel's performance. Intel's next-generation architecture isn't even mentioned, but Conroe, Merom and Woodcrest are certainly something to look forward to. All in all, the delay of an integrated memory controller isn't a catastrophe for Intel's roadmaps.
 

rettihSlluB
Intel's roadmap actually isn't too bad. While the delay of the integrated memory controller is a setback, it probably isn't as catastrophic as it appears. [...] All in all, the delay of an integrated memory controller isn't a catastrophe to Intel's roadmaps.
After reading this (http://theinquirer.net/?article=27317), I'd like to know if you are still optimistic about Intel's upcoming processors.

After all, with its well-known NIH (Not Invented Here) policy, Intel rarely took something from the market that it didn't develop itself, even when it was both technically superior and a good business decision.
This clearly backs up what I've said about Intel being an arrogant and foolish company.

Also, you have to remember that AMD isn't sitting still either.
By the time Intel releases their "flagship" processor, AMD will be releasing its much-improved quad-core processor on a 65nm process, with extensions to the AMD64 instruction set, all paired with HyperTransport 3.0 and a new multimedia instruction set to boost applications like 3D rendering and audio/video encoding. :D

Edited by Bullshitter on 10/30/05 09:52 PM.
 

rettihSlluB
Here's more info to back up what I've said about a troubled company called Intel: http://www.theregister.co.uk/2005/10/28/intel_whitefield_india/

While stunning in its own right, Intel's cancellation this week of the multicore "Whitefield" processor stands as a more significant miscue than simply excising a chip from a roadmap. Whitefield's disappearance is a blow to India's growing IT endeavors.

Originally discovered by The Register, Whitefield stood as a major breakthrough for Intel and its Indian engineers. The much-ballyhooed chip would combine up to four mobile processor cores and arrive in 2007 as the very first chip designed from the ground up in India. In the end, engineering delays and a financial audit scandal killed the processor, leaving Intel to develop the "Tigerton" replacement chip here and in Israel.

El Reg has discovered that Srinivas Raman, former general manager of Intel India's enterprise products group, left the company in early August and joined semiconductor design tools maker Cadence - the home of former Intel global server chip chief Mike Fister. Raman declined to return our phone calls, but insiders confirm that he was the lead of the Whitefield project. The executive became distressed about the project when Intel's audit resulted in close to 50 of his staff being let go from the company, one source said.

Of the 50 staffers, close to 20 of them were sent to India from Portland in 2001 to work on Whitefield. The cancellation of the project has since resulted in much of the work being sent back to Portland.

Whitefield had been meant to serve as Intel's most sophisticated response to the rising multicore and performance per watt movements. The company has fallen well behind rivals IBM and Sun Microsystems on such fronts in the high-end server market and behind AMD in the more mainstream x86 chip market. The Whitefield chip was designed to give these competitors a real run for their money as it made use of Intel's strong mobile chip technology to deliver a high-performing product with relatively low power consumption.

Instead of wowing customers, Intel has disappointed them and created a painful situation for its India staff.

Local paper The Times of India commented this week on the situation.

"India's ambitions of emerging (as) a global chip design and development hub has just suffered a big knock," the paper wrote. "Intel has killed its much-hyped Whitefield chip, a multicore Xeon processor for servers with four or more processors that drew its name from Bangalore's IT hotspot, Whitefield, and which was being developed almost wholly in this city.

"Intel had invested heavily in the project, both in infrastructure and people, drawing in some of the brightest talents. Some 600 people are said to be employed in the core hardware part of the project."

Chip staffers in India currently fear losing their jobs and morale is very low as a result of the Whitefield cancellation. Many of the staffers had only been told that Whitefield would be delayed by six to nine months. They learned of the project's end in the press.

The difficulties here show how complex global operations can be with sophisticated products. India hoped to take on more and more of Intel's design work, but such plans look iffy now to say the least.

These disruptions hurt Intel during a very difficult period for the company. It had appeared that Intel managed to correct the chip delay issues and strategy mistakes that plagued it during 2004. Instead, the company this week delayed work on both its Itanium and Xeon lines, giving AMD a chance to take even more market share from the giant.

Intel declined to comment for this story.

 

ltcommander_data
Sadly, that article was posted at 6:53PM so I just missed it by 11 minutes.

I agree that an integrated memory controller is preferable, but Intel can still remain competitive despite the delay. By H2 2006, all of Intel's processors will have shared L2 caches in addition to direct L1-L1 interconnects. This will eliminate the need to go through the FSB for cache transfers, freeing up bandwidth for other tasks. A shared L2 cache is superior to AMD's current Crossbar implementation, as no L2-to-L2 transfer needs to take place, eliminating the bandwidth and latency issues that exist even within the CPU.

A shared L2 cache also means that data does not need to be duplicated. This is one aspect where Intel's architecture is superior to AMD's. While AMD only uses 4-way associativity in their L2 cache, Intel uses 16-way, which means fewer conflict misses. Information therefore only needs to appear once in a larger L2 cache, where it can be accessed by both cores, instead of being repeated twice in two smaller caches. While this obviously reduces latency, as mentioned before, the key point is that it lets a 4MB shared L2 cache store more information than a 2x2MB cache with a Crossbar does, by eliminating duplicates. Making L2 cache space go farther in holding data helps Intel ease the constraint on FSB bandwidth: as L2 cache hits increase, the need to access RAM decreases.
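The duplicate-elimination argument can be illustrated with a toy model. This is not a cache simulator; the 1024-line capacity and the 50% sharing ratio are arbitrary assumptions made up for the illustration. It just shows why two private caches holding overlapping working sets waste capacity that one shared cache of the same total size would not:

```python
# Toy illustration of why a shared L2 can hold more *distinct* data than two
# private L2s of the same total size: shared lines are stored once, not twice.
# Capacities and the 50% sharing ratio are invented numbers for illustration.

def distinct_lines(caches):
    """Number of distinct cache lines held across all the given caches."""
    return len(set().union(*caches))

lines_per_cache = 1024          # pretend each private L2 holds 1024 lines
shared_fraction = 0.5           # assume half the working set is touched by both cores

shared_lines = set(range(int(lines_per_cache * shared_fraction)))
# Each private cache holds the shared lines plus its own private lines.
core0 = shared_lines | {('a', i) for i in range(lines_per_cache - len(shared_lines))}
core1 = shared_lines | {('b', i) for i in range(lines_per_cache - len(shared_lines))}

# Two private 1024-line caches: the duplicated shared lines eat capacity.
print(distinct_lines([core0, core1]))   # 1536 distinct lines in 2048 slots

# A single shared 2048-line cache could hold 2048 distinct lines instead.
```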

The elimination of the need to use the FSB for cache-to-cache transfers, plus the higher hit rates in a shared L2 cache, means that the 10.6GB/s bandwidth of each 1333MHz FSB (of which each processor has its own) just may be sufficient. If it isn't, Intel could simply augment a large 16MB shared L2 cache with a 16MB shared L3 cache. The additional cache would further decrease the need to access the RAM and make the FSB bandwidth go further. With a 65nm process, the die size of such a behemoth is probably the same as the current 90nm Xeon MPs that integrate 8MB of L3 cache, so it isn't too unreasonable. In addition, with the pipeline reduction of Intel's next-generation architecture and the inherent improvements in Intel's 65nm process, adding more transistors for the L3 cache wouldn't push power consumption too high. Certainly, it would still be cooler than Intel's current Xeons. The use of sleep transistors in the 65nm process also means the L3 cache could be shut down when not needed, further alleviating concerns about heat and power consumption.

In regards to implementing HyperTransport in Intel processors: seeing that there is a set time to get a new processor to market, whether Intel decides now to design an entirely new processor to accommodate HT or sticks through its own technology's teething problems, the end result would be a similar time-to-market. It would therefore be best to stick with Intel's own technology, especially if they feel it's potentially superior.

On a side note, I would be interested to see what becomes of Intel's attempt to integrate a northbridge and a voltage regulator into a processor. If it works, this would probably do AMD one better, since latency and bandwidth issues would be almost nonexistent. Of course this would add to the price, but it may work for high-end products, since they are normally coupled with Intel's best chipset anyway (i.e. the 955EE Presler and the 975X chipset). I believe Intel was looking at an introduction sometime at the end of the decade. With the use of the 45nm process or something smaller, everything would fit nicely and run cool.

While integrated memory controllers are good for high-end computers, I'm curious about their effects on low-end systems. Since the RAM needs to be accessed through the processor, I wonder what the effects are on integrated, TurboCache, and HyperMemory graphics cards. The latency will obviously be higher. The problem will have switched from the processor fighting the add-ons for memory bandwidth to the add-ons fighting the processor. I'm sure this will also have an effect on sound cards, especially with multichannel and digital sound becoming popular, as these require quite a bit of memory. This is probably why Creative has taken it upon themselves to alleviate the problem by integrating large amounts of RAM into their high-end sound cards. RAM-to-graphics-card latency issues associated with integrated memory controllers will probably be more pronounced once Windows Vista is released. Since Microsoft recommends at least 512MB of video cache to run with all the visual effects activated, even the most high-end graphics cards today are lacking. Even a 256MB 7800GTX may need to access the RAM through the PCIe bus, through the chipset, through HT, through the processor, through the memory controller, and then through the memory bus to the RAM. How much effect this has compared to going directly through a northbridge based memory controller remains to be seen. Most likely the latency is small, but this is the core OS. Even small latencies in the OS will filter through and magnify as applications are run.
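To make the hop-counting concrete, here's a rough sketch of the two paths. Every per-hop latency below is an invented placeholder, not a measured figure; the only point is that the integrated-memory-controller path inserts extra hops between a discrete graphics card and RAM:

```python
# Rough sketch of the two RAM-access paths a discrete GPU might take.
# All per-hop latencies are made-up placeholder values (ns) for illustration;
# real numbers would need measurement on actual hardware.

nb_path = {                      # traditional northbridge memory controller
    'PCIe link': 250,            # assumed
    'northbridge + memctrl': 100,
    'DRAM access': 60,
}

imc_path = {                     # integrated memory controller (K8-style)
    'PCIe link': 250,
    'chipset': 80,
    'HyperTransport hop': 60,
    'CPU memory controller': 40,
    'DRAM access': 60,
}

print(sum(nb_path.values()), 'ns via northbridge')   # 410 ns
print(sum(imc_path.values()), 'ns via CPU IMC')      # 490 ns
```

Whether the extra hops matter in practice is exactly the open question the post raises.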
 

endyen

Splendid
May 19, 2003
8,161
0
30,790
2
How much effect this has compared to going directly through a northbridge based memory controller remains to be seen
Say what? Your grasp of Intel's roadmap is tenuous at best, but to suggest that the NB memory controller works at the beck and call of the graphics card is just too wrong.
Come back when you have some grasp of reality.
 

rettihSlluB
The use of shared L2 caches is superior to AMD's current implementation of a Crossbar as no L2-L2 transfer needs to take place therefore eliminating any bandwidth or latency issues that exist despite being within the CPU.
Ohh, please...
I've read many articles that prove you wrong on this. Indeed, a shared L2 cache is not as efficient as many think it to be (it all depends on the architecture). That's why AMD doesn't need a shared L2 cache, thanks to its MOESI implementation.

So, do you believe that large caches are a feature in a processor?

The Opteron/Athlon 64 doesn't need large amounts of L2 or L3 cache because it doesn't suffer from the bandwidth limitations that Intel does. When I look at Intel's roadmap, all I see is processors with 8MB and 16MB of cache. This is a sign that the processor is starving for data. In conclusion, large caches are a sign of a flawed architecture. :D

 

rettihSlluB
I found this jewel at Information Week: http://www.informationweek.com/blog/main/archives/2005/10/intel_selfdestr_1.html

 

tluxon

Distinguished
Nov 2, 2002
227
0
18,680
0
This is a pretty confusing thread, because there seems to be so much bias to filter through in many of the posts and linked articles.

I don't feel an allegiance to either AMD or Intel but I also haven't judged either company by whether I agree with their business practices or not.

It's my observation that Intel has made a few missteps in the last couple of years, but I've seen that all companies have their good years and their bad years. Based on available capital, manufacturing capability, and cooperative agreements, it sure looks to me like it's way too early to call the race and figure Intel should throw in the towel any time soon.

AMD has come a long way toward leveling the playing field, but from what I can tell they've got some challenges ahead of them as well. First, as demand continues to grow as expected, their manufacturing and distribution channels will need to expand accordingly. There's a long list of companies who have been tripped up at that stage, so I think it's still too early to declare that AMD's going to pull it off without a hitch of their own.

Near as I can determine, AMD and Intel each offer advantages to their niche customers that give them a leg up on the other. AMD appears to have the upper hand in the middle of the market, performance-wise, but I think it remains to be seen whether the market will reflect that edge. I have plenty of friends who recognize that an AMD platform is faster and runs cooler than the comparable Intel counterpart, yet maintain an adherence to Intel-based products. That kind of loyalty can be pretty tough to erode, so I don't hear the fat lady warming up quite yet.

Keep sharing the articles - this could be interesting for some time.
 

endyen
Most of us here know that Intel will be back. I, and many others look forward to it. After all, the consumer is better served when there is competition.
Being a part of the market, and not a shareholder, I don't care much who is top dog. I do find it "interesting", though, that people would sacrifice their needs to maintain loyalty to a company. This is especially true when that company is as large and self-serving as Intel. AMD may not be as large, but blind loyalty there would be just as misguided.
Personally, I would love to see Intel use their FD-SOI technology. Performance has been advancing a little slowly for my liking.
 

ltcommander_data
Obviously a NB memory controller doesn't work at the beck and call of the graphics card, but the point is that neither does an integrated memory controller. However, in the case of a NB memory controller, the graphics card can communicate with the RAM directly through the NB. In the case of an integrated memory controller, at least one more step is added, with the graphics card needing to communicate with the chipset and then the memory controller over the HT link. The point I'm making is that while an integrated memory controller decreases RAM access latency for the processor, it increases RAM access latency for every other component.
 

ltcommander_data
You are right, it depends on the architecture, as everything always does. In this case, Intel is designing the architecture around a shared L2 cache. The concept is being introduced in Yonah, which, while building on the Pentium M architecture, is quite different from Dothan. Conroe, Merom, and Woodcrest are almost a completely new architecture, as it combines Netburst with Pentium M. The use of the shared L2 cache will be ingrained in this architecture through new algorithms for L1-L1 and L1-L2 transfers. As well, prefetch logic, which is already advanced in the Pentium M and Prescott (one of the few things Prescott is good at), will be further improved. In the end, these features will benefit the processor and reduce FSB bandwidth demands.

In terms of Intel's need for L2 cache, the reason is due to its inclusive architecture. AMD has long used exclusive cache. As such, AMD processors due not benefit much from have large cache sizes. More specifically, will increases in L1 cache size may show larger performance gains in an exclusive architecture, increases in L2 cache does does not. Conversely due to its inclusive nature, Intel processors can show larger performance increases from increasing the amount of cache. Specifically, increases in L2 cache size show larger increases in performance. This nature is why AMD's L1 caches have often been larger than Intel's since that is what they require. AMD's L2 caches are small simply because they are of little benefit even if they were to increase. Intel on the other hand, uses large L2 caches because it allows them to greatly increase performance. The transition from the Prescott 500 series to the 600 series is a bad example since in this case Intel was to lazy to update the caching algorithms to reduce latency and fully take advantage of the additional cache. However, a great example is the transition from Banias to Dothan.

The larger caches in Intel processors are directly due to the architecture's ability to use them to increase performance. This isn't even taking into account the reduction in FSB bandwidth demand, which is just another benefit of Intel's inclusive architecture. Looking deeper, Intel's architecture isn't as flawed as it may appear.
 

ltcommander_data
I'm actually hoping that someone will do a benchmark to investigate. My question from the original post was:

"While integrated memory controllers are good for high-end computers, I'm curious as to their effects on low-end systems. Since the RAM needs to be accessed through the processor, I wonder what the effects are to integrated, TurboCache, and Hypermemory graphics cards? "

From a theoretical standpoint, the latency will be higher. What I'm curious about is what are the actual effects especially since Windows Vista will increase the need for RAM to graphics card transfers.
 

Xeon

Distinguished
Feb 21, 2004
1,304
0
19,280
0
Proved what? He links a Register article that foretells doom, as The Register always does. Then the thread turns into copy-paste opinion and babble.

The fact of the matter is that Intel is in no fiscal trouble, so the doom cannot be so. Perhaps they will trail for many years; it's tough to say. It really doesn't bother me either way, since nothing new interests me.

But hey, The Register and Mr. Bull have foreseen the future and know the next 4-7 years of operation for Intel. Let's see if the fortune tellers are correct.

-Jeremy Dach
 
