slvr_phoenix
Splendid
Thank you for being able to DEBATE the subject matter INTELLIGENTLY. After TheAudiophile's garbage-spewing nonsense, I was afraid the whole art of discussion had died out entirely.
I know I flamed TAP pretty hard, but if anyone ever deserved it...
"Has anyone seen a p3 1.13? An intel video card? Yes even some intel proucts fail. But you are both right the p4 will not disapear more like evolve."
You bring up a fine-sounding argument, but the points aren't as strong as you weigh them. The 1.13GHz P3 never officially made it to market. The very few that were sold were recalled. Because of that, it should be impossible to buy one today or ever again. Of course a chip that was recalled and is no longer sold is going to vanish. That's true of any product under those circumstances, with the rare exception of the few that become collector's items.
So it's still a decent argument, but not a strong one for the case of the vanishing P4. But then you even admit that the P4 won't vanish, so you already agree with that anyway.
As for Intel video cards ... Intel has tried to branch out into a lot of things. They really shouldn't, because they just don't do well in areas where they don't belong. They should concentrate on the few things that they are good at and slowly dissolve or sell off the rest.
Besides, only then could they compete soundly against AMD, because they would no longer be wasting resources on losing battles.
"However it is also possible that adding the missing fpu and missing cache may negatively effect its scalability as well....time will tell."
It is a possibility, but it isn't a likely one. From everything I have read, the original P4 specifications had these things in. It was Intel's adamant decision to stick to a specific die size that forced them to cut pieces out. There is nothing to suggest that they were removed for scalability reasons.
It reminds me in many ways of the 486 SX vs. DX. Intel made 486s without an internal math co-processor, and those chips suffered whenever heavy number crunching was involved. Yet nothing was lost by putting the math co-processor on the die itself. Intel just didn't do it that way at first because of die size limitations in their production, limitations they eventually overcame with a smaller etching process.
It's an amazingly similar case to the P4. It didn't hinder scalability then, and it's not likely to do so now. If anything, I believe having its FPU better match its ALU performance would make the P4 more scalable.
"I find humor in your Rdram to DDR Ram cost comparison. You are comparing a product that has been out only a couple months to a product that has been on the market over a year and a half. Once DDR Ram becomes more readily available it will come down to a price that is considerably cheaper then RDRAM. "
I find great humor in your statement as well. Did you know that DDR SDRAM has existed for years? It has been used in video cards and other electronic devices. It took so long to make it into PCs as system memory because everyone was trying to standardize the implementation so that there wouldn't be compatibility problems. So the DDR technology has existed for MUCH longer than the short period it has been available as PC MEMORY.
By the time it was put into PC memory, the technology was already quite mature, even nearing the end of its growth potential. What we see going into PCs as DDR SDRAM is a very mature product, only put to a new use.
Furthermore, DDR SDRAM is inherently based on SDRAM, and that is a technology that has existed for a VERY long time. It is long overdue for a serious improvement. More so than just a double or quadruple data rate on the same basic core technology.
Why, it's almost as old as EDO memory when you really think about it.
Meanwhile, RDRAM has hardly been used for anything. It has existed even longer than DDR SDRAM, but for the longest time it was only used in devices such as game consoles. During that time there was no need to advance the technology at all, because once a console is designed and produced, it is never fundamentally upgraded. It was only once RDRAM was applied to PCs that further refinement of the technology began.
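Just to put rough numbers behind all this data-rate talk, here is a quick back-of-the-envelope sketch in C. It is my own arithmetic on the standard published figures for these parts (PC133 SDRAM, DDR-266/PC2100, and PC800 RDRAM), not a benchmark, and real-world throughput is lower than these peak numbers:

    #include <stdio.h>

    int main(void)
    {
        /* Peak theoretical bandwidth = clock x transfers per clock x bus width in bytes.
           Standard published figures; real throughput is lower. */
        double sdr = 133e6 * 1 * 8; /* PC133 SDRAM: 64-bit bus, one transfer per clock */
        double ddr = 133e6 * 2 * 8; /* DDR-266 (PC2100): same clock, double-pumped     */
        double rdr = 400e6 * 2 * 2; /* PC800 RDRAM: 16-bit channel, double-pumped      */

        printf("PC133 SDRAM : %.0f MB/s\n", sdr / 1e6); /* ~1064 MB/s */
        printf("DDR-266     : %.0f MB/s\n", ddr / 1e6); /* ~2128 MB/s */
        printf("PC800 RDRAM : %.0f MB/s\n", rdr / 1e6); /* ~1600 MB/s */
        return 0;
    }

Note how DDR doubles the old SDRAM number simply by transferring on both clock edges, which is exactly the point about reusing the same basic core.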
"I would concur on your evalutation of SSE2. Thank god for all of use AMD and Intell will both use it."
Agreed. We have long needed chip optimizations that are accepted by ALL PC chip manufacturers. MMX was such a wonderful thing. Then came the SSE1 vs. 3DNow! battle, which hurt the PC world in a lot of ways. Now that AMD and Intel both plan to use SSE2, we can see software commonly optimized again.
It is funny to hear over and over how AMD activists protest against SSE2 as though it were an Intel-only technology. Perhaps they do not realise that AMD has just as many plans to use it. They call Intel cheaters when a benchmark contains SSE2 optimizations, yet they won't admit that by that logic AMD will have to be called cheaters as well when their Hammer chips are released.
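For what it's worth, here is a minimal sketch (my own illustration, not anyone's shipping code) of why a shared instruction set matters. The same SSE2 intrinsics compile unchanged for any chip that reports SSE2 support, P4 or Hammer alike:

    #include <emmintrin.h> /* SSE2 intrinsics: the same header on Intel and AMD */

    /* Add two arrays of doubles, two elements per instruction.
       Assumes n is a multiple of 2 and the pointers are 16-byte aligned. */
    void add_arrays(const double *a, const double *b, double *out, int n)
    {
        int i;
        for (i = 0; i < n; i += 2) {
            __m128d va = _mm_load_pd(&a[i]);           /* load two packed doubles */
            __m128d vb = _mm_load_pd(&b[i]);
            _mm_store_pd(&out[i], _mm_add_pd(va, vb)); /* one SIMD add does both  */
        }
    }

Compile that once and it runs on either vendor's chip. No more maintaining separate SSE and 3DNow! code paths.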
-Despite all my <font color=red>rage</font>, I'm still just a <font color=orange>rat</font> in a <font color=white>cage</font>.