Musings: The end of Graphics cards?

lol thanx
and lay off the whacky tobakky :wink:

<b>I am not an AMD fanboy.
I am not a Via fanboy.
I am not an ATI fanboy.
I AM a performance fanboy.
And a low price fanboy. :smile:
Regards,
Mr no integrity coward.</b>
 
NEW NIPPLE ?!? AHAHAHAA>>>.

Oh god, that was funneh... too bad I was in training and alerted everyone to the fact that I was surfing when I wasn't supposed to be :))



Hmmm... you say that having everything in one processor will create too much complication and heat. I don't think it will stay that way as architectures improve and shrink. Heat has something to do with the architecture... but if a processor is being produced on a .005 micron process, I doubt that will matter.

Perhaps there might be some extra wiring needed for high-fidelity sound, but that would only apply to the analogue output.

Maybe you can put the sound electronics near the CPU that packs the sound capabilities, but how much does a sound card's APU really have in common with the CPU anyway? So, going back to installing things around the die, doesn't that imply we'd need to pack in too much? Imagine surrounding the processor with the video RAM as well, and all the graphics components around that central processing unit (granted, the term CPU would then really mean CENTRAL).
hmmmmmmmmmmmmmm

Well, about packing too much... that's hard to say. It all comes back to the architecture again. There's no reason a CPU can't be tailored to all the tasks that the different components do now.

What's a motherboard? What's a sound card? PCBs that simply create bottlenecks. What's more efficient: having separate RAM for video, system and sound (creating slowdowns when the different parts need to share information over different system buses), or having it all shared and therefore equally fast to access? Of course, system RAM today is too slow for video cards, but soon it won't be. Dual-channel DDR P4s can get something like 6 GB/s of transfer speed, and new technologies will probably be more than fast enough in both transfer speed and latency. AMD knows how important this is, because they put the memory controller right on the die. That ALONE sped up the K7 design because of the reduced latency.
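A quick back-of-the-envelope check of that "about 6 GB/s" figure, as a minimal Python sketch (the DDR400 speed and the 64-bit channel width are assumptions for illustration, not numbers stated in the thread):

```python
# Theoretical peak bandwidth for a dual-channel DDR400 setup (assumed figures).
def peak_bandwidth_gb_s(channels, bus_width_bits, transfers_per_sec):
    """Peak bandwidth in GB/s: channels x bytes per transfer x transfer rate."""
    return channels * (bus_width_bits / 8) * transfers_per_sec / 1e9

dual_ddr400 = peak_bandwidth_gb_s(channels=2, bus_width_bits=64, transfers_per_sec=400e6)
print(f"Dual-channel DDR400 peak: {dual_ddr400:.1f} GB/s")  # ~6.4 GB/s
```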

That's an excellent example of how having everything on the same die speeds things up: no slow buses to cross, all communication stays within the die, which is the fastest path. Think about how fast the FSB is, say running at 400MHz, while the CPU runs at, say, 1600MHz. That means for every 4 cycles the CPU goes through, the FSB only completes 1, so even if the north bridge is totally ready to receive a command to fetch something from memory, the CPU STILL has to wait a minimum of 4 of its own cycles just to send the command... that's 25% efficiency. There's also the time it takes the north bridge (where memory controllers normally reside) to actually execute the command and then move the data...

That's a lot of wasted time just waiting on components that run on slower buses. Now imagine if the PCI controller were on the die: instead of 33MHz it would run at 1600MHz. That's a LOT of reduced latency. Of course PCI Express will help somewhat, but it is STILL an external bus; sure, it will be incredibly faster than what we have now, but compared to having it on the die it's still slow.
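The clock-ratio argument above can be put in rough numbers. A minimal sketch, assuming a 1600MHz CPU against a 400MHz FSB and classic 33MHz PCI; the ratios only show the minimum wait per bus cycle, and real chipsets add further overhead:

```python
# How many CPU clocks elapse for each cycle of a slower external bus (assumed clocks).
cpu_clock_hz = 1600e6

for bus_name, bus_hz in [("400MHz FSB", 400e6), ("33MHz PCI", 33e6)]:
    ratio = cpu_clock_hz / bus_hz
    print(f"{bus_name}: at least {ratio:.0f} CPU cycles per bus cycle")
# 400MHz FSB -> ~4 CPU cycles; 33MHz PCI -> ~48 CPU cycles
```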
-------


<A HREF="http://www.albinoblacksheep.com/flash/you.html" target="_new">please dont click here! </A>
 
And computers don't have to operate like they do now.



The PCI bus is quite old... and slow. PCI Express will be nice, but it will still have only about 6 GB/s of throughput (per device? I can't remember)...


How this will make a difference for sound cards and network cards I don't know. For those things it would seem that latency matters more than bandwidth, because even now the PCI bus usually isn't saturated even under the heaviest use (i.e. downloading something, copying a large file and watching a video at the same time would fill the bus rather quickly, BUT most motherboard manufacturers have already compensated for this, like the V-Link architecture on the ECS K7S5A, or the nForce2 boards, which all have custom high-bandwidth PCI designs).
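For a sense of scale, here is a minimal sketch of that saturation argument. Classic 32-bit/33MHz PCI peaks around 133 MB/s shared across devices; the per-task rates below are assumed for illustration, not measurements:

```python
# Rough PCI saturation estimate (all per-task rates are assumed, in MB/s).
pci_peak_mb_s = 33e6 * 4 / 1e6  # 33 MHz x 4 bytes per transfer ~= 133 MB/s

workload_mb_s = {
    "broadband download": 1,
    "large file copy over a PCI IDE controller": 50,
    "video playback from disk": 5,
    "sound card streaming": 1,
}

used = sum(workload_mb_s.values())
print(f"~{used} MB/s of ~{pci_peak_mb_s:.0f} MB/s used ({used / pci_peak_mb_s:.0%})")
```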



Again, if you look at AMD, they are already addressing this issue by NOT having a front side bus. I was like "wow" when I first read that... quite a step forward: an on-die memory controller AND the archaic FSB removed.



-------


<A HREF="http://www.albinoblacksheep.com/flash/you.html" target="_new">please dont click here! </A>
 
Also remember I'm not talking NEAR term, I'm talking about in maybe 5 years, when CPUs would be fast enough to do EVERYTHING that separate add-in cards currently do, in software mode, except much faster, and when all that extra power isn't needed those units could be directed to other tasks. Right now it's impractical and slower than piecemeal solutions. But with the power of 24 Athlon 64s on one die, it would be quite easy to set aside 2 for audio, 6 for video, 12 for game AI, 2 for mapping, 1 for communications, and 1 for general PC health. That kind of thing. And when you were crunching numbers, all 24 could be directed to that task. Each would have its own cache memory and therefore could scale evenly. In the POWER5 example you would have about 128 MB of L3 cache (which would likely be faster than any main memory) and 6 MB of L2 cache, and that's just based on current POWER5 specs.

Anyhoo, that's a lot of power in a very small package, and if it were that configurable then it would make for a very nice, flexible unit that one minute edited video or crunched massive numbers, and the next moment was a top-of-the-line gaming rig.
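The core split described above adds up exactly to 24, which is the whole point of the reconfigurable idea. A minimal sketch of that hypothetical partitioning (the task names and core counts come from the post; nothing here refers to real hardware):

```python
# Hypothetical partitioning of a 24-core chip between tasks (counts from the post).
gaming_profile = {
    "audio": 2,
    "video": 6,
    "game AI": 12,
    "mapping": 2,
    "communications": 1,
    "PC health": 1,
}
assert sum(gaming_profile.values()) == 24  # every core accounted for

# When crunching numbers, the same cores are simply reassigned.
number_crunching_profile = {"number crunching": 24}

for task, cores in gaming_profile.items():
    print(f"{task}: {cores} core(s)")
```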

Well, I think that would be nice. And Eden, you could still customize; it would just be more about driver/software customization.

Well that's my view of the future even if it might be warped.



- You need a licence to buy a gun, but they'll sell anyone a stamp <i>(or internet account)</i> ! <A HREF="http://www.redgreen.com" target="_new"><font color=green>RED</font color=green> <font color=red>GREEN</font color=red></A> GA to SK :evil:
 
Also remember I'm not talking NEAR term, I'm talking about in maybe 5 years, when CPUs would be fast enough to do EVERYTHING that separate add-in cards currently do, in software mode, except much faster, and when all that extra power isn't needed those units could be directed to other tasks. Right now it's impractical and slower than piecemeal solutions. But with the power of 24 Athlon 64s on one die, it would be quite easy to set aside 2 for audio, 6 for video, 12 for game AI, 2 for mapping, 1 for communications, and 1 for general PC health. That kind of thing. And when you were crunching numbers, all 24 could be directed to that task. Each would have its own cache memory and therefore could scale evenly. In the POWER5 example you would have about 128 MB of L3 cache (which would likely be faster than any main memory) and 6 MB of L2 cache, and that's just based on current POWER5 specs.

Anyhoo, that's a lot of power in a very small package, and if it were that configurable then it would make for a very nice, flexible unit that one minute edited video or crunched massive numbers, and the next moment was a top-of-the-line gaming rig.
There is an inherent problem here: the entire purpose of dedicated MPUs is that their silicon is made for the purpose. GPUs have a special pipelining system with their own stages (as few as there are), making them pretty much anything but an x86 CPU. With a GPU you only need 500MHz to deliver the performance a 4GHz CPU would need for the same work. Look at any software-emulated rendering mode in a game, and tell me if even the most modern CPUs can manage it.

Now you would say "it's five years from now", but I would ask you: what's different about the context then compared to now?
We'd still be people looking for performance, and companies would still have fabs doing R&D on better silicon and smaller processes. In fact, the chances of getting processes much smaller than 45nm are slim now, as leakage turns the advantages into drawbacks.
The idea of many CPUs in one is good as long as it's not overdone. Multi-core CPUs are coming, but how the thermals will be handled will be a challenge.
Even if IBM designed such a chip, there is no doubt it won't be cheap, it won't run cool, and it will need serious coding. Heck, with such a huge amount of cache, I would speculate it doesn't even help anymore. At some point, higher associativity and data locality in the cache/RAM stop being a net positive, and the added latency destroys the advantage.

I can understand that we will move toward more multi-core CPUs, but I cannot see how software-emulated processing on CPUs would win out over, say, having a GPU core on the same die.
At the same time, I also don't envision that working too well. Way too much engineering is required to lay out the silicon on each die, too many cooling requirements, and even at 0.045 micron you will have big cores. Not many companies have moved to 300mm wafers, which cost a lot, BTW. Even with 300mm wafers, you are left with severe problems recouping the fab expenses.

And then the final question: WHICH company would take on manufacturing everything in one die without rival companies going after it?

And Eden, you could still customize; it would just be more about driver/software customization.
Suppose 7.1-channel, 32-bit FP precision EAX 5 audio technology came out. Can you figure out how you'd avoid paying $2000 to upgrade to such new technology?

--
<A HREF="http://www.lochel.com/THGC/album.html" target="_new"><font color=blue><b>This just in, over 56 no-lifers have their pics up on THGC's Photo Album! </b></font color=blue></A> :lol:
 
Well, as we seem to be running out of die shrinks, we can always do dual cores...
And the AMD K8 is apparently ideally set up for dual-core operation.

<b>I am not an AMD fanboy.
I am not a Via fanboy.
I am not an ATI fanboy.
I AM a performance fanboy.
And a low price fanboy. :smile:
Regards,
Mr no integrity coward.</b>
 
If there is one thing I'm glad about right now, it's that you are informed and that we respect each other, so I can actually have fun debating without the aggressiveness I tend to fall into with some people here.

BTW, dude, have you been hiding your knowledge? Did someone sneak on you in the toilet jacking off, and you finally revealed your tiny wonka?!

It's great to hear that you know some stuff here.

On to the debate before I go shagging you.
Hmmm... you say that having everything in one processor will create too much complication and heat. I don't think it will stay that way as architectures improve and shrink. Heat has something to do with the architecture... but if a processor is being produced on a .005 micron process, I doubt that will matter.
Prescott at 0.09 micron can't manage lower heat than the P4 3.2C, even with strained silicon and low-k dielectric. In fact, it's becoming more apparent that shrinking transistors will mean smaller dies with more crammed-in goodies rather than reduced heat, which actually seems to go up. You can isolate some of the leakage causing this with SOI, but you can't hide the problem. It's a serious issue that will grow with each process shrink from now on, with physics greatly involved. The next transition will be even more problematic when it comes to controlling leakage. Luckily Intel has the 3D transistor and the TeraHertz transistor lined up for such issues, plus SOI at full strength (AMD and IBM use a partial form). But again, it won't hide the problem; it merely extends the time before it comes back.

PCI Express will be nice, but it will still have only about 6 GB/s of throughput
Can you find me any hardware other than GPUs that will request that much bandwidth?
Even future 32-bit FP audio, or high-quality multi-channel audio, will not need it. Simply put, PCI is more than enough for excellent sound cards like the Audigy 2.

Now look, I am not going to deny you have a point about centralized functions.
But that means you are effectively creating a monopoly around the chip you make that addresses all functions. I can only imagine independent OEMs with their own fabs going for that; major cost savings. But who else would take on such a task?

And most importantly, how do you avoid compromising your flexibility?
AMD knows how important this is, because they put the memory controller right on the die. That ALONE sped up the K7 design because of the reduced latency.
Like I said, flexibility goes away. I am, however, by no means opposed to the ODMC. Several people objected to it, but I think that's ridiculous. We don't even have many mainboards that support two kinds of RAM, like SDR SDRAM alongside DDR SDRAM, for there to be real flexibility issues. Nor does the K8 need more than DDR400 right now.

But it becomes a serious issue when you try to customize a multipurpose chip. Now suppose we had the same design as IBM's, but the cores were actual removable chips with sockets. Hey, that'd be great; flexibility is maintained. But I wholeheartedly doubt the MPU from a dedicated card can work alone without its own special hardware.
Take network cards. Where do you put the DACs? Where do you put the MAC address and the ROM? All of the network card's hardware?

This is what I can't see being that advantageous.
Granted, it would be great, but at the cost of the freedom to tweak our hardware. I wouldn't know. Who would agree to stop putting bling-bling on his video card! :smile:

Like I said, I envision smaller form-factor boards, but mainboards will continue to exist. Companies like Sun (or was it some other company?) have already found ways to shorten signal paths and get some serious speed-ups. Some article touted a company's claim that you won't need to deal with quantum computing's problems to get that kind of performance with the technology they researched. I wish I had kept the docs, but you can look on THG's Hard News and find it.
One of the technologies recently conceived, or at least researched, for better latency is the X design on PCBs. By using diagonal traces instead of conventional rectilinear ones, you effectively cut the distance a signal has to cross. Remember, electrical signal travel time is greatly affected whenever you increase distance. It is said that for modern MPUs, according to the CPU forum's data and links, around 300mm is the maximum acceptable before signal speed starts costing you clock speed.
But that's a whole other subject.
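The geometric saving from diagonal routing is easy to see with a toy example. A minimal sketch, assuming arbitrary endpoints; the ~29% figure is just the best case for a 45-degree run, not a claim about any real board:

```python
# Rectilinear (Manhattan) routing versus diagonal routing between two points.
import math

def manhattan_length(p, q):
    return abs(p[0] - q[0]) + abs(p[1] - q[1])

def diagonal_length(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

a, b = (0, 0), (10, 10)  # arbitrary endpoints, in millimetres
m, d = manhattan_length(a, b), diagonal_length(a, b)
print(f"Manhattan: {m:.1f} mm, diagonal: {d:.1f} mm, saving: {1 - d / m:.0%}")
# -> Manhattan: 20.0 mm, diagonal: 14.1 mm, saving: 29%
```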

Either way, it's good debating with someone I respect, who knows his crap without being biased.

--
<A HREF="http://www.lochel.com/THGC/album.html" target="_new"><font color=blue><b>This just in, over 56 no-lifers have their pics up on THGC's Photo Album! </b></font color=blue></A> :lol:
 
Damn PMS days. :tongue:

--
<A HREF="http://www.lochel.com/THGC/album.html" target="_new"><font color=blue><b>This just in, over 56 no-lifers have their pics up on THGC's Photo Album! </b></font color=blue></A> :lol:
 
LOL chill dood!

I was pointing out that while die shrinks get harder and harder, it's always possible to evolve in different directions.

And this applies to both CPUs and GPUs.

Terribly sorry if you couldn't read between the lines. :smile:

<b>I am not an AMD fanboy.
I am not a Via fanboy.
I am not an ATI fanboy.
I AM a performance fanboy.
And a low price fanboy. :smile:
Regards,
Mr no integrity coward.</b>
 
I haven't yet heard of someone whose text is processed by a GPU, but I guess everything is JPEG graphics in yer world, eh! :tongue:

--
<A HREF="http://www.lochel.com/THGC/album.html" target="_new"><font color=blue><b>This just in, over 56 no-lifers have their pics up on THGC's Photo Album! </b></font color=blue></A> :lol:
 
LOL gonna be hard to play games with just a GPU!
hehehe

Did you hear about old graphics cards being used for very specific mathematical computations?
Interesting stuff.
Using them more like refined, limited CPUs :tongue:

<b>I am not an AMD fanboy.
I am not a Via fanboy.
I am not an ATI fanboy.
I AM a performance fanboy.
And a low price fanboy. :smile:
Regards,
Mr no integrity coward.</b>
 
I understand what you're saying; the thing is, at a certain point you will have more than enough power to do photorealistic rendering, and while an add-in card will still be the BEST option, it may become a rarity the way workstation cards are. Only the most avid gamers would make the additional effort/expense to get an add-in card with that power.

This is all a guess, and perhaps they will have some mobo-level stuff that only needs the additional oomph from the processors. And while some of the 'etchings' may create a VPU section, it may simply be the front end, and scalability may allow it to process much larger chunks without dedicating so much room to VPU-only silicon. The main thing will be that flexibility (a big key to selling computers, IMO).

WHICH company would take on manufacturing everything in one die without rival companies going after it?


It would likely be the usual suspects simply squeezing out the rest. Just like anything else, whoever makes it will profit from it; anyone can try to do it, but how many companies have the money for the R&D?

Suppose 7.1-channel, 32-bit FP precision EAX 5 audio technology came out. Can you figure out how you'd avoid paying $2000 to upgrade to such new technology?
I don't see a $2000 upgrade coming out of it. I see it more as a software-included-with-the-purchased-media type of thing, the way many generic PC DVD-playing programs come with DVDs.

The main issue, IMO, will be how you create the I/O connections and such. Having the ability to drive an array of monitors, or 11+ audio channels and such, doesn't necessarily mean you will get that support on every mobo.

Anywhoo, it's just a thought. I have a feeling that this is the way things will go, although as usual I could be wrong.


- You need a licence to buy a gun, but they'll sell anyone a stamp <i>(or internet account)</i> ! <A HREF="http://www.redgreen.com" target="_new"><font color=green>RED</font color=green> <font color=red>GREEN</font color=red></A> GA to SK :evil:
 
If there is one thing I'm glad about right now, it's that you are informed and that we respect each other, so I can actually have fun debating without the aggressiveness I tend to fall into with some people here.

BTW, dude, have you been hiding your knowledge? Did someone sneak on you in the toilet jacking off, and you finally revealed your tiny wonka?!

It's great to hear that you know some stuff here.

Haha... of course I respect you. Yes, yes, blah blah, I know I've been reluctant to get into debates the last month, but the forum hasn't exactly been a barrel of monkeys (the Others issues, you know what I'm talking about) and, quite honestly, I got frustrated. I've tried to respond quite a few times to people, but I found myself just reaching up and hitting the X at the top right of the window because I frankly just didn't have the energy.

Prescott at 0.09 micron can't manage lower heat than the P4 3.2C, even with strained silicon and low-k dielectric. In fact, it's becoming more apparent that shrinking transistors will mean smaller dies with more crammed-in goodies rather than reduced heat, which actually seems to go up.

This is something you know a lot more about than I do. I was under the impression that architecture played a large role in heat output (the VIA C3 @ 1GHz not even requiring a heatsink?), but really that was just from the casual reading I've done ;P


Can you find me any hardware other than GPUs that will request that much bandwidth? (response to what I said about PCI Express)

Well, no... latency seems to be the issue. I guess I was thinking of the wrong thing when I said that, lol. Having the sound processing unit on the same die as the CPU would make it available immediately, instead of having to wait a thousand or a million CPU cycles, depending on the situation, for the PCI bus to catch up.


Do you know if PCI Express will have lower latency than the traditional PCI bus? I haven't found this info anywhere.

Like I said, flexibility goes away. I am, however, by no means opposed to the ODMC. Several people objected to it, but I think that's ridiculous. We don't even have many mainboards that support two kinds of RAM, like SDR SDRAM alongside DDR SDRAM, for there to be real flexibility issues. Nor does the K8 need more than DDR400 right now.

Who would know the core of a CPU better than the people who created it? AMD and Intel chips work better in different situations... P4s like high bandwidth, blah blah, etc.

I don't think anyone knows the Athlon better than AMD. Third parties may get lucky, or, as in nVidia's case, they may take their knowledge from other areas and try to apply it (nVidia obviously has experience with memory controllers; they design video cards, which are like mini-motherboards in themselves, tailored to a custom purpose. But really, what is a video card? It's similar to a CPU in the way it's set up, with its cache, RAM and memory controllers on a PCB interconnecting the pieces).


Oh, and BTW, there was a motherboard that supported DDR and SDR: the ECS K7S5A had both SDR and DDR slots. The SiS 735 chipset was underrated; I owned one :)

Take network cards. Where do you put the DACs? Where do you put the MAC address and the ROM? All of the network card's hardware?

Well... things evolve. I can't answer your questions, but things WILL change someday. MAC addressing gets into debating the structure of the internet itself.

You're probably right about this...






This has gotten a little heavy on the theory behind how computers work... something I'm very n00bish about. I'd like to get silverpig or papasmurf in here, but they don't post much in this section.
-------


<A HREF="http://www.albinoblacksheep.com/flash/you.html" target="_new">please dont click here! </A>
 
Maybe it happened two years later than it should have, but Halo finally landed on the PC — and boy is it a hardware hog. For a port taken from a console system with a 733MHz CPU, 64 megs of shared RAM, and a graphics chip somewhere between a GeForce 3 and 4, Halo sure does run slowly at times, even on the strongest PC.

When Microsoft acquired Bungie, and the hotly anticipated PC and Mac game Halo became an Xbox launch title, the whole thing got reworked from scratch. Gone was the huge multiplayer-focused, seamless-world shooter Bungie had been working on, as the game morphed into a level-based, plot-oriented single player shooter. The engine was greatly overhauled, lots of artwork was rebuilt, and level designs became more traditional. Since time was short and the Xbox hardware was sort of nebulous during much of the remaining development time, the code was kind of ugly. This, along with the inherent differences between the Xbox and a PC (unified memory architecture and such), is probably why, even on a fast PC and killer video card, the game can slow to a crawl during heavy firefights.
The part I'm posting this for is the information about the unified architecture of the Xbox. It totally reinforces what I said about having a single core, or perhaps two cores, on one die and therefore on one bus sharing the same memory.


Now imagine if the memory controller were built into the P3 in the Xbox. Add gains comparable to what the Opteron gets from this, and it wouldn't surprise me in the LEAST if that specialized P3 setup could compete with a P4 3.2GHz... Of course, there's the fact that the shaders were originally written for PS 1.1 on the Xbox's custom GPU, but ATI's shaders are light-years ahead of what the GF3 and GF4 have and should actually be able to perform these instructions faster, since PS 2.0 can do in one pass a lot of instructions that used to be lengthy.

It's obvious how much of an impact the platform has on system performance. Everyone knows this: these limitations cause CPUs to sit idle most of the time.

You can also look at the P4 and how it has evolved... has the core changed? No. It has been 100% platform improvements:

-First, the move from crappy RDRAM to DDR for latency purposes.
-Then the L2 cache was increased to 512KB.
-Then an increase in the FSB so that fewer CPU cycles went to waste...
-Then dual-channel DDR.
-Then the 800MHz FSB was introduced.
-Then the P4EE with 1MB of L3 cache, AGAIN to help keep idle cycles low.

^
Those points could be out of order and stuff could be missing.

Picture a 1GHz P3... now picture that P3 running on a 1GHz FSB (it's possible with motherboards now; of course not with existing chipsets, but it has been done) and dual-channel DDR (or perhaps a more advanced memory solution like DDR2)...

I mean, wow. I bet it would perform as well as, if not better than, the fastest pre-built system available for purchase. I'm speculating, yes, but just take the P4 and the Athlon (and their evolutions) and see how much platform improvements alone have helped.

That's why I think every single function of a computer aside from storage and RAM should be on the die. Aside from the onboard NIC and sound, which could soooo easily be added to a CPU's design, the motherboard serves only as a bridge. And it's a FVKCINg slow bridge at best. Why not eliminate it?

Sorry if I've repeated what I've said before; I don't have time to edit and structure my posts today. Holy [-peep-], it's 8:30 already and I haven't even studied for the test I have tomorrow. Currently I'm training for a new job... plus I'm having problems with my girlfriend, so that's why I've been away lately :)

-------


<A HREF="http://www.albinoblacksheep.com/flash/you.html" target="_new">please dont click here! </A>
 
It would likely be the usual suspects simply squeezing out the rest. Just like anything else, whoever makes it will profit from it; anyone can try to do it, but how many companies have the money for the R&D?
Indeed, that would reinforce my argument about the problem here. It becomes so proprietary and so costly in R&D that it simply wouldn't be ideal. As I've stated, however, if the big chip were socket-connected, then I would not see a problem. Effectively the chip itself becomes a motherboard with insertable processing units for diverse purposes.
But then I'd have to stop here, as indeed there is so little we can know about how things will go. I've speculated about ongoing form-factor shrinks, down to Mini-ITX like VIA's, and for now I'll stick with that for the medium term. Long term is indeed too far out.

you will have more than enough power to do photorealistic rendering, and while an add-in card will still be the BEST option, it may become a rarity the way workstation cards are.
Let's wait and see, then. It is indeed unknown whether we'll ever find an apex of performance. Office applications have already found their resting point.

-Although, as a side note, here's a very funny anecdote from today's text-treatment class: we were learning how to take PDFs, put them into RTF and use them in Word, then use Search and Replace with special tags to see how easy it is to recreate paragraphs out of the style-less mess the RTF brings in. It was so funny, man. We were working with a PDF on Final Cut Pro 3, Apple's video editing software, and it had about 600 pages. It transferred into Word pretty fast. But the funny part was when we did the S&R, and one of the operations was to find all the periods and put a break after each one! Holy crap, never have I seen text processing actually stress the CPU at mathematical-algorithm-solving magnitude! It freakin' looked like MP3 encoding! The thing took like 30 seconds to process! ROFL! It was a 1.4GHz P4 with RDRAM, but damn, man, I never thought that in this day and age poor ol' Word XP could actually manage to grab a modern CPU by the nuts!-
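For what it's worth, the operation described in that anecdote boils down to something like the following minimal sketch; this is a plain Python regex approximation of the "break after every period" replace, and Word's own Search and Replace works differently, so it is only an illustration:

```python
# Insert a paragraph break after every period, as in the class exercise described above.
import re

sample_text = "First sentence. Second sentence. Third sentence."
with_breaks = re.sub(r"\.\s*", ".\n\n", sample_text)
print(with_breaks)
# On a ~600-page document this touches every sentence, hence the CPU grinding.
```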

Games are far from it. Every time new technology comes out, games go further; every time games go further, BAM, new CPU and video card. It's ironic that they never followed the DX8 GPU theory of removing nearly all CPU involvement, leaving the CPU only the AI calculations. Bravo, companies. Aside from Massive and a few others. So what are we left with? Yup, games that continue to demand better CPUs. Now a 3.2GHz does 60FPS in games with a 9800 PRO, WOW! 😱
Programmers will never learn, so for now and clearly a long time to come, CPUs and their involvement will stay, and we won't yet arrive at a "point of diminishing returns or unneeded returns".
Encoding-wise, well, I don't expect MP3 to go away, so MP3 encoding will stay fairly much the same, and future CPUs should handle the task easily.
DVD encoding is still young, but modern CPUs do great with it.
CG movie editing, hmm, I wouldn't know the exact times, but clearly we have a long way to go. However, I don't see rendering being that far from becoming good enough. Yeah, raytracing takes loads of time, but eventually we should reach a point where ultra-realism is good enough... or is it?

I don't see a $2000 upgrade coming out of it. I see it more as a software-included-with-the-purchased-media type of thing, the way many generic PC DVD-playing programs come with DVDs.
I don't think you followed me. I was simply saying that since much superior audio quality is achieved with better hardware, you would need to replace the entire computer to take advantage of it. Where would the software analogy fit in? I'm not quite following you.

But I understand better hardware requires support. I guess it's still a confusing matter for all of us.

Ahh, astonishment, what got the Pre-Socratics thinking! :smile:

--
<A HREF="http://www.lochel.com/THGC/album.html" target="_new"><font color=blue><b>This just in, over 56 no-lifers have their pics up on THGC's Photo Album! </b></font color=blue></A> :lol:
 
Haha... of course I respect you. Yes, yes, blah blah, I know I've been reluctant to get into debates the last month, but the forum hasn't exactly been a barrel of monkeys (the Others issues, you know what I'm talking about) and, quite honestly, I got frustrated. I've tried to respond quite a few times to people, but I found myself just reaching up and hitting the X at the top right of the window because I frankly just didn't have the energy.
You frook!

Well, anyway, you really should come back to the OTHER forum. Scam and DH have agreed to a treaty, in fact. They even acknowledged they won't fight in a thread and agreed to let it go from that point. The problem now is Spud. :frown: Banned, judged prematurely, and now he can't return. Do you know how it feels to be judged badly, called a racist and banned, never getting the chance to explain yourself? He switched to DSL JUST to come back posting with a dynamic IP. That's how much the guy loves this place and doesn't want to break away from it.

If there is any fighting in the OTHERS forum, it's over Spud and it ain't even dangerous like Scam and DH.

Damn man, I miss you there, with the bottom rimming you did with Wingding!

I've tried to respond quite a few times to people, but I found myself just reaching up and hitting the X at the top right of the window because I frankly just didn't have the energy.
I think I was one of those people then.... :frown:

On to topic:
This is something you know a lot more about than I do. I was under the impression that architecture played a large role in heat output (the VIA C3 @ 1GHz not even requiring a heatsink?), but really that was just from the casual reading I've done ;P
That's what most of us thought, and it does play a role, except that it can't do a thing about what the silicon itself does. If your 0.09 micron process leaks that much, your architecture suffers. But your architecture shouldn't affect the transistors, because no matter what layout they take, it's how they're made that determines the leakage. Clock speed and how intensely the core is used come later in the heat equation.

Having the sound processing unit on the same die as the CPU would make it available immediately, instead of having to wait a thousand or a million CPU cycles, depending on the situation, for the PCI bus to catch up.
LUCKILY you said SPU and not the CPU doing sound processing! :wink:
That is the key word and what is important to remember: dedicated MPUs do a better job than emulation. Think the Itanium 1 running 32-bit code.

Do you know if PCI Express will have lower latency than the traditional PCI bus? I haven't found this info anywhere.
Nope, so I won't use my mouth to spout anything.

Who would know the core of a CPU better than the people who created it? AMD and Intel chips work better in different situations... P4s like high bandwidth, blah blah, etc.

I don't think anyone knows the Athlon better than AMD. Third parties may get lucky, or, as in nVidia's case, they may take their knowledge from other areas and try to apply it
Hmm, I don't know whether your reply was meant to agree with me or not. All I can do is recall my statement, which was that I didn't object to the ODMC on flexibility grounds, because you really don't get THAT many RAM technology changes anyway, nor speed upgrades. DDR400 seems sufficient so far. Even the 2.8GHz Athlon 64 FX doesn't utilise the entire 6.4GB/s!

Oh, and BTW, there was a motherboard that supported DDR and SDR: the ECS K7S5A had both SDR and DDR slots. The SiS 735 chipset was underrated; I owned one :)
I never implied there wasn't, nor was it my intention to call for such a mention. I was saying there aren't THAT many occurrences of two different memory technologies being supported, which weakens the argument that the ODMC is bad for flexibility, thereby agreeing with you.
MAC addressing gets into debating the structure of the internet itself.
I wouldn't agree too much there. Based on the most basic networking class I've had so far, layer 2 networking needs MAC addressing, and it's not so much about the Internet's structure as about network structure for enterprises and such. Most of the time, data goes through the IP address and the socket to find its target on a PC.
This has gotten a little heavy on the theory behind how computers work... something I'm very n00bish about. I'd like to get silverpig or papasmurf in here, but they don't post much in this section.
OK, two things:
1) I never knew silverpig even had that much knowledge of this. Where did you see him discuss things in as much depth as we're doing?
2) PS? HA! No offense, man, but PS has got nothing on even you. I don't know why you think of him as knowledgeable in computer theory, but damn, he's far from even the people here. I'd say you know better than that guy.
I could be biased, since I can't stand that misinformed homof00b (to quote Svol :wink: ).

--
<A HREF="http://www.lochel.com/THGC/album.html" target="_new"><font color=blue><b>This just in, over 56 no-lifers have their pics up on THGC's Photo Album! </b></font color=blue></A> :lol:
 
The part I'm posting this for is the information about the unified architecture of the Xbox. It totally reinforces what I said about having a single core, or perhaps two cores, on one die and therefore on one bus sharing the same memory.


Now imagine if the memory controller were built into the P3 in the Xbox. Add gains comparable to what the Opteron gets from this, and it wouldn't surprise me in the LEAST if that specialized P3 setup could compete with a P4 3.2GHz... Of course, there's the fact that the shaders were originally written for PS 1.1 on the Xbox's custom GPU, but ATI's shaders are light-years ahead of what the GF3 and GF4 have and should actually be able to perform these instructions faster, since PS 2.0 can do in one pass a lot of instructions that used to be lengthy.

It's obvious how much of an impact the platform has on system performance. Everyone knows this: these limitations cause CPUs to sit idle most of the time.
Phial, I can pull a good point out of there, but I can also find a bad point which shows that your good point isn't being used in the right context:

-The good point you have is obviously unified architecture. Dedicated.
But it's just that: dedicated.
-The bad point is that it doesn't relate much to PCs. You can unify them as much as you want, man, but the reason PCs can't reach Xbox-level performance (aside from the fact that the Xbox's graphics CAN BE and ARE surpassed by modern games, even if it doesn't look that way, because the polygons on a TV get a blur treatment from the low resolution that creates a feel of high poly) is that a PC is a multi-purpose computer.
You just can't use the Xbox design for a computer, because if you did, and used it for daily computing needs, it's guaranteed to be as slow, if not slower, than the PCs we have. Think about it:
-Windows loaded
-Apps loaded
-Background tasks and processes loaded
-Background apps
-Monitoring software like AVs
-Bloated OS architecture using much of the RAM

And maybe more.
This all easily destroys the advantage PCs could have over console performance. Consoles do one thing and one thing only at a time, almost DOS-esque. Remember DOS performance?
Everything is dedicated to the game's processing. And yes, your main point derives from it: unified and closed architecture. But the Xbox's mainboard layout is really NOT that different from modern ones, and the locations of the components are about as similar as modern on-board hardware!
The Xbox simply and mainly thrives on the single-purpose design philosophy.

There is, of course, the fact that console coders don't need to code for different hardware, anticipate driver problems, etc. They code specifically for the maximum possible usage of the processing units' execution time. On PCs so much is wasted that a P4 3.2C may never even use all its clock speed and execution units. Not even the K8 gets above 3 IPC on average. It's all about coding and optimizing. Consoles get more than the golden treatment, they get platinum! They get their sausages sucked dry! :smile:

First, the move from crappy RDRAM to DDR for latency purposes.
I disagree; RDRAM would have been a far better technology had it been exploited right. True, latency was a problem. But before Canterwood and PAT, a 4.2GB/s PC1066 i850E would kick the nuts off any dual-channel DDR2100 setup, also at 4.2GB/s. Intel really squeezed out RDRAM's power. If Rambus had done better than that and resellers had sold RDRAM at a good price, Yellowstone would have come out and kicked arse. Alas, DDR was what the market demanded.
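Those two "4.2GB/s" figures do line up on paper. A minimal sketch of the arithmetic, assuming 16-bit PC1066 RDRAM channels and 64-bit PC2100 DDR channels, two of each (theoretical peaks only):

```python
# Peak bandwidth of dual-channel PC1066 RDRAM versus dual-channel PC2100 DDR.
def peak_gb_s(channels, width_bits, transfers_per_sec):
    return channels * (width_bits / 8) * transfers_per_sec / 1e9

rdram_pc1066 = peak_gb_s(channels=2, width_bits=16, transfers_per_sec=1066e6)
ddr_pc2100   = peak_gb_s(channels=2, width_bits=64, transfers_per_sec=266e6)
print(f"Dual PC1066 RDRAM: {rdram_pc1066:.2f} GB/s")  # ~4.26 GB/s
print(f"Dual PC2100 DDR:   {ddr_pc2100:.2f} GB/s")    # ~4.26 GB/s
```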

But I agree, the platform does a lot. For the K8 it won't matter as much, though, because there is no longer a traditional northbridge, the main center of mobo performance. Now you really base your purchases on features. Aside from the stupid 600MHz HT link on the nForce3 150, 800MHz HT is the standard, so no worries there either.

Picture a 1GHz P3... now picture that P3 running on a 1GHz FSB (it's possible with motherboards now; of course not with existing chipsets, but it has been done) and dual-channel DDR (or perhaps a more advanced memory solution like DDR2)...
Hmm, I'm not sure I can take that as either realistic or plausible. 😱

That's why I think every single function of a computer aside from storage and RAM should be on the die. Aside from the onboard NIC and sound, which could soooo easily be added to a CPU's design, the motherboard serves only as a bridge. And it's a FVKCINg slow bridge at best. Why not eliminate it?
Aside from my flexibility argument, suppose they did integrate it all. You'd still need circuitry on the side to address the sound and video functions as always (like the DACs), but most of all, what will the RAM and HDD be plugged into? Obviously the mainboard will continue to exist!
Sorry if I've repeated what I've said before; I don't have time to edit and structure my posts today
Naw, I'm enjoying this; you bring up your points and I bring up mine. All's well, and you also provided some quoted material, which is smart in a debate. Now I'm horny.

plus I'm having problems with my girlfriend, so that's why I've been away lately :)
So she finally saw your attempt to come out of the closet and dragged you back in there eh! :tongue:

--
<A HREF="http://www.lochel.com/THGC/album.html" target="_new"><font color=blue><b>This just in, over 56 no-lifers have their pics up on THGC's Photo Album! </b></font color=blue></A> :lol: <P ID="edit"><FONT SIZE=-1><EM>Edited by Eden on 10/21/03 10:11 PM.</EM></FONT></P>