ATI vs. Nvidia technical question

azconnie

Distinguished
Sep 26, 2009
199
0
18,690
OK. Somebody please 'splain this to me.

The Nvidia GTX 260 Core 216 55nm (the only card in the history of GPUs whose name is longer than its part number) contains:

216 cores @ 576 MHz
GDDR3 on a 448-bit bus with a memory clock of ~2000 MHz
32 ROPs
and is on PCIe 2.0

The ATI Radeon 4870 contains:

800 cores @ 750 MHz
GDDR5 on a 256-bit bus with a memory clock of 900 MHz
16 ROPs
and is also on PCIe 2.0

How do these cards perform on par?

Does having 1/2 the ROPs and memory bus actually slow down 800 SPs to the performance of 216? If so, why doesn't ATI just bump the memory bus up to 512-bit and add some ROPs? What architectural faux pas, as it seems, cripples 800 SPs that much?

Also, the Nvidia site has a Fermi (GT300) page for whoever is curious. Oh, and for the next person who needs GT300 rumors: it makes waffles and massages your feet. :D

 

rescawen

Distinguished
Jan 16, 2009
635
0
18,990
ATI's core count is divided by 5 (don't remember the specific reason), so basically it's 216 cores vs. 160 cores from ATI.

Memory doesn't affect the raw speed of the graphics card so much as the resolution you want to play at.

For example, you can compare ATI and Nvidia memory by doing the following. The bit width is the bandwidth of information that can pass through at once, so a 512-bit bus at 1800 MHz moves as much data as a 256-bit bus at 3600 MHz. Bit width multiplied by speed gives the real memory throughput.

Dunno what a ROP is, maybe another guy can explain that.

So the real comparison would be 216 cores at 576 MHz vs. 160 cores at 750 MHz.

448-bit × 2000 MHz vs. 256-bit × 3600 MHz, which is about the same.
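To put actual numbers on that bit-width-times-speed rule, here's a quick Python sketch. The clocks used are the effective data rates from the posts above (GDDR5's 900 MHz base clock moves data at 3600 MHz effective), and the formula is just peak theoretical bandwidth, not anything measured:

```python
# Peak theoretical memory bandwidth: bytes/s = (bus width in bits / 8) * effective clock.
def bandwidth_gbps(bus_bits, effective_mhz):
    """Peak theoretical bandwidth in GB/s."""
    return (bus_bits / 8) * effective_mhz * 1e6 / 1e9

gtx260 = bandwidth_gbps(448, 2000)  # GDDR3, ~2000 MHz effective
hd4870 = bandwidth_gbps(256, 3600)  # GDDR5, 900 MHz base -> 3600 MHz effective

print(f"GTX 260 c216: {gtx260:.1f} GB/s")  # ~112 GB/s
print(f"HD 4870:      {hd4870:.1f} GB/s")  # ~115 GB/s
```

So the narrow-but-fast GDDR5 bus and the wide-but-slower GDDR3 bus land within a few percent of each other.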

In the end the cards perform about the same.
 

azconnie

Distinguished
Sep 26, 2009
199
0
18,690
OK. Interesting. That makes sense. Now I just have to wait for the "next guy" to explain why *essentially* only 1 in 5 ATI cores counts.
So in the memory, bits are like channels (or lanes) and the speed is the... well, speed. Kinda like volts x amps = watts?
 
This is why you can't judge a GPU by its paper specs. There are ways to divide core counts and so on to try to make sense of how 800 = 216, but really what it comes down to is that what ATI calls a core and what Nvidia calls a core are two completely different things, so they aren't directly comparable. If you coded specifically for the 4800 series, it would perform a lot better than it does, and the same is probably true for the 260. However, most games are coded somewhere in the middle, hence the weird core equivalencies.
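One way to picture the "coded in the middle" point: treat ATI's 800 cores as 160 units of 5 ALUs each, and assume the compiler only manages to fill some of those 5 slots on an average instruction. The slot counts below are purely illustrative, not from any real benchmark:

```python
# Illustrative sketch: ATI's 800 "cores" are 160 units of 5 ALUs each.
# If the compiler fills only `slots_filled` of the 5 lanes on an average
# instruction, the effective ALU count shrinks proportionally.
def effective_alus(units=160, slots_filled=5):
    return units * slots_filled  # ALUs doing useful work per clock

for slots in (5, 3, 2):  # ideal code, "middle" code, awkward code
    print(f"{slots} of 5 slots filled -> {effective_alus(slots_filled=slots)} effective ALUs")
```

With all 5 slots filled you get the full 800; at 2 of 5 you're down near 320, which is why the card can look like anything from a monster to merely a GTX 260 peer depending on the shader code.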
 

azconnie

Distinguished
Sep 26, 2009
199
0
18,690
OK. So it's not that only 1 in 5 works, but rather that 4 out of 5 are like ALU secretaries in a 5-instructions-per-clock bundle. So really the ATI architecture is a massively parallel array of simpler cores working in huge clusters of five-wide shaders, forming (SP/5) actual SPs. No wonder they run hotter. It would explain how they're cheaper, though: make cheaper parts, but use so many that it works as well as the other brand. Doesn't that open the door to more rendering errors, and wouldn't losing one core result in losing 5? Who designed this system?

Also, for those who opened Mactronix's first link, I have proof Democrats have a thorough knowledge of computers. There is no other way healthcare reform could have that many lines in a single flow chart. :lol:
 
Actually, ATI cards don't run hotter. They can run hotter if you make them, but I'd say that points toward better quality. My 4850s do 35C-60C and my 4650 is idling at 30C right now. Temperature depends on the quality of the fan and heatsink more than on the chip. If you actually compare TDPs, I believe they are quite close.
 


Remember, I did say those were outdated reviews; the architecture has been improved twice since then. Also worth noting: we don't know what ATI would have built if they weren't trying to make a card that was fully DX10 compliant, which was really closer to what we now call DX10.1. Nor do we know what Nvidia would have come up with if they had stuck to the remit. Microsoft quite famously moved the goalposts when Nvidia couldn't do it, and ATI were set back quite a bit by this. They decided to stick with the architecture, though.

Mactronix
 

azconnie

Distinguished
Sep 26, 2009
199
0
18,690


Yeah, I see what you mean. I actually prefer the ATI stock cooler to the Nvidia one. It just looks more effective, and seems to deliver. I'm just angry that Nvidia put 51 mm mounting holes on the G200s so the 53 mm G92 coolers wouldn't fit them. Now I've got to wait months for a sub-$100 waterblock for the 260. I wanted that liquid cooler with the fan that Thermaltake makes, but that's a no-go unless I settle for a 9800GTX+, and that ain't gonna happen.
 
That's a pain. I know when I replaced the cooler for my 4850, it took forever to find a nice air cooler that would vent out the back and fit. It sure would be nice if there was some standardization or continuity, but that is one thing I am certain will never happen.
 

azconnie

Distinguished
Sep 26, 2009
199
0
18,690


Yeah, OK. So Microsoft does it again: they take great potential and slash it for profit *cough* 360 *cough.* So the reason Fermi didn't come out last year is that MS reconfigured DX10 before either company had built for it, leaving each with an almost-DX10 card they had to scramble to correct? Would explain the "update" the G92 brought us.
 

azconnie

Distinguished
Sep 26, 2009
199
0
18,690


And it is for this reason I will NEVER case mod, repair, or even own an HP computer.