Kepler news and discussion


I think I remember an old article from a few months ago with:
660 Ti = below 580
660 = 570
650 Ti = 560 Ti
650 = 560

or something like that. But the 660s were supposed to have a 256-bit bus, not 192 . . .
 
looniam

That seems realistic...

But there are always stock problems with the 600 series ;/ wtf is that? Was there the same problem with the 500 series? I haven't been able to buy a GTX 670 OC edition for two weeks!!! And prices are up at $500!!!
 



I said a third card, not a third cut. 680 = full chip, 670 = first reduced chip, then the possibility of a second, further-reduced chip for the 660. That would drop it right on top of the 580 performance-wise, by my reckoning.
GK106 would then drop into the 570 area, with a variant (probably the non-Ti version) landing in the 7850 performance bracket, as Nvidia don't have a card there at the moment. They could probably make a further sub-segment by reducing the bandwidth/memory as well, just like the 460 used to.

That's my take on it anyway; a lot depends on how the GK106 performs. I can't see Nvidia further reducing the GK104 if they don't need to, and the GK106 would obviously, as someone said, be much more cost-effective to use. However, if yields give them a load of chips suitable for a reduced card, then that's what they will do.

Mactronix :)
 
GTX 680 = 8 SMX
GTX 670 = 7 SMX
GTX 660 Ti = 6 SMX
GTX 660 = 5 SMX
GTX 650 = 4 SMX
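For scale, here's a quick sketch of what those SMX counts would mean in CUDA cores, assuming the rumored lineup above and Kepler GK104's 192 cores per SMX (the 680 and 670 figures match their real 1536 and 1344 cores; the rest are just the rumor):

```python
CORES_PER_SMX = 192  # Kepler GK104: 192 CUDA cores per SMX

# Rumored lineup from the post above (card -> SMX count)
rumored_lineup = {
    "GTX 680": 8,
    "GTX 670": 7,
    "GTX 660 Ti": 6,
    "GTX 660": 5,
    "GTX 650": 4,
}

for card, smx in rumored_lineup.items():
    print(f"{card}: {smx} SMX -> {smx * CORES_PER_SMX} CUDA cores")
```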


If the GTX 660 costs $300, then that's not a good price point... two of them in SLI would match a GTX 680 but cost more ;/
 

You mean the 660 Ti, right? :lol:
 

Yeah, but the 660s/650s won't be made on the same architecture as the 670/680...

Edit: Or so I've heard at least.
 

Agreed
Now remember, there were video leaks of GK107's performance in Futuremark, and it was showing decent performance for an entry-level chip. So if yields were fine with GK107, then I'd say GK106 won't disappoint. This time we may see x50 cards performing better than their competition.
 
Sorry to bring this up again, but I need to reply to some points.



It's just a rule of thumb: the more GPU power you have, the more PCIe bandwidth you will use in CFX or SLI. Even if you're fine at x8/x8, that means you're leaving some potential from the video cards on the table, meaning you could increase the number of pixels on your screen to saturate the PCIe link, or increase the image quality.
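As a rough illustration of that rule of thumb (this counts raw, uncompressed framebuffer data only, so treat the numbers as a sketch rather than real SLI/CFX bus traffic):

```python
# Sketch: raw framebuffer data per second grows with both resolution
# and frame rate, so a faster or higher-res setup has more potential
# traffic to push over whatever link carries the frames.
def framebuffer_gbps(width, height, fps, bytes_per_pixel=4):
    """Uncompressed framebuffer traffic in GB/s."""
    return width * height * bytes_per_pixel * fps / 1e9

print(framebuffer_gbps(1920, 1080, 60))  # ~0.50 GB/s (single 1080p @ 60)
print(framebuffer_gbps(5760, 1080, 60))  # ~1.49 GB/s (triple 1080p @ 60)
```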

And I think you got the wrong side of the post; I stated that mal didn't provide enough data to conclude that PCIe 3 wasn't needed for CFX or SLI at monster resolutions. Just that.

Not really, m8; you must be confused about the info I was spilling... 😛
Have you had an SLI setup before? If not, then how do you know it's a crappy solution?
Hmm.

Running older cards like 8800 GTs in SLI doesn't count.

Well, like I told you: a friend moved a couple of 8800 GTs in SLI from a Q6600 to an i5 2500K build and there was a huge gain in frames. That means it was bottlenecked by either the CPU or the platform. I'd say some of both, but PCIe 1 vs PCIe 2 was a huge jump in bandwidth for SLI and CFX at big resolutions.

I moved from an nForce4 Ultra with my Athlon X2 4400 to the Phenom II and the 990FX I have today (passing through the 890FX). I did SLI testing at the time comparing the Q6600 and the Athlon X2 for scaling (we were curious about the jump). The differences at 1080p weren't visible, since the cards produced over 60 FPS all the time, so I called it a day. I'm sure that if I could compare those again with three 1080p monitors, there would be a huge difference in favor of the Q6600, since it was able to feed the 8800 GTs more consistently thanks to a lot of factors, not just the PCIe 1 cap.

A friend of mine also has 6870s in CFX with his 965, and he's restricted (or capped), but by the CPU. He can't come close to saturating the PCIe 2 lanes of his 990FX. So I'm sure (going back to my argument) that there is currently no test that can show how things will play out in reality, and thus none that can fully establish that PCIe 3 is useless. We need more software and faster CPUs to see the difference with the benchmarks at our disposal.

Moreover, all these tests are done under lab conditions. Remember that USB 2/3 also uses PCIe lanes, and SATA as well. I have a RAID 0 and it uses some bandwidth, and we can keep adding more. I don't know whether current sound cards use PCIe lanes or legacy ISA, but they also use bandwidth, albeit minimal.

I like the conclusion....

Conclusion

When we look at the theoretical performance, we can see that some of the PCI Express 3.0 promise has been fulfilled.

Firstly, the increase in bandwidth does have an impact, particularly with GPU-to-CPU transfers, though it isn't as significant in the opposite direction even if there are theoretical gains of (just) 50%. The reason for this more limited gain is difficult to pin down, as we only have a single PCI Express 3.0 compatible platform for now, Intel's LGA 2011, and one PCI-E 3.0 graphics card, the Radeon 7970. Whether it comes from the interface, platform or card, it's difficult for us to say as yet.

In practice, the OpenCL applications we were able to test are for now far from being limited by memory bandwidth. Already offering almost 7 GB/s, PCI Express 2.0 x16 covers most usages, even if certain very specific pro applications will certainly be able to take advantage of the theoretical gains.

On the gaming side, the increases in bandwidth aren't of much more use. While there's still a difference between PCI Express 2.0 x16 and x8 (a difference that is even more marked with PCI Express 1.0 x16 in CrossFire), in practice this boils down to no more than one percent between PCI Express 3.0 x16 and x8 modes with a single card, and half a percent in CrossFire. This is positive for those hoping to use CrossFire on future Ivy Bridge platforms, whether with two cards at x8/x8 or three cards at x8/x4/x4, given that PCI Express 3.0 x4 mode doesn't bring performance down too much.

I would mark that part in red as well, mal.

The problem is, no one is actually putting strain on the PCIe bandwidth yet. And that's not the same as saying we won't in the near future. As with every bandwidth improvement in a given area, we'll see new software using it down the road. Current games and professional software only got their hands on the new tech a few months ago. Give them some time.

That's why I say we need Toms to run a very strict testing procedure to show us how different setups can make use of the added bandwidth PCIe 3 gives.
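For reference, here's a sketch of the theoretical per-direction PCIe bandwidth behind those figures (the quoted "almost 7 GB/s" is the practical ceiling on PCIe 2.0 x16's theoretical 8 GB/s):

```python
# Theoretical per-direction PCIe bandwidth: transfer rate (GT/s)
# times encoding efficiency, divided by 8 bits/byte, times lanes.
GENS = {
    "PCIe 1.0": (2.5, 8 / 10),     # 8b/10b encoding
    "PCIe 2.0": (5.0, 8 / 10),     # 8b/10b encoding
    "PCIe 3.0": (8.0, 128 / 130),  # 128b/130b encoding
}

def bandwidth_gbs(gen, lanes):
    gt_s, eff = GENS[gen]
    return gt_s * eff / 8 * lanes

for gen in GENS:
    print(f"{gen} x16: {bandwidth_gbs(gen, 16):.2f} GB/s")
# PCIe 1.0 x16: 4.00, PCIe 2.0 x16: 8.00, PCIe 3.0 x16: 15.75
```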

Cheers!
 
@Yuka,
The point is, though, that reviewing at monster resolutions won't help test the bandwidth. The data sent over the bus remains more or less constant regardless of resolution; I think that's a little game-dependent, but from what I know it's a pretty solid statement.
So as I see it, upping resolutions, and even going tri or quad on the GPU front, will only show up CPU and/or GPU restrictions.
Forcing the resolution down to increase the FPS will increase the traffic over the bus, meaning the resulting differences are much more likely to be a result of PCIe bus differences.
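To put rough numbers on that idea (the per-frame figure here is purely hypothetical, just to show the scaling):

```python
# Sketch: if the data sent over the bus per frame (draw calls,
# geometry, texture updates) is roughly constant regardless of
# resolution, then bus traffic scales with FPS alone, so lower
# resolutions (higher FPS) stress the bus harder.
PER_FRAME_MB = 30.0  # hypothetical per-frame transfer, NOT a measured value

for res, fps in [("2560x1600", 45), ("1920x1080", 90), ("1280x720", 160)]:
    print(f"{res} @ {fps} fps -> {PER_FRAME_MB * fps / 1000:.2f} GB/s over the bus")
```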

That's how I see it; happy to be proven wrong and learn a little, though.

Mactronix :)
 
lol, well good, now you'll be familiar with my BIOS, since the Sabertooth and Crosshair have basically the same BIOS. And if he has a 980 OEM chip, I seriously have a hunch that it's AMD that wanted the multiplier locked... because I noticed that when I set the multiplier, it didn't change, and when I reset, it didn't change anything either.
 



Yes, it does make sense, actually.

If the video cards need to process more data per frame, that means we're shifting the burden onto each card's own memory interface, not the PCIe bridge. And if we're processing less information per frame but need to draw frames faster on the screen, the bandwidth used will be PCIe's, since we need to pass the frame information to the card in charge of drawing the image.

OK, that still makes for an interesting test. It means there's a sweet spot between GPU power, internal bandwidth, and PCIe interface speed for an optimal setup at a target resolution. And in that regard, I concede the point: PCIe, right now, isn't a concern yet. We need more cards that can process frames faster, and certain settings, to make good use of PCIe 3 speeds.

Cheers!
 
It means that if you're going SLI or CFX on a single-monitor setup at, say, 1080p, then the PCIe bus might become the bottleneck for the whole setup; so far it seems that way, at least. Currently, though, PCIe 2 x8 and PCIe 3 x4 *should* have enough bandwidth and latency to cope with the data.
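A quick back-of-envelope check of why those two configurations should behave similarly (theoretical figures, per direction):

```python
# PCIe 2.0 x8 and PCIe 3.0 x4 land at nearly the same theoretical
# per-direction bandwidth, which is why they should cope similarly.
pcie2_lane = 5.0 * (8 / 10) / 8      # 0.50 GB/s per lane (8b/10b)
pcie3_lane = 8.0 * (128 / 130) / 8   # ~0.985 GB/s per lane (128b/130b)

print(f"PCIe 2.0 x8: {pcie2_lane * 8:.2f} GB/s")  # 4.00 GB/s
print(f"PCIe 3.0 x4: {pcie3_lane * 4:.2f} GB/s")  # 3.94 GB/s
```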

Cheers!
 

Ah crap!

Sorry about posting it then; I got all excited that it looked like the GK110 was getting used in a gaming card.

Nothing to see here, move along . . .
 