Hello,
I know this topic has already been discussed, but I'm starting this thread because I need a GPGPU and because I found an interesting test.
Given that I need to run heavy parallel financial calculations in Matlab, I would like to find the best value for money 😀 (and I don't think that's unusual...).
So I started searching and found this:
http://blog.accelereyes.com/blog/2012/04/26/benchmarking-kepler-gtx-680/
Let me say that benchmarks, concrete tests, and measured performance should be the intelligent and honest basis for making a comparison.
These are facts, as opposed to the line that "Tesla is for computing and GeForce is for gaming... end of discussion", which I think is exactly how Nvidia wants it framed.
The point is that Tesla and GeForce cards share such a similar architecture that performance cannot differ that much in terms of single- and double-precision GFLOPS. (I won't post the specs of the cards, which you already know.)
If you look at the link above, it is clear that:
a) In single precision, the GTX 480/580/680 are much faster than a Tesla C2070 (the 580 and 680 are around 2x faster).
b) In double precision, the Tesla C2070 is around 1.5x faster than the GTX 580/480. Surprisingly, the 680 finishes LAST. (See the sketch below for how this kind of test can be reproduced.)
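For reference, here is a minimal sketch of how one might reproduce this kind of single- vs double-precision comparison in Matlab with the Parallel Computing Toolbox (this assumes a CUDA-capable GPU and a toolbox version that provides gputimeit; it is only an illustration, not the exact benchmark used in the linked post):

% Rough sketch: time a large matrix multiply (GEMM) on the GPU in
% single and double precision, similar in spirit to the linked tests.
N = 4096;                           % matrix size; adjust to fit GPU memory
A = gpuArray.rand(N, 'single');     % single-precision operands on the device
B = gpuArray.rand(N, 'single');
tSingle = gputimeit(@() A * B);     % median wall-clock time of the multiply

Ad = gpuArray.rand(N, 'double');    % double-precision operands
Bd = gpuArray.rand(N, 'double');
tDouble = gputimeit(@() Ad * Bd);

% A GEMM performs roughly 2*N^3 floating-point operations
fprintf('single precision: %.1f GFLOPS\n', 2*N^3 / tSingle / 1e9);
fprintf('double precision: %.1f GFLOPS\n', 2*N^3 / tDouble / 1e9);

The gap between the two numbers is exactly where the Tesla/GeForce segmentation shows up.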
A little detail: a Tesla C2070 costs around $2,000, while a GTX 580 costs around $200-300 on eBay... Hypothetically, with the same money I could buy 10 GTX 580s and kick the ass of a Tesla.
What I mean is that the price difference between Tesla and GeForce is not justified even for heavy computation: a single GTX 580 gets close to the performance of a Tesla at 1/10 of the cost.
If all this is true, Nvidia is aware of it, and that would explain why their benchmarks never pit GeForce cards against Tesla cards in compute performance. They want to keep the two businesses as separate as possible; otherwise they wouldn't be able to keep selling Tesla cards at that premium price.
The part of the story that really pisses me off is that not only are they very unclear about the performance difference between Tesla and GeForce cards, they have also introduced limitations on the new GeForce cards, which would explain why the 680 is the slowest in double-precision calculations.
Why is it that if you want to use the card to play games you pay $200-300, but if you want to use essentially the same card for another purpose you pay thousands of dollars?
Dear Nvidia, IMHO this is not fair behaviour. Unless someone demonstrates to me with facts and benchmarks that the premium price for a Tesla is justified, I am going to buy a GTX 580 for my research. And you should pray that ATI GPUs never get supported in Matlab, because at that point, if you continue with this behaviour, you will lose a long-time customer!