Baumy15 :
Eduello :
The TITAN Z is basically two TITAN BLACKs in SLI, while the R9 295X2 is two R9 290Xs in Crossfire. This means the TITAN Z is the more powerful GPU of the two. The reason the AMD is said to be the fastest on the market is that the TITAN Z is not out yet.
So basically the Nvidia one is more powerful, but the AMD one gives you better bang for your buck.
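To make the "bang for your buck" point concrete, here's a tiny sketch of performance per dollar. The prices are the ones quoted later in this thread; the relative-performance numbers are hypothetical placeholders, not benchmark results:

```python
# Rough "bang for your buck" sketch. Prices come from this thread;
# the relative-performance numbers are HYPOTHETICAL, not benchmarks.
def perf_per_dollar(relative_perf, price_usd):
    """Units of relative performance per dollar spent."""
    return relative_perf / price_usd

titan_z  = perf_per_dollar(relative_perf=100, price_usd=3000)  # baseline card
r9_295x2 = perf_per_dollar(relative_perf=95,  price_usd=1500)  # assumed ~5% slower

# Even if the 295X2 were somewhat slower, it can still win on value:
print(titan_z < r9_295x2)  # → True
```

The point is that "more powerful" and "better value" are different axes: a slower card can easily come out ahead once you divide by price.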
The only problem with that is the TITAN Z is 3000 dollars and the 295X2 is 1500 dollars, so you can get two 295X2s for the price of one TITAN Z. So no matter which way you look at it, the 295X2 is more powerful.
Seriously? I want you to read what you just wrote. Being able to buy two cards for the price of one does not make that card more powerful!
This is the sort of ignorance that irritates me.
If you wish to objectively compare the two cards, IGNORE THE PRICE POINTS.
They are both priced at obscene and arbitrary levels! Focus on the actual performance! It's not like anyone here will actually buy them either way.
I explained this earlier.
First, please look at the bare-bones numbers. You'll see that the cards match up almost equally, except the ATI card has a faster pixel rate and better floating-point performance. The Nvidia card has much better memory performance, and in the end will perform better in games and across a wider spectrum of applications requiring 3D rendering.
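The trade-off between raw floating-point speed and memory performance can be illustrated with a simple roofline-style calculation. All numbers below are illustrative, not the real TITAN Z / 295X2 specs:

```python
# Roofline-style sketch: whether raw FLOPS or memory bandwidth limits a
# workload depends on its arithmetic intensity (FLOPs per byte moved).
# The GPU numbers below are ILLUSTRATIVE, not real card specs.

def attainable_gflops(peak_gflops, bandwidth_gbs, flops_per_byte):
    # You can never go faster than the compute peak, nor faster than
    # the memory system can feed you data.
    return min(peak_gflops, bandwidth_gbs * flops_per_byte)

card_fast_compute = dict(peak_gflops=11000, bandwidth_gbs=320)  # big FP throughput
card_fast_memory  = dict(peak_gflops=8000,  bandwidth_gbs=670)  # big bandwidth

# A bandwidth-hungry workload (e.g. texture-heavy rendering, ~4 FLOPs/byte):
low_intensity = 4
print(attainable_gflops(**card_fast_compute, flops_per_byte=low_intensity))  # → 1280
print(attainable_gflops(**card_fast_memory,  flops_per_byte=low_intensity))  # → 2680
```

In other words, for bandwidth-hungry workloads like game rendering, the card with better memory performance can beat the one with the bigger headline FLOPS number.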
As I said before, when it comes to these two cards, the performance will come down to the CODING API THAT IS THE FOUNDATION FOR THESE GPUS.
And once you read about GCN, Mantle, Nvidia's API, OpenGL and DirectX, how they are translated into calls the GPUs can understand, and Nvidia's cards' ability to launch a child kernel on the GPU itself, you'll realize how much better the Nvidia card will be in the long run.
Even if ATI steps up their game and produces a much more robust and widely-used API (which probably won't happen because of the "network effect"... google it), we will still continue to have games and applications developed with Nvidia's architecture and API at the forefront of the developer world.
It's kind of like explaining to a child the difference between Volts, Amperes, and Watts.
Watts are a measure of power. It's pretty much Volts × Amps.
Imagine Volts is the speed at which water flows through a pipe, while Amps is the diameter, and therefore the volume, the pipe can carry.
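For the record, the electrical relationship itself is just P = V × I. A minimal sketch (the 12 V / 25 A figures are only an illustration, not any card's actual draw):

```python
# Power (watts) is voltage times current: P = V * I.
def watts(volts, amps):
    return volts * amps

# A card drawing 25 A from the 12 V rail dissipates 300 W:
print(watts(12, 25))  # → 300
```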
Now, to apply this analogy so you can understand: think of the ATI card as having a screaming-fast processing unit that consumes a lot more energy, able to dump out a lot of calculations into a pipe that isn't very large or very fast. It's like a super genius working alone at a factory, putting his product on a conveyor belt that is very slow and very, very small.
In contrast, Nvidia produces a card with an insanely fast and large pipe, which can DELIVER a lot more processed data, faster, while burning less energy. And instead of one "super genius", it uses many, many smaller but still quick and smart worker bees.
ON TOP of that, Nvidia's API is much more advanced, more widely used, and is the current industry standard marker for producing/developing game engines that work well for rendering under DirectX 11/12 and OpenGL 4.x+.
READ about the technology behind these cards! I would say ATI is in the dark ages, like back when Intel was still producing the Pentium 4 Extreme Edition: a screaming-fast processor, but power-hungry, hot, and only TWO THREADS. That's what I compare the 295X2 to. It's a fast GPU, don't get me wrong, but they've built it in a way where that power is not used efficiently, and it isn't bridled to its full potential by the underlying software that drives it.
The Nvidia card is refined, uses less power, and stays on par with its opponent while still OUTPERFORMING it in the more common applications.
Please do not get me wrong, I have been a proud owner of both ATI and Nvidia products. I was extremely pleased in the days of the HD 7770 Black Edition (GHz Edition) cards, which STILL keep up with today's newer cards in gaming while holding a low ~$100 price point. But I also know where the brand is strong, and where it isn't.
You want a good budget card that will last you a while and still keep up while you slowly dial back the quality settings in your games? Go with a cheaper, budget-friendly ATI card. They're affordable for a GOOD REASON.
Want the most bleeding edge visual experience for a premium dollar, in a more refined and polished end product? Go with the Nvidia card. But remember, premium has a price.
TLDR?
Z > 295x2
I have given a lot of solid, true reasons; you can google and read up on the technology behind the Kepler and (insert random island name here) architectures. Believe me, if you want an in-your-face demonstration of what I'm talking about, try installing any Linux distro on your system, install the base proprietary drivers without any BS bells and whistles, and run any Source engine game with OpenGL, or a UT2004-engine-based game with OpenGL. Max them out, write down the frame rates, and you will understand the difference.