warezme :
I wouldn't buy it, but wait until they release all 512 of its cores and shrink it into a smaller, cooler package. Then ATI will be in big trouble.
"Big trouble?" You act like AMD can't make a refresh itself. There's also a bit of a problem with relying on a die shrink... 40nm is still pretty new, and that's what Fermi's based on. As I'd mentioned in prior GF100 articles, we're looking at a MINIMUM of 6 months before a die shrink can happen. So with the card just freshly out, we're stuck waiting until November, if not later. Given that the main flagship 5870 came out in September, that's already a 6-7 month gap; by the time a refresh'd Fermi comes out, it'll have to face against at LEAST a refreshed RV800, if not RV900, which is still implied to be due out by the end of this year.
In that case, I have my doubts that AMD would have much to fear from a die-shrunk GF100; who'd care about a 512-core GF100 running at a higher clock rate when AMD will have its own 5890/5990 in return, or a 3,200-SP 6870 and a 6,400-SP 6970? And it's not like nVidia can release a refresh of real consequence BEFORE the die shrink: yields are abysmally low, and they're ALREADY missing their clock rate targets.
To be honest, nVidia's problem is that they should've scrapped Fermi months ago and come up with a whole new design. Instead, they wound up painting themselves into a corner here.
spoofedpacket :
Why would he even be worried about the $50/ea price difference in cards or a motherboard if looking at such a solution to begin with? Most of these comments about price hit me like someone saying "I'd buy that Ferrari but I can't afford to change the oil, so I don't know what to do!"
That's actually a flawed analogy. For a lot of the comments on prices here, it's more like saying, "I know I can afford it, but why would I spend $1,000,000 US on an Enzo Ferrari when I can spend $650k on an SSC Ultimate Aero, which is faster?" Of course, Ferrari has more fans, in spite of their car not being the fastest.
knowom :
Yeah, a very inventive Matrox/SoftTH rip-off. Triple-monitor setups are niche for a reason: they're expensive and unattractive due to bezels.
Really, it's nothing but a temporary solution until monitor makers come up with a way to make bigger, higher-resolution monitors more affordable.
First off, it's not a true rip-off. EyeFinity isn't just "more than two monitors per card"; that's something nVidia has done with its Quadro NVS cards for years. Rather, it's the ability to arbitrarily map multiple monitors onto different parts of one large logical display, specifically for gaming; Matrox managed this for simple desktop tasks for years, but never really touched gaming with it. This is truly something great.
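If it helps to picture the "single large surface" idea, here's a rough Python sketch; the names and the grid-layout logic are mine for illustration, not AMD's actual driver API:

from dataclasses import dataclass

@dataclass
class Viewport:
    monitor: int   # which physical monitor this slice lands on
    x: int         # offset into the big virtual surface
    y: int
    w: int
    h: int

def single_large_surface(cols, rows, mon_w, mon_h):
    """Carve one big virtual display into per-monitor viewports (illustrative only)."""
    return [Viewport(monitor=r * cols + c, x=c * mon_w, y=r * mon_h, w=mon_w, h=mon_h)
            for r in range(rows) for c in range(cols)]

# A 3x1 group of 1920x1080 panels looks to the game like one 5760x1080 display:
for vp in single_large_surface(3, 1, 1920, 1080):
    print(vp)

The game only ever sees the one oversized resolution; the driver handles carving it back up per output, which is exactly what makes it useful for gaming rather than just desktop spanning.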
To be honest, it's INCREDIBLY doubtful that huge, many-megapixel displays will ever be "affordable," or even really practical. Screen prices climb far faster than linearly as panels scale up in size; the panels have to be monolithic, so they suffer from size-related yield problems in a fashion not entirely unlike GPU dies. This is further compounded by market size, which affects costs through economy of scale: even 2560x1600 monitors sell in reasonable volume, and 1920x1200/1080 ones are everywhere, making them far cheaper, even in sets of three, than what even a modest 3840x2160 monitor could ever hope to be.
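To put some rough numbers behind the yield argument, here's a quick back-of-the-envelope sketch using the classic Poisson yield model (yield = e^(-defect density x area)) that's commonly applied to chips; the defect density is a made-up illustrative value, not real manufacturing data:

import math

def relative_panel_cost(diagonal_in, defects_per_sq_in=0.002):
    """Relative cost of one defect-free 16:9 panel under a Poisson yield model.
    All constants are illustrative; real fab numbers are closely guarded."""
    area = (diagonal_in ** 2) * (16 * 9) / (16 ** 2 + 9 ** 2)  # 16:9 screen area, sq in
    yield_rate = math.exp(-defects_per_sq_in * area)           # odds a panel is defect-free
    return area / yield_rate                                   # material cost / usable yield

for d in (24, 30, 46, 60):
    print(f'{d}" panel -> relative cost {relative_panel_cost(d):8.0f}')

Even with made-up constants, the shape is the point: the 60" panel has roughly 6x the area of the 24" one, but comes out around 80x the cost per good unit, which is why three small monitors beat one giant one on price.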
The other part is that it'd still take a bunch of monitor cables; even DisplayPort 1.2, the bleeding edge of monitor connection technology, is limited to around 17.2 gigabits per second of effective bandwidth. At a 60Hz refresh rate and 24-bit color, that works out to a cap of roughly 12 megapixels per cable. So while EyeFinity has been shown to drive well over 12 megapixels across multiple outputs, a single monitor with that many pixels would STILL require a complex multi-cable solution, ensuring said monitor stayed expensive.
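The per-cable cap is simple arithmetic, if you want to check it yourself (this ignores blanking overhead, so the real-world ceiling is a touch lower):

# DisplayPort 1.2's effective payload bandwidth is ~17.28 Gbit/s (HBR2, 4 lanes).
link_bps    = 17.28e9
bits_per_px = 24      # 24-bit color
refresh_hz  = 60

max_px = link_bps / (bits_per_px * refresh_hz)
print(f"~{max_px / 1e6:.1f} megapixels per cable")        # ~12.0 MP

# For comparison, a triple 2560x1600 EyeFinity group:
print(f"3 x 2560x1600 = {3 * 2560 * 1600 / 1e6:.2f} MP")  # ~12.29 MP

So a triple 2560x1600 group already sits right at the single-cable limit; any monolithic monitor past that point needs multiple links no matter what.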