5870x2?

But I think the difference might be that NVIDIA is going for a completely new architecture while ATI admits their RV870 is basically a DX11-tweaked RV770.

Here's hoping the GT300 wipes the floor with ATI's cards.

Here are some specs for the GT300, originally from a German site: 1.8 billion transistors. Bottom-line performance for the GT300 should be around 3 TFLOPs (3x its equivalent, the GTX 285), assuming a natural performance increase when moving to more shaders, higher clocks and a smaller die. If there are massive architectural changes this value could vary a little or dramatically, but it remains to be seen whether that happens, and if so by how much.
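That 3x figure is easy to sanity-check with the usual peak-throughput formula (a rough sketch using the GTX 285's published shader count and shader clock; the 3 FLOPs/clock is the GT200's dual-issue MAD + MUL):

```python
# Peak shader throughput = shaders x shader clock x FLOPs per shader per clock.
# GTX 285: 240 shaders at 1476 MHz, 3 FLOPs/clock (dual-issue MAD + MUL).

def peak_gflops(shaders: int, clock_mhz: float, flops_per_clock: int = 3) -> float:
    """Theoretical peak single-precision GFLOPS for a GT200-style shader array."""
    return shaders * clock_mhz * flops_per_clock / 1000.0

gtx285 = peak_gflops(240, 1476)
print(f"GTX 285 peak: {gtx285:.0f} GFLOPS")             # ~1063 GFLOPS
print(f"3x GTX 285:   {3 * gtx285 / 1000:.2f} TFLOPs")  # ~3.19 TFLOPs
```

So "around 3 TFLOPs" is just triple the GTX 285's theoretical peak; it says nothing about real-world scaling.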

EDIT: oops, my bad. Those specs were for the planned GT212... DOH!

The GT300 is expected to have around 2.8 billion transistors and roughly the same die size.
 
Thing is, read some of their patents. It appears they're heading into major DP/GPGPU solutions while also trying to dedicate some of those extra transistors to regular gaming usage.
So how much is dedicated to which, we just don't know.
 



At these speeds PCB traces can't be designed "however you want"; that ship sailed about 900 MHz ago. At these frequencies you can't really design anything from a purely digital perspective: at gigahertz-plus frequencies, the rise time of signals relative to the propagation distance and propagation speed cannot be ignored. As such, even a simple PCB interconnect of something like 0.5 cm starts to look like a transmission line rather than a simple interconnect, and in any transmission line, lengths and impedances start to play a major role. And that's going to be the same for both ATI and NVIDIA.
 
I see what you mean by that.
But...
1) It simplifies layout and routing. JEDEC typically prescribes no more than a 0.5" interconnect for even relatively low-frequency DRAM. But NVIDIA has already moved to GDDR5 in its current-generation cards, so this is something that applies to both companies. However, ATI may be moving on to XDR2 (from Rambus, IIRC).
2) It's not a general rule that you can ignore PCB design, as this is specific to the memory interface. Interconnects between other components still need to be designed according to transmission-line lengths and terminating impedances, and that applies to both the red and the green cards.



 


I think you're referring to signal phase adjustment preventing reflection if the line isn't properly terminated. Yeah, but again, it applies to the GT300 just as much as it does to ATI's cards.
 
I was referring to the "on their heels currently" statement, which was implying ATI's lower prices; I countered that with the silicon size, the yields and the GDDR5 usage, or more correctly, the back foot.
I see we view how the current gen ran its course differently; that's OK.
 
Arguably, yes. But IIRC the biggest manufacturing cost advantage of GDDR5 is allowing the same throughput while requiring a narrower bus (I haven't actually done the calculations on volumes to see where the biggest cost-reduction factors are, but this is what I remember reading somewhere).
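The narrower-bus point is simple arithmetic (an illustrative sketch; the per-pin rates here are made-up round numbers, with GDDR5 at roughly double GDDR3's rate):

```python
# Peak memory bandwidth = bus width (bits) x per-pin data rate (Gbps) / 8.
# GDDR5 roughly doubles the per-pin rate, so a 256-bit GDDR5 bus can match
# a 512-bit GDDR3 bus while needing half the memory-interface pins/traces.

def bandwidth_gb_s(bus_bits: int, pin_rate_gbps: float) -> float:
    return bus_bits * pin_rate_gbps / 8

print(bandwidth_gb_s(512, 2.0))  # 128.0 GB/s -- wide GDDR3-style bus
print(bandwidth_gb_s(256, 4.0))  # 128.0 GB/s -- same throughput, half the pins
```

Fewer pins means a smaller memory-controller perimeter on the die and fewer PCB traces to route, which is presumably where the cost saving comes from.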

However, NVIDIA has the performance crown AND the market share... I don't know a great deal about economics or business, so correct me if I'm wrong, but from what I know this allows NVIDIA to dictate pricing (especially the upper bound), forcing ATI to compete in terms of cost/performance rather than through a performance lead... similar to how AMD competes with Intel.

And so ATI is forced to compete by being cost-effective, regardless of whatever lower manufacturing costs GDDR5 may bring with it.

This sure seems to me like ATI IS on the back foot. I'm not one inclined to think any company is benevolent and selfless. Right now NVIDIA, Intel and Microsoft are the profiteering gluttons; they're the ones with the market share and the lead, so they seem to be charging customers a pretty penny for their wares. But I bet if AMD/ATI were in the lead, they'd do the same to us and try to screw us over by charging the maximum possible to maximise their profits.


 
And I might add... this MIGHT change. If ATI manages to get DX11 chips out while there is no competition from NVIDIA for a few months, I'm going to guess they'll charge us left and right and try to make as much money and gain as much market share as possible while NVIDIA is still on its back trying to get up.
 
The performance crown has shifted back and forth. It was originally held for 10 weeks by nVidia, then shifted back to ATI for almost 5 months, then back again with the 295.
The perf difference wasn't seen as great enough for nVidia to hold its price point, and they were the ones who had to lower theirs, not the other way around.
And having the single-core halo product hurt them at first: the high pricing, the mentioned rebates to early buyers, and then the gap closed even more with the 4890 vs the 285. So in reality, having a much more expensive product that isn't seen as a clear win in perf has hurt them.
 
Yeah, you're probably right about that; I can't really contest it. I've been on a leave of absence from the mainstream PC and gaming scene for about 2-3 years now, and I'm only getting back into it quite recently.

 
Welcome back.
nVidia has a few image problems, at least in the enthusiast area. They've renamed products, and the high cost, or pricing, left a bad taste in quite a few people's mouths.
Also, as mentioned, the transistor density achieved by ATI is currently unparalleled, plus the other design points etc.
They shifted focus (and it was right of them to do so) from a strictly gaming-GPU approach and branched out into PhysX and GPGPU usage, and they currently lead in that ability; but it's seen as proprietary and hasn't had full uptake by potential markets, though it has done well in the higher-profit areas like science and engineering.
The reason it's good for them is that chipsets are dying, as both AMD and Intel have the MCM on die, and even the IGP is soon to be extinct in many levels of CPU models, as it too goes on/in die.
 
yeah thanks.

Yes, CUDA is something I've wanted to get into for a while now, but I haven't had the time so far. Looks pretty slick. Back when I was in uni I did my project on DT and CT CNNs and possible FPGA/ASIC implementations. I hadn't even heard of CUDA at the time (a result of being out of the PC loop), or I might have done something CUDA-related.

But now it seems NVIDIA is shifting to cGPUs with the next gen, and there doesn't seem to be much reliable info on the place CUDA will take with cGPUs.
Either way, it seems open standards like OpenCL will win out in the end, or even standards that aren't necessarily open but are available for other companies to support, like compute shaders in DX11.

NVIDIA does shoot itself in the foot in some ways, I guess. Like the licensing costs for SLI: from what I've heard, that's the main reason why low-cost X58 boards don't have SLI support without cross-flashing the BIOS (P6T SE) or BIOS upgrades (MSI X58M) etc., to cut down on SLI licensing fees and lower costs further.
Kinda lame.

But for the foreseeable future I see myself sticking with NVIDIA, despite all the weirdness of some of the stuff they pull.

edit: but like I said before, I'm not a fanboi of either company 😛 ... sticking with whichever company for specific reasons rather than just an intangible preference.
 
I think CUDA is done in a proper proprietary way and has a lot of life to it yet, especially in the science fields. They're way ahead of most everything, and like anything else, SW written for particular HW will/should outperform a more generalised approach, as in OpenCL, or even DX11; though the DX aims are mostly gaming, it too will have an impact in cGPU and/or GPGPU.
They're really working hard on load balancing, and as each gen moves towards this, both from the CPU side and the GPU side, it'll mesh; having CUDA, OpenCL and/or DX11 able to "read" your HW and SW can pre-arrange this balance.
 
Yeah, I agree that proprietary optimisations tend to be better. But during standardisation there will probably be a drive to homogenise both architectures while adding extensions to the specifications to support differences in architecture (at least as non-standard extensions), and quite a lot of hardware-specific optimisation will likely go into the compiler design. At any rate, if you're sure CUDA is here to stay... for a while at least, I think I'll give it a whirl then :) No harm.

Correct me if I am wrong, but I am sure resistance is equal throughout a material; the more material there is, the more resistance has to be overcome.

Jaydee, are you maybe thinking of the corresponding interface connections rather than the mem tech itself?

Not to get back to that argument, but I just wanted to add this here as I didn't see it before -- sorry, was a bit busy and too lazy to read everything :).

It's not 'overcoming' the resistivity of the material that's the issue here. In transmission lines the issue is terminating impedances. (Impedance is resistance + reactance, which is what we deal with in signal systems: there is an imaginary component, and the opposition to current flow is not purely resistive. Parasitic capacitance and inductance FTL!)
I don't know a simple way to explain this, but here goes... When a signal propagates from a source to a load, the "looking-in" impedances have to be calculated and balanced. If the terminating impedances on either side (source and load) are not matched, a signal will go from source to load and, instead of getting 'absorbed', it will reflect back, return to the source, then reflect again, go to the load and reflect again. This reflection keeps happening, resulting in noise and oscillations and basically a mangled signal. Typically these calculations are made for any connection of significant length. For example, the engineers who designed the cable between your monitor and graphics card, or between a TV and a DVD player, had to design it according to these guidelines -- i.e. it is modelled as a 'transmission line'. The same goes for the power engineers who run miles of cable to carry power to your home: each cable is one massive transmission line.
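To put a number on "not matched" (a minimal sketch, assuming the standard reflection-coefficient formula; the 50-ohm line and the example loads are made-up illustrative values):

```python
# Fraction of an incident signal reflected at a transmission-line termination:
#   gamma = (Z_load - Z0) / (Z_load + Z0)
# gamma = 0 means the load is matched and the signal is fully absorbed;
# anything else bounces part of the signal back toward the source.

def reflection_coefficient(z_load: complex, z0: complex) -> complex:
    return (z_load - z0) / (z_load + z0)

z0 = 50.0                          # a typical 50-ohm line impedance
for z_load in (50.0, 75.0, 1e12):  # matched, mismatched, ~open circuit
    gamma = reflection_coefficient(z_load, z0)
    print(f"Z_load = {z_load:>12.0f} ohm -> {abs(gamma) * 100:5.1f}% reflected")
```

A matched 50-ohm load reflects nothing, a 75-ohm load bounces back 20% of the signal, and an open end reflects essentially all of it -- exactly the back-and-forth ringing described above.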

Now this normally doesn't apply to digital systems at low frequencies -- i.e. below 20 MHz it doesn't matter much at all, and you can make the assumption that a signal is instantly available at the other side, as perfect as when it left. From 20 MHz to several GHz it rapidly increases in importance: in high-speed digital electronics your perfect digital signals start to look less perfect and 'digital'. They start to look fuzzy, like the analogue signals they really are. So the convenient 'digital electronics' model starts to break down and gives way, requiring analysis in terms of analogue electronics, circuit theory, or (heaven forbid!) electromagnetics. While electromagnetics does play a role in very high speed computer systems (and Maxwell's electromagnetics is the foundation on which all of electronics rests: digital, analogue, radio, circuit theory and all that), in many cases its effects can be ignored and avoided, and circuit-theory abstractions can be used instead.

I forget the exact rule of thumb, as I haven't done this high-speed digital design crap in quite a while, so correct me if I'm wrong:

If,
electrical length > rise time of the signal x 1/6

where electrical length = physical length / velocity of signal propagation (i.e. the one-way propagation delay),

then you can no longer model that 'interconnect' as a simple interconnect and it becomes a 'transmission line'. I.e. analogue electronics and circuit-theory abstractions begin to take precedence over 'digital electronics' abstractions, to put it in a nutshell. E.g. as I mentioned before, the JEDEC specification for DRAM asks for interconnects to be less than 0.5" or so (IIRC) so they can still be considered interconnects and not transmission lines. If you go above the specification, you'll have to worry about terminating impedances (it's not correct to call them resistances, as there is a significant reactive component with the AC signals flowing through those components, and it's just as important as the resistive component).
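Plugging in illustrative numbers (a sketch, assuming the 1/6-of-rise-time rule above, a rough FR-4 propagation velocity of ~15 cm/ns, and a made-up 100 ps rise time for a GDDR5-class signal):

```python
# Rule of thumb: treat a trace as a transmission line once its one-way
# propagation delay (the 'electrical length') exceeds ~1/6 of the rise time.

V_FR4 = 15.0  # cm/ns, rough signal velocity on FR-4 (about half of c)

def needs_transmission_line_model(length_cm: float, rise_time_ns: float) -> bool:
    delay_ns = length_cm / V_FR4  # electrical length = length / velocity
    return delay_ns > rise_time_ns / 6.0

# A 0.5 cm trace with a 0.1 ns (100 ps) edge, roughly GDDR5 territory:
print(needs_transmission_line_model(0.5, 0.1))   # True:  ~33 ps delay > ~17 ps
# The same trace with a leisurely 10 ns edge (low-MHz logic):
print(needs_transmission_line_model(0.5, 10.0))  # False: lumped model is fine
```

Which lines up with the earlier point: at today's edge rates, even a half-centimetre trace already has to be treated as a transmission line, while the very same trace was a "simple interconnect" back in the slow-logic days.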
 
True, I didn't mention impedance, but I too didn't want to reword everything unless I had to, as I knew you'd get my gist. It's nice to see GDDR5 strut its stuff. It has other abilities too, if you haven't read up.
I tend not to go too heavy into things, as I'm generally a layman, but with good ears and decent perception. Most here aren't, and it doesn't sink in heheh, so I take shortcuts; for example, resistance is easier to understand than impedance.
 
Yes, most assuredly :) Resistance is quite simple to understand but almost useless in systems that deal with signals, as the currents are AC, not DC, and there are parasitic capacitances/inductances. But I wanted to clear up what 'strangesstranger' said about "overcoming resistance" or some such thing; it's misleading. I didn't see it before.

At any rate, I think my explanation was somewhat clear at least, despite all the typos 😛 ... Luckily, understanding the concepts (transmission lines, impedances etc.) is a lot easier than actually doing the analysis for even a simple problem like designing a cable to connect a TV to a DVD player :)

I've been out of this crap for far too long. But discussing stuff back and forth with others on a forum like this makes you start thinking about it again. I need to get back on that horse LOL
 
It will be interesting to see just how far ATI's multi-GPU scaling can go. Design a relatively small GPU and just keep linking them, even in one GPU package; that would provide a very cost-effective way to reach some crazy performance. Time will tell.
 
I think they said that's what their strategy would eventually become: just add more and more GPUs on the same card. Let's just hope the TDPs stay within boundaries.