mactronix :
As I said earlier in the thread, since the card is basically (as far as the main architecture goes) a 40nm 4890, it should be faster, certainly faster than it is, if not faster than the 4890. Or does die shrinking not yield improvements anymore? People keep saying the 128-bit bus isn't a factor, as in it's not restricting the card, and since this is the only major difference going by the chart I posted, I feel the comparison is fair.
Except that that chart is only part of the difference. A die shrink by itself yields nothing, and a process shrink only yields speed improvements if the designers actually spend it on clock speed. The options from a process shrink (as long as it's not a bum process like the 80nm of the R600) are usually higher speeds, or more efficient operation and higher yields, or a balance of the two. In this case it seems rather obvious that ATi/AMD was looking for efficiency and yields rather than simply increasing the clocks.

As for the chart, it's all well and good to produce the # x X @ Y freq and say they're the same, but there are other improvements going on, which you should already be aware of simply from the HD2K->HD4K improvements or the ATi vs nV differences, where specs alone are not as handy as actual testing of what's under the hood. Look at the ground-level testing: it's holding its own for a new design that uses about half the die space. The tests at The Tech Report should show you immediately that your chart means very little, since the HD5770 and HD4890 don't mirror each other, and the HD5770 destroys the HD4890 in the shader-intensive Perlin Noise test;
http://techreport.com/articles.x/17747/5
Looking through that review, the card fits in well above where I'd expect something that's essentially an x600 part, but to most people it's coming in below their elevated expectations.
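To put numbers on why the spec chart is a wash on paper, here's the quick arithmetic (a rough sketch using the commonly cited launch specs, and ignoring everything under the hood):

```python
# Peak single-precision throughput from the spec-sheet numbers alone.
# Both parts list 800 SPs at 850 MHz, each SP doing a multiply-add
# (2 flops) per clock, so the chart math can't tell them apart.
def theoretical_gflops(stream_processors, core_mhz, flops_per_clock=2):
    return stream_processors * core_mhz * flops_per_clock / 1000.0

print(theoretical_gflops(800, 850))  # HD5770: 1360 GFLOPS
print(theoretical_gflops(800, 850))  # HD4890: 1360 GFLOPS, identical on paper
```

Identical paper numbers, yet the Perlin Noise results split them apart, which is the whole point about what specs alone can't tell you.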
It's still early to expect maximum utility out of a new part, so look at more than just game performance when criticizing a design (and the design, more than the price, is what you guys are criticizing [oh, they shouldn't have used 128-bit, etc.]). It might even be held back by the bus width, but I still don't see what the alternative is: we've already discussed that an odd bit-width wasn't practical, and a 256-bit memory path would simply increase die size, lower yields and increase board cost/complexity. Remember it's still got more bandwidth than the 256-bit HD4850, which is still nice, and I think more people would be impressed with that feat if the HD4770 hadn't already brought forth that little surprise. For what the part is supposed to be, it does the job well, despite there being better-priced competitors out there. It's like every generation at launch: everyone compares it to 2x the fire-saled old cards. The G80 sucks!! I can get 2 or even 3 X1950Pros for less, Xfire them together and outperform it.... Price is never the friend of new hardware.
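Since the HD4850 comparison keeps coming up, the bandwidth math itself is simple (the clocks below are the commonly cited launch specs, so treat them as assumptions):

```python
# Peak memory bandwidth = (bus width in bytes) x (effective transfer rate).
def bandwidth_gbs(bus_width_bits, effective_mtps):
    return bus_width_bits / 8 * effective_mtps / 1000.0

# HD5770: 128-bit GDDR5 at 1200 MHz (4800 MT/s effective, quad-pumped)
print(bandwidth_gbs(128, 4800))  # ~76.8 GB/s
# HD4850: 256-bit GDDR3 at 993 MHz (1986 MT/s effective, double-pumped)
print(bandwidth_gbs(256, 1986))  # ~63.6 GB/s
```

Half the bus width and it still comes out ahead, which is the GDDR5 trick the HD4770 already showed off.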
And DX11 isn't relevant to why the card performs as it does.
Just as relevant as 128-bit memory, because it's a design choice and a price choice. Since none of you guys has provided a single shred of evidence that it's the bus's fault, and not immature drivers or an unoptimized scheduler, etc., it is relevant both to the pricing and to the design considerations, and to the discussion that seems to be the focus, where a 1-2 fps loss to the HD4870 seems to people to be a defeat worth noting, but the additional features don't. People equally criticized the HD5870 for not having a 512-bit bus (or some odd number, as if they were randomly generated) because theoretically that could be a weak point, but weak compared to what? Why didn't they put in 2000 SPUs? Why not double the ROPs, etc.? And IMO, whatever came out of the foundry, someone was going to say, oh well, it's too bad they didn't double that feature, because THEN it would be Awesome.
Now I understand that it's a new-series card and you would know more than most here, so let me ask you this: is this architecture, and the DX11 feature parts of it, really causing issues here as far as the drivers go? Because most of the reviews I have seen say that the main of it is still the same as the 4 series. Now if that's true and the guts of it are the same, then I fail to see how they don't have a decent set of drivers to run it. Or is it the case that there are a lot of small differences that the sites are just not going into?
As you can see in the raw tests, it's a combination of both. I don't doubt that the HD5770 could use more memory bandwidth, and the HD5870 would be interesting with a 512-bit memory interface too, but it's a question of what you can design for. And while the HD5K series shares a lot of similarities in its base parts, the sum of those parts is different, from the setup engine to the way the RBEs were redesigned. The changes to the LDS, GDS and RBE cache are things that AMD's software guys will take a while to get used to having and exploiting, and the changes to the assembler and dispatcher are supposed to make things more efficient, but they still need to be optimized because they rely on software to handle thread execution and load balancing. I have a feeling the dispatcher's micro-code starts with what works best for the HD5870, and even that would be their expectation of what works well for the HD5870, with only limited experience with the new arrangement of the components, especially the addition of the Hull and Domain shaders, which change the load-balancing equation from the start. I think it will take a few months to get efficiency from it, and yet we're comparing the optimized HD48xx to a new HD5770 as if they'd be on par for efficiency right out of the box.
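To make the load-balancing point concrete, here's a toy model of the problem (my numbers are invented purely for illustration and have nothing to do with AMD's actual micro-code): a unified shader array time-shared across pipeline stages chokes on whichever stage its time slices shortchange, and slices tuned for the old VS/GS/PS mix shortchange the new Hull and Domain stages:

```python
# Toy model, not AMD's scheduler: a unified shader array time-shared across
# pipeline stages. Relative throughput is set by the stage whose time slice
# falls shortest of its demand. All numbers below are made up.

def throughput(shares, demand):
    return min(shares.get(stage, 0.0) / need for stage, need in demand.items())

# A DX11 tessellation workload adds Hull and Domain shader demand (made-up mix).
dx11_demand = {"VS": 0.15, "HS": 0.10, "DS": 0.15, "GS": 0.10, "PS": 0.50}

# Slices tuned for an old VS/GS/PS-heavy mix, with token slack for HS/DS.
tuned_for_dx10 = {"VS": 0.18, "GS": 0.09, "PS": 0.63, "HS": 0.05, "DS": 0.05}

# Slices matched to the actual demand: the balanced ideal.
tuned_for_dx11 = dict(dx11_demand)

print(throughput(tuned_for_dx10, dx11_demand))  # ~0.33: starved HS/DS stall the pipe
print(throughput(tuned_for_dx11, dx11_demand))  # 1.0: balanced allocation
```

That's the sense in which a dispatcher starting from what works for the HD5870, or for DX10-era workloads, leaves performance on the table until the software guys retune it.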
Even if it were simply HD4K x 1.x, it would require re-working to fix, just like how the HD4K series is 'essentially X times more of an HD3K': it's not as simple as that, and we didn't see performance improvements until a few months in, with the biggest improvement coming about 9 months later.
I still want to see a comparison of the HD5770 vs the HD48xx in a wider variety of tests, like Rightmark, so that it's easier to pick out the strengths and weaknesses. Looking at early performance in games is not a good way to judge the value of a design, because too many other influences are involved.
It'll be interesting to see how it matures. The thing that does bother me, though it's likely of no concern to most people (especially value gamers), is the loss of the DP calculations, which kinda ruins my hopes for a nice balanced mobile part if Juniper is to be the primary mobile platform. It shouldn't matter much to most people out there, but I wanted it for application testing and design, so to me that's a bigger loss than the 128-bit memory bus. The bus would of course be nicer to see larger, but that wasn't practical to implement; I just don't know what the transistor budget would've done to die space, and whether the DP hardware had to go or else other things had to go (or else make a geometrically bigger part).
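For anyone who wants to verify the DP situation themselves once OpenCL drivers are in place, a quick device query does it (assuming you have pyopencl installed; the extension names are the standard Khronos and AMD fp64 ones):

```python
# Check for double-precision support via OpenCL (requires pyopencl and an
# OpenCL-capable driver). A DP-less part like Juniper should report neither
# fp64 extension; a part that kept DP should list one of them.
import pyopencl as cl

for platform in cl.get_platforms():
    for dev in platform.get_devices():
        has_fp64 = ("cl_khr_fp64" in dev.extensions
                    or "cl_amd_fp64" in dev.extensions)
        print(dev.name, "- double precision:", "yes" if has_fp64 else "no")
```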
I think we wouldn't be having this discussion if the HD5770 had launched 50 MHz faster and $50 cheaper, but to me that's more about hitting an fps/$ mark on some graph, and not an important factor in design choices, which is where the 128-bit bus fits in.