Our benchmark results have long shown that ATI's graphics architectures are more dependent on a strong processor than Nvidia's. As a result, we usually arm our test beds with high-end Intel CPUs when it comes time to benchmark high-end GPUs, sidestepping platform issues that might adversely affect results designed to isolate graphics performance.
edit: now that i've read the above part, it sounds kinda milf-ish. so i will follow my previously mentioned tips to champion amd cpus and try....

if amd, right now, even if only to humor current customers, releases just one zambezi sku with one of the shared integer units lasered off, clocked a couple of hundred mhz higher within the same 125w tdp or (even better) a 95w tdp, it will skyrocket amd's cpu sales. call it an fx 4000/8000 EXXtreme or FXtreme or something like that. the additional thermal headroom will help it hit turbo more often, improve gaming performance, silence the amd fanboys who troll every forum thread to educate users on how to disable one of the shared cores, stop propping up phenom ii sales, get an instant recommendation in Tomshardware(tm)'s 'best gaming cpu for the money' article, earn the eteknix 'unobtainium hyper-galactic king of gaming cpu' award, make kitguru implode, and so on. the general public doesn't care about 8 cores. tell 'em it's a $120 4-core amd gaming performance cpu and stick a 'never settle' game or two with it. core i3 and core i5 (except the 2500k, 3570k and higher) would be dead in the proverbial water.
seems like amd's claimed performance 'improvements' with richland will come partly from higher spec memory support (ddr3 2133) compared to trinity's ddr3 1866. imo the igpu badly needs the extra memory bandwidth.
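For rough context, here's a back-of-the-envelope peak-bandwidth comparison (a minimal sketch assuming standard dual-channel, 64-bit-per-channel DDR3; real sustained bandwidth is lower):

```python
# Rough peak-bandwidth estimate for dual-channel DDR3 (illustrative only;
# assumes two 64-bit channels and ignores real-world efficiency losses).
def peak_bw_gb_s(mega_transfers_per_s, channels=2, bus_bits=64):
    bytes_per_transfer = bus_bits / 8
    return mega_transfers_per_s * 1e6 * bytes_per_transfer * channels / 1e9

trinity_bw  = peak_bw_gb_s(1866)   # ~29.9 GB/s with DDR3-1866
richland_bw = peak_bw_gb_s(2133)   # ~34.1 GB/s with DDR3-2133
print(f"DDR3-1866: {trinity_bw:.1f} GB/s, DDR3-2133: {richland_bw:.1f} GB/s, "
      f"~{100 * (richland_bw / trinity_bw - 1):.0f}% more for the igpu to work with")
```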
orly.
then why doesn't amd itself test its high end gfx cards on fx-8350 (oc'd if needed) bench rigs?
http://www.amd.com/us/products/des [...] GHz.aspx#4
'cuz intel does better, and then some.
i think llano athlons were practically 32nm athlons (like rana and the others). i don't understand design/architecture that well; to me they looked quite similar. i'm pretty sure there will be kaveri (28nm) athlons, but i doubt they will be widely marketed. has anyone seen any trinity athlon ii x4 750/751k in the wild?
There are new Athlon II X4 chips on the FM2 socket with no iGPU; they should be in Tom's review soon. Interesting sub-$90 variants.
Wrote the following up (slightly redacted) a little while ago for another company looking at consumer-level ray tracing hardware as it relates to games. I do think workstation applications are the correct entry point for ray tracing acceleration, rather than games, so the same level of pessimism might not be appropriate. I have no details on Imagination’s particular technology (feel free to send me some, guys!).
------------
The primary advantages of ray tracing over rasterization are:
Accurate shadows, without explicit sizing of shadow buffer resolutions or massive stencil volume overdraw. With reasonable area light source bundles for softening, this is the most useful and attainable near-term goal.
Accurate reflections without environment maps or subview rendering. This benefit is tempered by the fact that it is only practical at real time speeds for mirror-like surfaces. Slightly glossy surfaces require a bare minimum of 16 secondary rays to look decent, and even mirror surfaces alias badly in larger scenes with bump mapping. Rasterization approximations are inaccurate, but mip map based filtering greatly reduces aliasing, which is usually more important. I was very disappointed when this sunk in for me during my research – I had thought that there might be a place for a high end “ray traced reflections” option in upcoming games, but it requires a huge number of rays for it to actually be a positive feature.
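To make the ray-count point concrete, here is a minimal, self-contained sketch (not from the original post; the scene, helper names, and sample counts are made up) of the "area light bundle" idea for soft shadows: visibility is averaged over N jittered shadow rays, so noise only falls off slowly with N, and glossy reflections need a similar bundle of perturbed reflection rays per pixel.

```python
import random

# Illustrative sketch: soft shadows by averaging binary visibility over N
# sample points on a square area light, with a single sphere as the occluder.
# More samples -> smoother penumbra, but linearly more shadow rays per point.

def ray_hits_sphere(origin, direction, center, radius):
    # Standard ray-sphere intersection; only hits between the surface point
    # (t ~ 0) and the light sample (t = 1) count as occlusion.
    ox, oy, oz = (origin[i] - center[i] for i in range(3))
    dx, dy, dz = direction
    a = dx * dx + dy * dy + dz * dz
    b = 2 * (ox * dx + oy * dy + oz * dz)
    c = ox * ox + oy * oy + oz * oz - radius * radius
    disc = b * b - 4 * a * c
    if disc < 0:
        return False
    t = (-b - disc ** 0.5) / (2 * a)
    return 1e-4 < t < 1.0

def soft_shadow(point, light_center, light_size, occluder, n_samples=16):
    visible = 0
    for _ in range(n_samples):
        # Jitter the sample position across the area light.
        lx = light_center[0] + (random.random() - 0.5) * light_size
        lz = light_center[2] + (random.random() - 0.5) * light_size
        direction = (lx - point[0], light_center[1] - point[1], lz - point[2])
        if not ray_hits_sphere(point, direction, *occluder):
            visible += 1
    return visible / n_samples   # 0.0 = fully shadowed, 1.0 = fully lit

# A point partially shadowed by a sphere sitting between it and the light.
print(soft_shadow(point=(0.0, 0.0, 0.0),
                  light_center=(0.0, 10.0, 0.0), light_size=4.0,
                  occluder=((0.5, 5.0, 0.0), 1.0),
                  n_samples=64))
```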
Some other “advantages” that are often touted for ray tracing are not really benefits:
Accurate refraction. This won’t make a difference to anyone building an application.
Global illumination. This requires BILLIONS of rays per second to approach usability. Trying to do it with a handful of tests per pixel just results in a noisy mess.
Because ray tracing cost scales with the log2 of the number of primitives, while rasterization scales linearly, it appears that highly complex scenes should render faster with ray tracing. In practice the constant factors are so different that no dataset that fits in memory actually crosses the time-order threshold.
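A rough way to see the constant-factor argument is a toy cost model (the constants below are invented purely for illustration, not measured): per-ray cost grows with log2 of the primitive count while rasterization grows linearly, yet the per-ray constant is so much larger that the crossover sits past anything that fits in memory.

```python
import math

# Toy cost model with made-up constants (illustrative only): per-frame cost of
# tracing one primary ray per pixel through a BVH vs. rasterizing every
# triangle. Shading cost is ignored since both paths pay it.

PIXELS = 1920 * 1080
COST_PER_BVH_STEP = 100.0   # assumed units per BVH node visited per ray
COST_PER_TRIANGLE = 2.0     # assumed units to set up / rasterize one triangle

def ray_trace_cost(n_triangles):
    # Each primary ray walks roughly log2(n) BVH nodes.
    return PIXELS * COST_PER_BVH_STEP * math.log2(n_triangles)

def raster_cost(n_triangles):
    # Every triangle is touched once; hierarchical-Z rejects hidden pixels.
    return n_triangles * COST_PER_TRIANGLE

for n in (10**5, 10**6, 10**7, 10**8, 10**9):
    print(f"{n:>12} tris: ray trace {ray_trace_cost(n):.2e}, raster {raster_cost(n):.2e}")
# With these constants the curves only cross at a few billion triangles,
# far more geometry than fits in memory alongside its acceleration structure.
```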
Classic Whitted ray tracing is significantly inferior to modern rasterization engines for the vast majority of scenes that people care about. Only when two orders of magnitude more rays are cast to provide soft shadows, glossy reflections, and global illumination does the quality commonly associated with “ray tracing” become apparent. For example, all surfaces shaded with interpolated normals will have an unnatural shadow discontinuity at the silhouette edges with single shadow-ray traces. This is most noticeable on animating characters, but it is also visible on things like pipes. A typical solution, if the shadows can’t be filtered better, is to flag the characters as “no self shadow” in the datasets. There are lots of things like this that require little tweaks in places that won’t be very accessible with the proposed architecture.
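As a sketch of why single shadow rays fight with interpolated normals (a hypothetical shading function, not from the post): the smoothed shading normal still faces the light near the silhouette, so the diffuse term fades gradually, but the binary shadow-ray test against the real flat geometry flips to "blocked" all at once, so the lighting cuts straight to black along the silhouette instead of rolling off.

```python
# Illustrative sketch of the silhouette artifact: the diffuse term uses the
# interpolated (smoothed) normal, while the single shadow ray tests the actual
# triangles, so visibility is a hard 0/1 with no penumbra to hide the mismatch.

def shade_point(shading_n_dot_l, shadow_ray_blocked):
    # shading_n_dot_l:    N.L using the interpolated normal; still positive
    #                     right up to (and past) the geometric silhouette.
    # shadow_ray_blocked: result of one binary shadow-ray test; near the
    #                     silhouette the flat geometry self-occludes the ray.
    diffuse = max(shading_n_dot_l, 0.0)
    visibility = 0.0 if shadow_ray_blocked else 1.0
    return diffuse * visibility

# Walking a character's surface toward its silhouette: N.L is still ~0.3 when
# the shadow ray starts hitting the model itself, so shading jumps from 0.3 to 0.0.
for n_dot_l, blocked in [(0.9, False), (0.6, False), (0.3, True), (0.1, True)]:
    print(shade_point(n_dot_l, blocked))
```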
The huge disadvantage is the requirement to maintain acceleration structures, which are costly to create and more than double the memory footprint. The tradeoffs made for faster build times can significantly increase the delivered ray tracing time versus fully optimized acceleration structures. For any game that is not grossly GPU bound, a ray tracing chip will be a decelerator, due to the additional cost of maintaining dynamic acceleration structures.
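For the maintenance cost, here is a minimal sketch (an assumed toy layout, not Imagination's or any particular engine's scheme) of what "maintaining dynamic acceleration structures" means per frame: every animated mesh's bounding-volume hierarchy has to be refit or rebuilt before any ray can be traced, and the nodes themselves add memory on top of the triangle data.

```python
# Toy binary BVH refit (illustrative only): leaf boxes follow animated
# triangles and interior boxes are recomputed bottom-up every frame. Each node
# stores an AABB plus child links, which is where the extra memory footprint
# over the raw triangle data comes from.

class BVHNode:
    def __init__(self, left=None, right=None, triangle=None):
        self.left, self.right = left, right
        self.triangle = triangle          # leaf payload: three (x, y, z) vertices
        self.lo = self.hi = None          # AABB corners

def refit(node):
    # Recompute bounding boxes bottom-up after the triangles have moved.
    if node.triangle is not None:         # leaf: bound the triangle
        xs, ys, zs = zip(*node.triangle)
        node.lo = (min(xs), min(ys), min(zs))
        node.hi = (max(xs), max(ys), max(zs))
    else:                                 # interior: union of the children
        refit(node.left)
        refit(node.right)
        node.lo = tuple(min(a, b) for a, b in zip(node.left.lo, node.right.lo))
        node.hi = tuple(max(a, b) for a, b in zip(node.left.hi, node.right.hi))
    return node

# Two animated triangles under one root: this refit runs every frame they move,
# and a refit tree already traces slower than one rebuilt from scratch.
tri_a = [(0, 0, 0), (1, 0, 0), (0, 1, 0)]
tri_b = [(2, 2, 2), (3, 2, 2), (2, 3, 2)]
root = refit(BVHNode(BVHNode(triangle=tri_a), BVHNode(triangle=tri_b)))
print(root.lo, root.hi)   # (0, 0, 0) (3, 3, 2)
```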
Rasterization is a tiny part of the work that a GPU does. The texture sampling, shader program invocation, blending, etc., would all have to be duplicated on a ray tracing part as well. Primary ray tracing can give an overdraw factor of 1.0, but hierarchical depth buffers in rasterization-based systems already deliver very good overdraw rejection in modern game engines. Contrary to some popular beliefs, most of the rendering work is not done to be “realistic”, but to be artistic or stylish.
I am 90% sure that the eventual path to integration of ray tracing hardware into consumer devices will be as minor tweaks to the existing GPU microarchitectures.