AMD CPU speculation... and expert conjecture


jdwii

Splendid



This is true no matter what, since Intel can produce much more.
 
Intel outsells AMD as part of the perpetuating problem where people are told that AMD cannot do a good job for them. It's like telling people in the neighborhood that X has the bubonic plague: nobody will want to go near them even if it is wholly inaccurate.

The PC market is like rats on a sinking ship; they follow each other blindly without any cause or reasoning. That said, I do use Intel in one specific purpose-built system because I need the best performance in that regard; for anything else, sadly, Intel doesn't cut the mustard very well.

I built an ITX rig: FM2A75M (ASRock, with the A85 variant out soon), A8-5600K, G.Skill Ares 1866 and a Sugo 5 with its built-in 450W Bronze PSU, for over $225; nothing Intel has comes close.
 


The Wii sold because it was the Wii.

Essentially, I call it the "Furby" effect: everyone wants to get one because everyone wants to get one. Once they get one, though, it quickly loses its attractiveness.

The lack of software, combined with the fact that the Wii WAS so successful, is going to drag on Wii U sales (which you can see, based on GameStop having them stacked right at the front of their stores under "We have Wii U!" signs).
 


Can't really tell without an analysis; it could be a pipelining issue, an issue sending data across threads, or just REALLY bad timers used to sync to the game engine. The fact that the GPU is getting less load, though, indicates either the drivers or the game code is at fault.
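To see why falling GPU load points at the CPU side, here is a toy model (not a profiler, all numbers invented): the CPU prepares each frame, the GPU renders it, and whichever side is slower sets the frame time.

# Toy model of one frame: the slower of CPU work and GPU work sets frame time.
# All millisecond figures below are made up for illustration.
def gpu_load(cpu_ms, gpu_ms):
    frame_ms = max(cpu_ms, gpu_ms)   # the bottleneck determines frame time
    return gpu_ms / frame_ms         # fraction of the frame the GPU stays busy

print(f"GPU-bound frame: {gpu_load(cpu_ms=8.0, gpu_ms=16.0):.0%} GPU load")   # 100%
print(f"CPU-bound frame: {gpu_load(cpu_ms=25.0, gpu_ms=16.0):.0%} GPU load")  # 64%

When the CPU side (game code or driver overhead) grows, GPU utilization drops even though the GPU itself hasn't changed, which is the pattern being described.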
 

recent rumors suggest that gt3 won't make it to desktop i7. intel seems to be making a separate die for gt3 ultrabook-class cpus. additionally, they want to use the extra shaders for better power efficiency and connected-standby performance, sorta like big.LITTLE with the new arm socs.

chances are nvidia gpus will require more and more cpu overhead if technologies like fxaa become more widely adopted. fxaa was essentially developed by nvidia.

amd themselves use core i7 3960 testbeds to benchmark their own radeon gpus. it says right on the graphics page lol.


amd tried to tap into the 'blind crowd' last time, with bd. it was a devious (almost as devious as intel) yet financially disastrous failure for amd (hence the 'almost' part). :sol:

orly. :whistle:
then why doesn't amd itself test its high-end gfx cards with fx-8350 (oc'd if needed) based bench rigs?
http://www.amd.com/us/products/desktop/graphics/7000/7970ghz/Pages/radeon-7970GHz.aspx#4
'cuz intel does better, and then some.
edit: now that i've read the above part, it sounds kinda milf-ish. so i will follow my previously mentioned tips to champion amd cpus and try....
if amd, right now, even just to humor current customers, releases (only) one zambezi sku with one of the shared integer units lasered off, plus clocked a couple hundred mhz higher at the same 125w tdp or (even better) a 95w tdp, it will skyrocket amd's cpu sales. call it an fx 4000/8000 EXXtreme or FXtreme or something like that. the additional thermal headroom will help it hit turbo better, improve gaming performance, silence the amd fanboy(s) who troll every forum thread to educate users on how to disable one of the shared cores, stop boosting phenom ii sales, get an instant recommendation in Tomshardware(tm)'s 'best gaming cpu for the money' article, achieve the eteknix 'unobtainium hyper-galactic king of gaming cpu' award, make kitguru implode and so on. the general public doesn't care about 8 cores. tell 'em it's a $120 4-core amd gaming performance cpu and stick a 'never settle' game or two with it. core i3 and core i5 (except the 2500k, 3570k and higher) would be dead in the proverbial water.

AMD puts out a cheap embedded APU development board
http://semiaccurate.com/2013/01/25/amd-puts-out-a-cheap-embedded-apu-development-board/

Kabini chipset is Yangtze
http://www.fudzilla.com/home/item/30241-kabini-chipset-is-yangtze
 

guskline

Distinguished
Who but a blind man would not know that the 3770k would beat the 8350? In AMD's release of the 7970 they used an Intel cpu to run benchmarks! DUH!

It's a shame that Tom's didn't include a 3570K in the mix; at least on price there would be parity with the 8350. BTW I own two 2500K rigs OC'd, an OC'd 8150 rig and an OC'd 8350 rig (no IB yet!). The 8350 has narrowed the gap with the 2500K using the same video card, a GTX 670.

As soon as I saw the title of the article I said, "how badly does the 8350 get beaten this time?"

I do hope this article is followed by a GTX 680 SLI article using the 3770K, 3570K and 8350.
 

truegenius

Distinguished
BANNED
edit: now that i've read the above part, it sounds kinda milf-ish. so i will follow my previously mentioned tips to champion amd cpus and try....
if amd, right now, even just to humor current customers, releases (only) one zambezi sku with one of the shared integer units lasered off, plus clocked a couple hundred mhz higher at the same 125w tdp or (even better) a 95w tdp, it will skyrocket amd's cpu sales. call it an fx 4000/8000 EXXtreme or FXtreme or something like that. the additional thermal headroom will help it hit turbo better, improve gaming performance, silence the amd fanboy(s) who troll every forum thread to educate users on how to disable one of the shared cores, stop boosting phenom ii sales, get an instant recommendation in Tomshardware(tm)'s 'best gaming cpu for the money' article, achieve the eteknix 'unobtainium hyper-galactic king of gaming cpu' award, make kitguru implode and so on. the general public doesn't care about 8 cores. tell 'em it's a $120 4-core amd gaming performance cpu and stick a 'never settle' game or two with it. core i3 and core i5 (except the 2500k, 3570k and higher) would be dead in the proverbial water.

you mean they should sell the 8350's 4m/8c die, which goes for $199, as a 4350fxe 4m/4c at $120? they can't afford it

:( i still want a phenom/athlon 3 x4 on 32nm or 28nm, clocked in the 3.5-4.5ghz range with turbo and no l3 cache to free up tdp headroom and die size, so as to place them in the 95-125w tdp range with a price tag of around $100

is there anyone who can calculate the power consumption and size of such a die :/ ?
(my phenom ii 1090t can hit 4ghz on 4 cores at 1.45v on a cheap 4+1 phase board, so a phenom3 in the 3.5-4.5ghz range with a good tdp seems possible)
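Nobody here can compute the real power of a hypothetical die, but as a rough sanity check you can scale dynamic power as P ~ f * V^2 from one assumed operating point. The 95 W / 3.5 GHz / 1.30 V baseline below is a guess, not an AMD spec, and TDP is not the same as measured draw:

# Very rough dynamic-power scaling, P ~ f * V^2, from an assumed baseline.
def scale_power(p0_w, f0_ghz, v0, f_ghz, v):
    return p0_w * (f_ghz / f0_ghz) * (v / v0) ** 2

# assume a hypothetical 32nm L3-less quad at ~95 W, 3.5 GHz, 1.30 V
base_w, base_ghz, base_v = 95.0, 3.5, 1.30
for ghz, volts in [(4.0, 1.40), (4.5, 1.45)]:
    est = scale_power(base_w, base_ghz, base_v, ghz, volts)
    print(f"{ghz} GHz @ {volts} V -> ~{est:.0f} W")

By that crude scaling, the upper end of the 3.5-4.5 GHz range only stays inside a 95-125 W envelope if the voltage stays low.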
 

the shared-integer-unit-lasered-off, 4 module / 4 core part (unlike the fx-41/43xx's 2m/4c) should at least perform similar to the fx-61/63xx cpus, if not much better. amd's 4 core and 6 core cpus and apus are crowded into the same price range; if the 'fxtreme' is marketed as a 'promised performance' part, it can gain a foothold there. the sales figures i've seen suggest that amd's 4 core cpus and apus sell in similar numbers, way more than 6c+ cpus, so intel's core i3 would have a very strong contender to deal with. the fx-8350 is only that cheap in the u.s., not everywhere (same with amd mobos). moreover, i only see amd cpu+mobo deals on u.s. sites and maybe on some u.k. or canada sites too, but that's pretty much it. amd has a much better chance of selling 4 cores in emerging markets.

i think llano athlons were practically 32nm athlons (like rana and the others). i don't understand design/architecture that well; to me they looked quite similar. i'm pretty sure there will be kaveri (28nm) athlons but i doubt they will be widely marketed. has anyone seen a trinity athlon ii x4 750/751k in the wild?

edit:
seems like amd's claimed performance 'improvements' with richland will come partly from supporting higher-spec memory (ddr3-2133) versus trinity's ddr3-1866 support. imo the igpu badly needs more memory bandwidth.
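For what the step from ddr3-1866 to 2133 is worth on paper, a quick sketch (assuming dual-channel, 64-bit per channel):

# Peak DDR3 bandwidth: transfers/s x 8 bytes per 64-bit channel x channel count.
def peak_gbs(mts, channels=2, bus_bytes=8):
    return mts * 1e6 * bus_bytes * channels / 1e9

bw_1866 = peak_gbs(1866)   # ~29.9 GB/s
bw_2133 = peak_gbs(2133)   # ~34.1 GB/s
print(f"DDR3-1866: {bw_1866:.1f} GB/s, DDR3-2133: {bw_2133:.1f} GB/s "
      f"(+{(bw_2133 / bw_1866 - 1):.0%})")

That is roughly a 14% peak-bandwidth bump for the igpu before any other changes.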

bd has a new home... in console dev. kits
http://www.xbitlabs.com/news/multimedia/display/20130124235159_Specifications_of_Sony_PlayStation_4_Development_Kit_Get_Published.html
 
seems like amd's claimed performance 'improvements' with richland will come partly from supporting higher-spec memory (ddr3-2133) versus trinity's ddr3-1866 support. imo the igpu badly needs more memory bandwidth.

And I continue to worry about the increased latency that results. There's a reason why GDDR RAM is high speed, low latency...
 


There are new Athlon II X4s on the FM2 socket with no iGPU; they should be in a Tom's review soon. Interesting sub-$90 variants.

orly. :whistle:
then why doesn't amd itself test its high-end gfx cards with fx-8350 (oc'd if needed) based bench rigs?
http://www.amd.com/us/products/des [...] GHz.aspx#4
'cuz intel does better, and then some.
edit: now that i've read the above part, it sounds kinda milf-ish. so i will follow my previously mentioned tips to champion amd cpus and try....
if amd, right now, even just to humor current customers, releases (only) one zambezi sku with one of the shared integer units lasered off, plus clocked a couple hundred mhz higher at the same 125w tdp or (even better) a 95w tdp, it will skyrocket amd's cpu sales. call it an fx 4000/8000 EXXtreme or FXtreme or something like that. the additional thermal headroom will help it hit turbo better, improve gaming performance, silence the amd fanboy(s) who troll every forum thread to educate users on how to disable one of the shared cores, stop boosting phenom ii sales, get an instant recommendation in Tomshardware(tm)'s 'best gaming cpu for the money' article, achieve the eteknix 'unobtainium hyper-galactic king of gaming cpu' award, make kitguru implode and so on. the general public doesn't care about 8 cores. tell 'em it's a $120 4-core amd gaming performance cpu and stick a 'never settle' game or two with it. core i3 and core i5 (except the 2500k, 3570k and higher) would be dead in the proverbial water.

I kinda get what you are trying to get at, but ultimately every roadmap has its layouts and AMD have elected for 4-, 6- and 8-core variants, each with their particular necessities over the others. All of the Vishera parts are capable gamers, and even if we take out the Intel argument, they all still hold their own at their price tags too. Zambezi to Vishera was for the most part a step in the right direction; albeit, now that the Bulldozer arch is dead, we can only speculate as to what the Steamroller arch will offer. Let's hypothetically say around Sandy performance overall; wouldn't that be a very tangible offering?

Right now AMD's massive push is APUs, and with Excavator possibly bringing a unified socket with a mainstream-level iGPU, perhaps that's what AMD are pushing harder. With HD 4000 still languishing around low-end Llano parts and dual-module Trinity parts, and GT3, from what I gathered, expected to still be a considerable margin behind Trinity, with more potent Richland parts out later in the year, AMD has tremendous leverage in integrated graphics. That and HSA are AMD's bread and butter, and it's a very good horse to back as we move forward. AMD have already said themselves that their iGPUs are still basically in an infancy/beta phase, so expect them to improve considerably over the next two architectures.

 

if the current module-based arch gets tweaked, gets a 28nm shrink and achieves sandy bridge-level single-core perf within a 95w tdp, it'll be good - in amd terms. if the sr arch turns out to be something drastically different that gets sb-level perf at 95w without binning, it'd be better imo. i'd still like a bit less tdp for better power efficiency at 28nm in case amd has globalfoundries make the cpus.
although right now jaguar cores seem to be amd's future flagship. the prospect of selling the chips for the new consoles seems very lucrative for amd. this will get them a considerable amount of revenue if the deal is made right. it'll drive hsa (if it's 'productized' in the new consoles) as well.
 
Can't be bothered, and since Intel doesn't really have gfx, I wonder if Intel ever showed off the K8 for its prowess in gaming?
Probably too proud, yet deceiving, to do so.

Point here is, Intel really doesn't have much stake in gfx, no discrete etc.
AMD has to show its gfx in the best light, which is Intel CPU driven.

The 20% of revenue for AMD will be SoC or similar-style chips, and likely to be made at TSMC.
GF is still a slow bleed-in.
 

noob2222

Distinguished


I think one thing that was overlooked in the AT comparison was the embedded memory on that GT3 test chip. That's a $500+ chip.
 

truegenius

Distinguished
BANNED
i think llano athlons were practically 32nm athlons (like rana and the others). i don't understand design/architecture that well; to me they looked quite similar. i'm pretty sure there will be kaveri (28nm) athlons but i doubt they will be widely marketed. has anyone seen a trinity athlon ii x4 750/751k in the wild?
There are new Athlon II X4s on the FM2 socket with no iGPU; they should be in a Tom's review soon. Interesting sub-$90 variants.

i mean real k10, not an apu with defects :(
even the k10-based fm1 llano athlons are bad performers in terms of power consumption and overclocking capability
an old 45nm phenom ii can attain a 2.8ghz clock rate under a 95w tdp and can be clocked to 4+ghz easily, so why not in the case of llano (eg the athlon ii x4 641's sweet spot is only 3.5ghz) :(

an amd fm2 setup costs too much (around $250+ just for a good board and an a10-5800k) in my country
which is similar in price to an i3-3220 + b75 + hd6670 build :/
though am3+ boards are cheap and available in many varieties
 


I am not sure what you mean; without the iGPU, the Athlon II X4 FM2 parts are half the power of the 100W parts, even less than that: a quad running at 60W and less.

Sadly the pricing of the A-series in some countries is out of whack, though where I live the 3220 is more expensive than a 5800K. A 5800K with dual graphics will still out-game the 3220 + 6670, which is something to consider as well.
 

ah. then those wouldn't have the new instruction sets, including some of amd's own. imo amd's main reason is the shift of focus towards mobile products. pre-llano/bobcat amd mobile cpus were crap imo (others may disagree). with piledriver, they could put in a lower-clocked cpu and a very powerful igpu, cut cost by shedding fpu resources and l3 cache, and gain some battery life. sometimes i think piledriver is just a 'mobile' version of bulldozer.

exactly. although i dislike the b75 chipset's wide availability. i blame them for launch-blocking z75 motherboards. :p one can further skew things in intel's favor by choosing matx mobos and tray/open-box chips.
in the end, prices are different in different locations.
 

mayankleoboy1

Distinguished
Aug 11, 2010
2,497
0
19,810
John Carmack says:

I wrote the following (slightly redacted) up a little while ago for another company looking at consumer-level ray tracing hardware as it relates to games. I do think workstation applications are the correct entry point for ray tracing acceleration, rather than games, so the same level of pessimism might not be appropriate. I have no details on Imagination's particular technology (feel free to send me some, guys!).

------------

The primary advantages of ray tracing over rasterization are:

Accurate shadows, without explicit sizing of shadow buffer resolutions or massive stencil volume overdraw. With reasonable area light source bundles for softening, this is the most useful and attainable near-term goal.

Accurate reflections without environment maps or subview rendering. This benefit is tempered by the fact that it is only practical at real time speeds for mirror-like surfaces. Slightly glossy surfaces require a bare minimum of 16 secondary rays to look decent, and even mirror surfaces alias badly in larger scenes with bump mapping. Rasterization approximations are inaccurate, but mip map based filtering greatly reduces aliasing, which is usually more important. I was very disappointed when this sunk in for me during my research – I had thought that there might be a place for a high end “ray traced reflections” option in upcoming games, but it requires a huge number of rays for it to actually be a positive feature.
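For scale on that "16 secondary rays" figure, here is a back-of-envelope sketch (not from Carmack's post; it assumes 1080p at 60 fps with reflective surfaces covering the whole frame, a worst case):

# Rough ray budget for 16-rays-per-pixel glossy reflections at 1080p60.
pixels       = 1920 * 1080
fps          = 60
secondary    = 16                              # minimum cited for glossy to look decent
rays_per_sec = pixels * fps * (1 + secondary)  # primary + secondary rays
print(f"{rays_per_sec / 1e9:.1f} billion rays/second")   # ~2.1 billion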

Some other “advantages” that are often touted for ray tracing are not really benefits:

Accurate refraction. This won’t make a difference to anyone building an application.

Global illumination. This requires BILLIONS of rays per second to approach usability. Trying to do it with a handful of tests per pixel just results in a noisy mess.
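The "noisy mess" point follows from basic Monte Carlo behavior: error falls only with the square root of the sample count, so halving the noise costs four times the rays. A tiny sketch (sample counts are arbitrary, and the 1080p60 figure at the end is my arithmetic, not Carmack's):

# Relative Monte Carlo noise ~ 1/sqrt(samples per pixel).
import math
for samples in (4, 16, 64, 256, 1024):
    rel_noise = 1.0 / math.sqrt(samples)
    print(f"{samples:5d} rays/pixel -> ~{rel_noise:5.1%} relative noise")
print(f"1080p60 at 1024 rays/pixel: {1920*1080*60*1024 / 1e9:.0f} billion rays/s")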

Because ray tracing involves a log2 scale of the number of primitives, while rasterization is linear, it appears that highly complex scenes will render faster with ray tracing, but it turns out that the constant factors are so different that no dataset that fits in memory actually crosses the time order threshold.
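A hedged sketch of that crossover argument, with guessed constants for per-triangle raster cost and per-BVH-level trace cost and one primary ray per 1080p pixel (none of these numbers are Carmack's):

# Per-frame cost of rasterizing n triangles (linear) vs tracing primary rays
# through a BVH of depth ~log2(n). All constants are assumptions.
import math

T_TRI  = 1e-9         # assumed raster cost per triangle (~1 Gtri/s)
T_STEP = 20e-9        # assumed cost per BVH level per ray (traversal + test)
N_RAYS = 1920 * 1080  # one primary ray per pixel at 1080p

def raster_cost(n):
    return n * T_TRI

def raytrace_cost(n):
    return N_RAYS * T_STEP * math.log2(n)

n = 1024
while raytrace_cost(n) > raster_cost(n):
    n *= 2
print(f"crossover near {n:,} triangles")
print(f"rough geometry size: {n * 50 / 2**30:.0f} GiB at ~50 bytes/triangle")

With those guessed constants the break-even point lands around two billion triangles, on the order of 100 GiB of raw geometry, which is the "no dataset that fits in memory" point; nudging the constants mostly just pushes it further out.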

Classic Whitted ray tracing is significantly inferior to modern rasterization engines for the vast majority of scenes that people care about. Only when two orders of magnitude more rays are cast to provide soft shadows, glossy reflections, and global illumination does the quality commonly associated with “ray tracing” become apparent. For example, all surfaces that are shaded with interpolated normal will have an unnatural shadow discontinuity at the silhouette edges with single shadow ray traces. This is most noticeable on animating characters, but also visible on things like pipes. A typical solution if the shadows can’t be filtered better is to make the characters “no self shadow” with additional flags in the datasets. There are lots of things like this that require little tweaks in places that won’t be very accessible with the proposed architecture.

The huge disadvantage is the requirement to maintain acceleration structures, which are costly to create and more than double the memory footprint. The tradeoffs that get made for faster build time can have significant costs in the delivered ray tracing time versus fully optimized acceleration structures. For any game that is not grossly GPU bound, a ray tracing chip will be a decelerator, due to the additional cost of maintaining dynamic accelerator structures.
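A rough illustration of the memory-footprint point, assuming one triangle per leaf and 32-byte nodes (both are simplifying assumptions; real BVH layouts vary a lot):

# Estimate BVH overhead for n triangles: a binary tree with one leaf per
# triangle has ~2n-1 nodes. Sizes below are assumptions for illustration.
n_tris    = 1_000_000
tri_bytes = 36 * n_tris             # assumed indexed-triangle footprint
bvh_bytes = (2 * n_tris - 1) * 32   # assumed 32-byte nodes
print(f"geometry: {tri_bytes / 2**20:.0f} MiB, bvh: {bvh_bytes / 2**20:.0f} MiB "
      f"({bvh_bytes / tri_bytes:.1f}x on top of the geometry)")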

Rasterization is a tiny part of the work that a GPU does. The texture sampling, shader program invocation, blending, etc, would all have to be duplicated on a ray tracing part as well. Primary ray tracing can give an overdraw factor of 1.0, but hierarchical depth buffers in rasterization based systems already deliver very good overdraw rejection in modern game engines. Contrary to some popular beliefs, most of the rendering work is not done to be “realistic”, but to be artistic or stylish.

I am 90% sure that the eventual path to integration of ray tracing hardware into consumer devices will be as minor tweaks to the existing GPU microarchitectures.

http://arstechnica.com/gadgets/2013/01/shedding-some-realistic-light-on-imaginations-real-time-ray-tracing-card/?comments=1&post=23723213#comment-23723213
 

jdwii

Splendid



Again, once Nintendo gets their titles out the door the system will sell better. It's already doing better than the 360 + PS3 did at launch; it just won't be as big of a hit as the Wii.

 

cgner

Honorable


um.... no. Like people have mentioned here many times, AMD chips are inferior because of their many threads. The vast majority of games do not use more than 2 cores. Even not-so-old titles like Dawn of War 2 show a major FPS drop to 30-35 on an 1100T @ 4GHz when there is a battle on screen, because they are single-threaded.
 

viridiancrystal

Distinguished

Your reasoning lacks logic. Having more threads does not make the threads that are used slower. If a game uses 2 cores and only 2 cores, it will perform almost exactly the same on a dual-core i3, quad-core i5, and hexa-core i7. AMD does not do as well in *Edit: SOME* games due to how powerful each thread is. Also, I imagine there are very few people who would play a game at 30 fps (minimum, mind you) without seeing that it is 30 fps and calling the game "unplayable" because of frame rates.
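A toy model of that argument: a game that only spawns two heavy threads gets the same frame rate on anything with at least two cores, and only per-core speed moves the needle. The relative speed numbers below are invented for illustration, not benchmarks:

# Toy scaling model: frame rate depends on min(game threads, cores) and per-core speed.
def fps(game_threads, cores, per_core_speed, frame_work_ms=33.3):
    # frame_work_ms: single-core milliseconds of work per frame, split across usable threads
    usable_cores = min(game_threads, cores)
    frame_ms = frame_work_ms / (usable_cores * per_core_speed)
    return 1000.0 / frame_ms

for name, cores, speed in [("dual-core i3", 2, 1.0),
                           ("quad-core i5", 4, 1.0),
                           ("hexa-core i7", 6, 1.0),
                           ("8-core, weaker per core", 8, 0.75)]:
    print(f"{name:24s} {fps(2, cores, speed):5.1f} fps")

The first three land on identical frame rates; only the weaker-per-core entry drops, which is the point about per-thread performance rather than thread count.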
 