AMD Piledriver rumours ... and expert conjecture

Status
Not open for further replies.
We have had several requests for a sticky on AMD's yet to be released Piledriver architecture ... so here it is.

I want to make a few things clear though.

Post a question relevant to the topic, or information about the topic, or it will be deleted.

Post any negative personal comments about another user ... and they will be deleted.

Post flame baiting comments about the blue, red and green team and they will be deleted.

Enjoy ...
 


Lol at price/performance? For 22 grand you can get a V6 Mustang from Ford! Anyway, if you want to talk more about this you are more than welcome to private message me here and we can discuss!
 


There's a meme for "V6 Mustang", you know? 😛

And I disagree, USA cars are not bad by definition, they're just "basic" in terms of tech and "cheap" in terms of build (with a few exceptions). Big engine produces a big output and they call it a day. When we put efficiency into the equation, the USA starts lagging a little, but the V8 used in the GT500 is quite a nice piece of tech if you ask me. Although I agree with cgner that the USA car companies actually fail hard when trying to do something that is not a big engine (V6 or V8) and battle efficiency from the EU or Japan. The Chevy Volt is the closest to America's finest piece of car tech out there and it seems it's doing horribly inside the States. Here in California I've only seen 1 so far and I thought there would be a lot more, but I do see a shit ton of Priuses... It's really sad, actually =/

Anyway... I think you meant AMD in what you wrote, gamerk.

Cheers!
 
Exactly, only a V6 Mustang. What exactly do you get in it, in terms of features, comfort, performance, build quality? It's a plastic box that typically girls drive for looks.

Like I said, American cars fail at efficiency, build quality, interior materials, and especially resale value. Even if I could get essentially the same car with two different badges (Pontiac Vibe/Toyota Matrix, Toyota Corolla/Chevy Nova) I would probably stick to Japanese cars for better resale value 5-8 years down the road.

Sure, now and then you will get guys saying "zomg Honda never built a 20 liter V8 that revved all the way up to 2300 rpms", but they built some of the best, winning Formula 1 engines. They've got the tech and experience in powerful engines, they just don't touch a very niche market of race cars during tough economic times. They even scrapped the next NSX. Honda and Toyota dominate the family car market because that's where them monez be ats yo
 

If you're running a discrete GPU that "disables" the IGPU, absolutely I want to offload more work to the IGPU. In this case it doesn't matter what kind of program, I want to use 100% of that IGPU.

For just an APU, it's a bit situational. If I'm running a program that has almost no graphics overhead, absolutely I want to offload more work to the GPU (Photoshop, video encoding, etc.). If I'm gaming, chances are I won't be running the IGPU, but in this extreme case, you are right. But how many people does this even apply to? Not very many, but let's examine what happens anyway.

If the IGPU isn't disabled, how many shaders does AMD allocate to FP calcs? That's up to AMD, but consider an A10 running at an A8's GPU level instead. If that's the case, you're not hurting yourself very much on the GPU side.

[benchmark chart: 50112.png]


You still beat the crap out of Intel's HD 4000. :sol:

The A8s lose two SIMDs (and eight texture units), leaving a 256-shader part. That's not much of a graphics performance hit for losing 1/3 of your shader cores (2 of your 6 SIMDs, i.e. 128 shader cores converted to FPU work). So now consider a game that is extremely FPU heavy. You're losing 4 fps on the graphics core, but gaining 8 fps from having stronger FP ... I don't think I need to do the math.
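Just to spell out the back-of-the-envelope math above (all numbers are the post's own illustrative figures for a hypothetical A10-class part, not measured data):

```python
# Hypothetical trade-off: reallocate 2 of an A10-class iGPU's 6 SIMDs
# from graphics to FPU-style work. Shader counts assume 64 shaders/SIMD.
TOTAL_SIMDS = 6
SHADERS_PER_SIMD = 64

simds_for_fp = 2
simds_for_graphics = TOTAL_SIMDS - simds_for_fp

graphics_shaders = simds_for_graphics * SHADERS_PER_SIMD  # A8-class: 256
fp_shaders = simds_for_fp * SHADERS_PER_SIMD              # 128 for FP work

fps_lost_to_graphics = 4   # post's hypothetical cost in an FPU-heavy game
fps_gained_from_fp = 8     # post's hypothetical gain from stronger FP

net_fps = fps_gained_from_fp - fps_lost_to_graphics
print(graphics_shaders, fp_shaders, net_fps)  # 256 128 4
```

So under those assumed numbers you come out 4 fps ahead overall, which is the whole point of letting the scheduler (or the user) shift SIMDs to FP in FPU-bound cases.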

Jumping to conclusions without testing or any reasoning, well, I don't see it as a bad thing. I would speculate that initially AMD will allow "user control" for GPU offloads in games, that way you (or the APU driver) can decide when or how much is appropriate. In the case of a dGPU, it's 100% allocated to FP. And yes, most likely there will be some driver-level control initially for HSA, as performance will be case by case.



Since when do Intel's CPUs stink at FP math?

Yes I know you meant AMD, but I was returning your favor of nitpicking everyone over "core vs thread", where you're always pointing out CORES != THREADS
 
As far as CPUs go, they don't. Graphics cores are far better for FP math. I think AMD is moving towards removing the FPU from their CPUs entirely, and using the on-die graphics for FP math instead. If they beat Intel to it, they will have a huge advantage.
 



Have fun with that souped up lawn mower! :kaola:
 



Car and Driver 10 Best Cars of 2012 and Kelley Blue Book Top Ten Family Cars of 2012


3 American cars in the top 10 best. ENOUGH SAID!


http://editorial.autos.msn.com/car-and-drivers-2012-10best-cars#1

http://www.kbb.com/car-reviews-and-news/top-10/10-best-family-cars-of-2012/?r=788367368513718300&selectedindex=10
 
Qualcomm joins HSA Foundation
http://www.fudzilla.com/home/item/28958-qualcomm-joins-the-hsa-foundation
i hope cpa and this hsa foundation effectively help amd.
by the way...
The HSA Foundation was created back in June at AMD's Fusion Developer Summit. It aims to deliver new user experiences through advances in computing architectures that improve power efficiency, performance, programmability, and portability across computing devices, and to provide general software support across a broad spectrum of devices so that code does not need to be rewritten for different platforms.
amd does not pay attention to 'real gamers, enthusiasts and power users (tm)' who don't care about power efficiency. this is a good thing for amd, focus on the areas that will help them make money. 😀
 
HSA will be AMD's edge in the not-too-distant future. Intel will need to come up with nothing short of the "greatest graphics core ever......", and that is not going to happen without assistance from AMD or Nvidia, and I don't see that happening either.
 

amd has been trying to make a dent in intel's stranglehold on workstation cpus for years. maybe they can do it with workstation-class apus. they need to fix the bottlenecking of their igpu first, and provide timely and solid software support.

i don't see hsa's appeal to regular users other than gamers. for regular users it will be another marketing gimmick like 'moar cores' and '2 billion transistors' etc.
 
Can someone enlighten me pls?

Tom's review in June stated that Trinity/Piledriver/the 5800K is up to 15% faster than Bulldozer at the same frequency.

It stated that in iTunes the 5800K is 15% faster, and in 3ds Max 2012 it is 15% faster as well.

How come all the review sites say otherwise now? What is wrong with Trinity now? Or what was wrong with Tom's review?

How did this much of a performance difference occur in that review?

What we see in reviews now is nowhere near this, right?

[chart: per-core iTunes performance]

[chart: per-core 3ds Max performance]


http://www.tomshardware.com/reviews/a10-5800k-a8-5600k-a6-5400k,3224-2.html



 
what are those 'other reviews'? did they compare trinity to zambezi the way tom's did? links to some of those reviews?
if i remember correctly, every amd-biased review site like kitguru, eteknix, bjorn3d, rage3d gave trinity apus a 'must have', 'only gaming cpu/apu one ever needs' or similar verdict. others followed amd's guidelines for trinity reviews and got similar results. the consensus among reviewers is that trinity is good.
tom's test showed that at the same clock rate and configuration (dual module vs dual module), the a10 5800k performs 15% faster than the fx-8150 at those two tasks (itunes and 3dsmax). other applications will not show exactly the same scaling in every benchmark. tom's benches are best compared against their own benches.

edit:
moar ghz: trinity overclocked to 7.3 ghz
http://www.fudzilla.com/home/item/28965-trinity-overclocked-to-73ghz
 


Selective benchmarking. Different benchmarks will scale differently, and the ones review sites love tend to be the most threaded, and thus benefit Trinity the most.

If benchmarks for any CPU show a 15% max gain, then you can basically guess the "real world" gain will be about half of that in practice. Never get sucked in by "best case" benchmarking.
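The "about half of best case" rule of thumb above is easy to apply (the 0.5 discount is the post's heuristic, not a measured figure):

```python
# Heuristic from the post: treat a vendor's "best case" benchmark gain
# as roughly double what mixed real-world workloads will actually see.
def expected_real_world_gain(best_case_gain, discount=0.5):
    """Discount a best-case benchmark gain to a real-world estimate."""
    return best_case_gain * discount

best_case = 0.15  # the 15% clock-for-clock figure from iTunes / 3ds Max
print(f"{expected_real_world_gain(best_case):.1%}")  # 7.5%
```

Which is roughly where the 7%-9% clock-for-clock estimates for Vishera later in this thread land.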
 
It's funny how benchmarks are looked at differently depending on what colour you prefer. AMD has its strengths, Intel has its strengths; benches will show you a bunch of nicely coloured graphs with numbers like an elementary school reading book, people analyze it and call it Bulldirt........ then we continue using what we like to use anyway.
 


I use all manufacturers' components; it comes with the territory. Historically I prefer the etiquette around AMD setups.

I do however support the price-to-performance argument, and I believe it can work in a more holistic manner than it is often employed. I have often given my opinions on both Intel and AMD as well as Nvidia just to keep it objective; obviously though... nobody likes mentioning AMD setups.


---------------------------------------------------------------
http://www.extremetech.com/computing/135388-amd-cpu-bonanza-trinity-desktop-prices-intels-counter-and-the-piledriver-fx-8350s-performance

I liked this read; I think it is well written. I'd draw attention to the last sentence on Steamroller: there have now been utterances of 4- and 6-core variants, and I have also heard about a change of approach to modularization for Steamroller. The utterances suggest AMD has abandoned the shared-resources approach, since neither power nor performance benefited from it, but it will still be a modular architecture.

http://www.extremetech.com/computing/135105-amd-details-steamroller-cpu-architecture-a-refined-piledriver-with-a-dynamic-l2-cache

Article on Steamroller. The only thing that I want is consistent performance updates per release. It looks promising all in all; I am wondering where the iGPU goes.
 
Like I said a few pages back... Vishera will have a 7%-9% advantage clock per clock over Zambezi overall, according to what Trinity showed. And that is counting L3 cache IMO. Maybe with games it would be more, but never averaging 15%. Hope I'm wrong, but I don't think I am either.

And it's not bad by any stretch, but not the 15% best case figure people want to believe.

Cheers!
 


OBR is just an internet troll and has butchered the English language. Fair enough, people speak different languages, but to be so far off on grammar and syntax is completely unacceptable...... along with his excessive internet trolling, did I mention that?



Yeah, it's always been between 5 and 15%, mostly around 10%, and that is still a credible improvement considering it's limited to refinements.

 