AMD Piledriver rumours ... and expert conjecture

Status
Not open for further replies.
We have had several requests for a sticky on AMD's yet-to-be-released Piledriver architecture ... so here it is.

I want to make a few things clear though.

Post a question relevant to the topic, or information about the topic, or it will be deleted.

Post any negative personal comments about another user ... and they will be deleted.

Post flame baiting comments about the blue, red and green team and they will be deleted.

Enjoy ...
 
The whole point of OpenCL and OpenGL is that they are vendor-neutral specifications. I only have experience writing OpenGL programs, but there is no particular issue with writing OpenGL code for AMD and Nvidia platforms (it's more of a problem for Intel). Of course there are vendor-specific issues, just like there are differences between IE and Firefox in HTML support, but 99% of the code is within the specifications.
 


The problem with OGL is that, frankly, it's a horrid API. Multiple function calls to create a single object is SOOOO '90s. So ATI and Nvidia both added extensions to the API that make it usable. Of course, this breaks any cross-platform compatibility.

Nevermind that ATI/AMD OGL drivers have historically been very sub-par.
 

Hard to compare performance per dollar, since in many cases you can't just buy a CPU and run it; you need all the other parts.
 


That's where AMD has always been: performance per dollar. And if they don't improve performance, Intel won't need to improve their performance per dollar, meaning expensive chips with little improvement.
 


Dozer was just such a flop that it takes a while for the aftertaste to go away. AMD CPUs have been stagnant for... 3? 4? years now? All they do is raise clocks. I believe it will be the same with PD: most of the improvement will come from higher clocks.



More like Core2 killer :sweat:
 

:pfff: Just like with Trinity, right?
[charts: per-core iTunes and per-core 3ds Max benchmark results]
 
I never saw this kind of fuss and cherry-picking with the Pentium 4 and Pentium D. People have sat and tried to quantify the IPC scenario in benches that show a huge drop-off, mostly old ones that were never updated or optimized for the Bulldozer module... a write-off. Performance otherwise is so relative to concurrent chips that the fuss from non-AMD users is baffling. I would say the bigger concern is power and heat, but oh well.

The P4 and Pentium D compensated for their IPC loss with deep pipelines running higher clock speeds, so much so that they were not that far behind Athlons in overall performance. Yet they were called crap, and nobody ever got over the fact that a P4 could literally heat a room on a winter's day.

Anyway, I have played around with enough chips to realize that putting so much endeavor into BD not quite being the alpha and omega is wasted effort... build a bridge and deal with it.
 



Let me start off by saying I am not trying to get into a fanboy flame war.
I use AMD and Nvidia, and I use AMD and Intel.
I have no emotion invested in a corporation.

So with that said, I am confused.
When we were on the Cinebench 11.5 benching thread, the AMD cards beat the Nvidia cards in the OpenGL part of the test.
My HD 5770 was beating much higher-end Nvidia cards in OpenGL.
Nvidia cards that would destroy my HD 5770 in DirectX games were scoring lower than mine on the OpenGL bench.
Why is that?
 


Before Core 2 Duo, AMD's clocks always counted for more than Intel's, but Intel made up for it by pushing the frequency so high that Pentium chips performed just as well or better. The P4's pipeline was so deep that Intel's manufacturing was unable to hide the flaws in the design. This forced Intel to develop an efficient architecture for the first time, resulting in the current situation we find ourselves in.

BD's branch prediction and cache characteristics (mainly L2) were so bad that AMD was able to improve IPC by a tangible amount without a complete redesign of the architecture. PD is so much better than BD because they were able to fix a bunch of things that were holding BD back in terms of both power and performance.

By contrast, Sandy Bridge is already such an efficient architecture that Intel had to work much harder for a 10% IPC improvement.
 
I'm just hoping that AMD keeps Piledriver on AM3+ so that the mobo I just got (ASUS Sabertooth 990FX R1.0) will be usable for the life of its warranty.

All things being relative, since I'm using it for gaming, most of the benchmarks I've seen for Bulldozer show it within tolerances of acceptable performance, with massive cost savings. What I mean by that is, a decent mobo and BD CPU will run you around $290-350 at this point ($400 on the high end), while with Intel you're going to be forking that much out JUST for the mobo or the chip, not both.

Basically, what I'm getting at is that as a complete system, I can save money on the mobo/CPU and put it towards a better GPU, so for any non-CPU-limited game (most that I've seen benchmarked), I'll be performing as well as or better than the person who went Intel, dollar for dollar. Or, for the same performance, they'll have spent more.

Also, the Intel mobos I've seen don't give you near as much for the same price as the AMD ones. (I was considering an i5 2500K, but the fact that you get around half the USB ports etc. of the AMD board, with none of the good features, helped sway me.)
 


The performance-per-dollar disparity isn't as large as you make it seem, but I agree for the most part. Fortunately AMD is willing to lower their prices enough to compete with Intel.
 



I can't for the life of me understand why AMD went with this design when Intel did the same thing and failed spectacularly. And as for "deal with it": OK, let's see AMD say that to their fans, and let's see what their stock (contracts with companies, fanboys, future sales) will look like tomorrow.

Not to mention AMD needs to get their act together in the server market, which is more important than anything; I hope they really do deliver with Steamroller. I really don't see too many people buying Opterons over Xeons. Some of that is probably down to contracts, but it's also due to the lack of performance and performance per watt. Sure, AMD's Opterons are cheaper than their Intel counterparts, but companies rarely upgrade their server hardware, so when they do upgrade, they look for the best, and the best means performance per watt.



I don't understand this either, when you can get an i5-2310 and a decent board for around $250-280 and have a great gaming CPU, better than anything AMD makes right now (and an 8150 is $190).
I would, however, say a Phenom II X4 965 is a great deal: grab one of those plus a 212+ for $30 (you could probably overclock the 965 to 3.8-4.2 GHz without too much trouble), then buy a decent board, and you can probably save $50 or so going with AMD. But that's usually it, since I still say you should overclock a 965 (hence the heatsink) to at least around 3.8 GHz to get rid of possible bottlenecks in CPU-intensive games. Hey, that $50 could get you a 7850 over a 7770.
 
I am not saying that BD was ever acceptable, but it's not entirely bad either. Some will adopt narrow-mindedness towards it, but the more you play with it, the more you can sort of find performance. An overlooked factor is the number of tests where it more than holds its own between Intel's highest-end i5s and i7s, sometimes edging both. It is not a failed architecture; it is an unrefined architecture which has a way to go yet.

As for heat and power, AMD is gradually making progress, but no, it's not at the level of Intel's mega-billion-dollar fabs... that is rather obvious.

Yes, PD is an improvement; how much of one will depend on how much vested interest a particular end user places in it. If you expect golden graffiti then you may not be satisfied, but there are tangible gains on all fronts, so it may just please enough on progress alone.
 
internet is gearing up for trinity release. trinity is coming! coming! coming! coming....

http://www.tomshardware.com/reviews/trinity-gaming-performance,3304.html
http://www.tomshardware.com/reviews/a10-5800k-a8-5600k-a6-5400k,3224.html
http://www.tomshardware.com/reviews/a10-5800k-a8-5600k-trinity-apu,3241.html

AMD Performance Edition Memory Review
http://semiaccurate.com/2012/09/26/amd-performance-edition-memory-review/
HD Gaming For The Masses: A Trinity Preview
The best APU every year...
http://semiaccurate.com/2012/09/26/hd-gaming-for-the-masses-a-trinity-preview/
Trinity by the numbers, speeds and feeds
Cores, counts, and megahertz, not much more
http://semiaccurate.com/2012/09/26/trinity-by-the-numbers-speeds-and-feeds/

AMD Trinity A10-5800K vs Intel Ivy Bridge i5-3470 - Discrete GPU Gaming Performance
http://vr-zone.com/articles/amd-trinity-a10-5800k-vs-intel-ivy-bridge-i5-3470--discrete-gpu-gaming-performance/17272.html
First Look: MSI FM2-A85XA-G65 - Military Class for Trinity
http://vr-zone.com/articles/first-look-msi-fm2-a85xa-g65--military-class-for-trinity/17253.html

Desktop Trinity coming next week, in OEM flavour
http://www.fudzilla.com/home/item/28905-desktop-trinity-coming-next-week-in-oem-flavour

AMD A10-5800K & A8-5600K Review: Trinity on the Desktop, Part 1
http://www.anandtech.com/show/6332/amd-trinity-a10-5800k-a8-5600k-review-part-1

AMD Trinity: An iGPU Performance Preview
http://www.bjorn3d.com/2012/09/amd-trinity-igpu-performance-preview/

AMD Trinity FM2 APU Preview
http://www.techpowerup.com/reviews/AMD/FM2_APU_Preview/

AMD Trinity for Desktops. Part 1: Graphics Core
http://www.xbitlabs.com/articles/graphics/display/amd-trinity-graphics.html

edit 2:
tweaktown's roundup of links
http://www.tweaktown.com/news/25955/amd_trinity_preview_roundup/index.html

i just started reading 'em...

edit 1:
and finally... techreport lets the cat out of the bag... 😀

AMD attempts to shape review content with staged release of info
http://techreport.com/blog/23638/amd-attempts-to-shape-review-content-with-staged-release-of-info

let's face it, intel, amd, nvidia all do this. it's useless to pick a 'lesser evil' or such... :)
 
^
Alas, AMD has asked us to refrain from publishing overclocking and CPU side benchmarking results until 2nd of October.
seems like they don't want to hype PD's performance and overclocking capability

AMD is keeping it like a girl's skirt:
long enough to cover the subject (as HTPC use) and short enough to maintain interest (as PD sounds good) 😗
 
I'm now officially sure PD is going to suck:

http://techreport.com/blog/23638/amd-attempts-to-shape-review-content-with-staged-release-of-info

Here's the tidbit that caught my instant attention:

The idea here is for AMD to allow a "preview" of the product that contains a vast swath of the total information that one might expect to see in a full review, with a few notable exceptions. Although "experiential testing" is allowed, sites may not publish the results of non-gaming CPU benchmarks.

I can see where this is heading: review sites saying "it's as fast as IB!" in games, in GPU-bottlenecked situations. Then the chip releases, and boom, we find it stinks, just in time to screw the early adopter.

Call me biased, but the only reason to restrict information is to hide how bad the chip is.
 


Yeah, we got strict orders from our AMD head office that absolutely no results for Trinity or Vishera may be mentioned; basically it is to avoid the media circus of the last release.

Radeon RAM is fantastic though 😛
 



Not necessarily. Maybe they know publishers would take money from Intel and post biased benchmarks, and they did not feel like buying them off with their own money, so they just told publishers not to talk about their CPU for a bit. :pt1cable:
 


Hmm, by that criteria, Netbust wasn't a 'failed architecture' either - just had a long ways to go to get to 10GHz 😀
 