AMD Piledriver rumours ... and expert conjecture

Page 146
Status
Not open for further replies.
We have had several requests for a sticky on AMD's yet to be released Piledriver architecture ... so here it is.

I want to make a few things clear though.

Post a question relevant to the topic, or information about the topic, or it will be deleted.

Post any negative personal comments about another user ... and they will be deleted.

Post flame baiting comments about the blue, red and green team and they will be deleted.

Enjoy ...
 
amd's quick sync counterpart is 'video codec engine' (vce). it's found in current 7700 and higher series gfx cards. amd hasn't activated support for it (typical amd). trinity will also have vce. afaik llano apus don't have anything similar to quick sync.
http://www.anandtech.com/show/5261/amd-radeon-hd-7970-review/9
you sure about that?

http://developer.amd.com/sdks/AMDAPPSDK/Pages/default.aspx

The problem is AMD developed it and didn't push it at all until recently. Hopefully this is a sign that Rory Read will push more development into AMD's technologies instead of saying "this is what it can do, now use it".

http://blogs.amd.com/fusion/2012/04/24/adobe-and-amd-enable-brilliant-experiences/

In fact, a mainstream notebook PC based on the AMD A8-3530MX APU is up to 672% faster when accelerated by the horsepower of the AMD Radeon™ graphics technology in the APU.*

Granted, VCE is geared directly toward video encoding and should be faster, but ATI Stream is also usable for video encoding.
 
I don't think so! But then again, we're guessing. I would say $12 is realistic, but $25? No way. I wish, though; that would be great news, since it would mean AMD had more money for research and could even pay off its debt.

Well, considering AMD was trading at what, less than $1 just a year or two ago, I don't think any investor is going to be disappointed if AMD "only" hits $12/share.

As an aside, I can't help thinking we need the "Round 3, FIGHT!" sound effect from Mortal Kombat to play every time someone opens this thread. 😛
 
huh. totally forgot about open cl and stream. i kept thinking vce vs nvenc (or whatever nvidia's is called) vs quick sync.
iirc some review site did a comparison long ago and found qs 1.0 to have higher quality than amd's gpu based transcoding and nvidia's cuda based transcoding. then nvidia's nvenc came out with gtx 680 which turned out to be as fast as quick sync 1.0.
The problem is AMD developed it and didn't push it at all, until recently.
yeah, they should have done that earlier. imo amd's approach with vce is better.
about the mercury playback engine... strangely, adobe's site only lists two amd gpus, and only on macs. i don't really understand why the amd gpus are listed for macs only; i'll leave that to more knowledgeable people.
http://www.adobe.com/products/premiere/mercury-playback-engine.html
http://www.adobe.com/products/premiere/tech-specs.html
edit: er.. if i understood correctly, qs, vce and nvenc are separate logic circuits built specifically for transcoding, unlike open cl or cuda transcoding which is driven by the gpu's shader cores..?
 
"Intel is like the USS Enterprise ... doesn't stop often for refueling." ReyNOD, you said it best! I have two SB 2500K machines, and if I were building new, an Ivy Bridge would be in it. Look at Tom's Hardware's CPU Hierarchy for gaming CPUs. AMD's top chip is not the FX-8150 or FX-8120. Good grief, it's the FX-4100!

I have been loyal to AMD almost to a fault for years. The last Intel CPU I owned was a P60 (remember that debacle?). However, after the Phenom IIs AMD lost its way, and Sandy Bridge blew it out of the water. Don't believe me? Jump on any Sandy Bridge, or now Ivy Bridge, machine. FAST!

What I hope is that Piledriver quietly rights the AMD ship for higher-end CPUs. Marketing slides shouting "8 cores!" won't do it. Solid, improved performance will. Let's hope!
 
iirc some review site did a comparison long ago and found qs 1.0 to have higher quality than amd's gpu based transcoding and nvidia's cuda based transcoding. then nvidia's nvenc came out with gtx 680 which turned out to be as fast as quick sync 1.0.

Tom's testing of quality:
http://www.tomshardware.com/reviews/video-transcoding-amd-app-nvidia-cuda-intel-quicksync,2839-10.html

Anandtech's:
http://www.anandtech.com/show/4083/the-sandy-bridge-review-intel-core-i7-2600k-i5-2500k-core-i3-2100-tested/9

CUDA looked horrible in Anandtech's tests.

I also read somewhere (can't find it now) that QS output file size is actually larger than a fully CPU-driven encode, leading me to believe that QS also backs off on some of the compression.

For my personal use, I am extremely picky on quality. I use software encoding; granted, it's slower, but I don't get any "anomalies" that would make me delete the file and redo it.

Then again, one of the guys I work with is all about making the smallest file he can.
 
Well, the key to Trinity's success is not only the graphics prowess it has right off the bat; it's also the software around that graphics power that needs to be shown (this is leaving the CPU aside).

My point is: when the next console gen arrives (if it does), it'll set the bar a little higher, and we'll see if Trinity has the power to survive in the long run as a viable portable-gaming platform. Also, Flash and HTML5 are getting very good HW acceleration nowadays, and games/apps WILL start using it more. That's also related to decoding capabilities and such, inside a web browser.

I'm really curious about the software layer they announced to make a real "fusion" with the hardware. I think that piece will determine long-term success, even more than the hardware itself.

Cheers!
also, better gpu performance for less money is something it's good for.
 
i actually agree to a certain extent. I think AMD has a decent strategy right now, thinking long term, but they need to build more of a brand presence with the general consumer. If they had a cool commercial with a Bulldozer running over Intel computers, placed during sporting events, that would work.

I will repeat myself: consumers don't understand IPC. They do understand more cores and higher frequencies, though. Intel proved that with the Pentium 4 and its higher-frequency push.

AMD back in Oct was about $5 a share; last I looked it was around $9, and I think it will climb by the end of the year.

Actually it was Nov. 23, 2011 when AMD closed at $5.05. Today it closed at $7.18.

Back on Sept. 15th of last year, it was trading at $7.34, and about 2 weeks later it dropped to $4.53, nearly three bucks a share lower. So gambling on stock prices can be risky.
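Those swings work out like this, as quick percentages on the prices quoted in the two posts above:

```python
# Percentage moves from the share prices quoted in this thread.
sep_price, oct_price = 7.34, 4.53    # Sept 15, 2011 vs. ~2 weeks later
nov_low, today = 5.05, 7.18          # Nov 23, 2011 close vs. "today"

drop = (sep_price - oct_price) / sep_price * 100
rebound = (today - nov_low) / nov_low * 100
print(f"autumn drop: -{drop:.0f}%, rebound since Nov: +{rebound:.0f}%")
# → autumn drop: -38%, rebound since Nov: +42%
```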
 
I was quoting info that you have to pay to get. It hinges on AMD following their tick-tock schedule, being able to sell improved models each year, and on both the Chinese and Indian markets.

Wasn't there a news article here (or maybe on S/A or BSN) about the Chinese pushing their own in-house CPU for next year or so? Complete with their own instruction set, IP tracking and other 'security' features 😛 so the gov't can lock their citizens out of the Internet, or at least specific websites. If so, you can subtract that 15% of the world's burgeoning population from potential AMD and Intel sales.
 
by the end of [strike]2014[/strike] 2007 perhaps Intel will have a solution .

even [strike]Haswell[/strike] Conroe won't beat [strike]trinity[/strike] Barcelona ...

I see AMD gaining big in markets they were never in before; those gains come off Intel's back...

Piledriver, Steamroller, Excavator ... Bulldozer will be formidable ..

Look familiar yet?

Most enthusiasts here have learned the hard way to never count AMD chickens before they are fully hatched & independently benchmarked 😀..
 
Look familiar yet?

Most enthusiasts here have learned the hard way to never count AMD chickens before they are fully hatched & independently benchmarked 😀..
Which is why I really don't count anyone's chickens at all. I really doubt Trinity will be amazing, but it should do what Llano did, only slightly better. It would be hard to mess that up.
 
also, better gpu performance for less money is something it's good for.

Not quite; the "cheaper" argument is sometimes blown out of proportion. I understand that better performance for less money is good, but you always have options at certain price points. Llano is not the undisputed king in mobility, take that into account. Take a look at the GT540M variants (GT525M) and you'll see that the A8+discrete is in i5 + GT525M territory; this is 14"+ form factor. The A8 has to leverage some things to be competitive against the i5 in features (not only graphics) such as 1080p, 8GB RAM and chassis.

Now, in smaller form factors (13" and 14"), the A8 is almost always the best in P/P, but they sacrifice build quality most of the time TBH.

Anyway, yes. I agree that getting a better product for a cheaper price is always good for us. That's why competition is always healthy and we have to protect that.

Look familiar yet?

Most enthusiasts here have learned the hard way to never count AMD chickens before they are fully hatched & independently benchmarked 😀..

I lined up some Linux benchies for BD against SB, and they're not that far apart now using GCC 4.7. Not that it becomes a rocket, but BD gets much more performance from a fresh compile using the new instructions. I know it's beating a dead horse on the Windows side, but it's just saddening that you can't compile something yourself, or at least get access to an executable optimized for your CPU. It's not like well-coded programs need huge changes to use a different compiler =/

I'm still waiting for revisited, re-compiled versions of Windows programs that support BD. We should see a lot of performance gains if what GCC showed is easily "ported" to Windows compilers (I don't remember how this works across compilers).

So not all hope is lost for AMD. They really have to play the software card now or they'll really "byte" the dust, hehe.

Cheers!
 
Not quite; the "cheaper" argument is sometimes blown out of proportion. I understand that better performance for less money is good, but you always have options at certain price points. Llano is not the undisputed king in mobility, take that into account.
I always found the A8 products to be facing off against Intel's i5s without any dedicated GPU. The A6 competes with the i3 and such. Below $800 it's pretty hard to find a computer as good in terms of graphical performance as the AMD APUs. Build quality will definitely differ, but if you look at comparable HP laptops with both Intel and AMD, the price difference for anything Intel with a comparable dedicated GPU is generally much more than what AMD charges. Unless you order online, it's hard to find anything below an i7 that even comes with a dedicated AMD or nVidia GPU. Looking at laptops from Best Buy, it's pretty pathetic what the Intel laptops offer at their price points in regards to GPU performance.

I'm not expecting half-the-price-of-Intel magic, but I do expect the A10 Trinity to be priced the same as the i5 Ivys for about the same quality. I'd say that's a pretty competitive point given the better graphical performance.

It would be pretty amazing if AMD ultrathins could hit the $600 mark, though. Even if they have to drop things like SSDs, it's still an attractive option for many people who just want a nice laptop that can do what they need and maybe play a few games.
 
I always found the A8 products to be facing off against Intel's i5s without any dedicated GPU. The A6 competes with the i3 and such.


I got my A8 3820 for $550 in my laptop, and there was no laptop Intel produced within that price range that could play games the way this one does. IMO an A8 laptop is not worth more than $650 unless it comes with an SSD.
 
Ok, got done doing preliminary benchmarks on my 3550MX this weekend, and wow is all I gotta say. This chip has plenty of headroom; it just needs to be tweaked. I'm completely bamboozled as to why AMD didn't push it further.

3550MX stock, no modification

B0: 2700 @1.3500
P0: 2000 @1.1125
P1: 1700 @1.0875
P2: 1600 @1.0625
P3: 1400 @1.0250
P4: 1200 @1.0000
P5: 1000 @0.9625
P6: 0800 @0.9375

CB 11.5

CPU:2.57
Single Thread:0.64
MP Ratio: 4.03

Watching HWinfo64 I could see all four cores at 2.0Ghz the entire time, temps were 60~65c

3550MX Overclock / Undervolt modification with K10
B0: 3000 @1.3500
P0: 2700 @1.3500
P1: 2000 @1.1125
P2: 1600 @0.9250
P3: 1400 @0.9125
P4: 1200 @0.9000
P5: 1000 @0.8875
P6: 0800 @0.8750

Up: 200ms @ 60%
Down: 2000ms @ 20%
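Out of curiosity, the undervolt's effect can be ballparked with the usual CMOS dynamic-power rule P ∝ f·V². This is a rough sketch only: it ignores leakage and assumes switched capacitance is unchanged; the voltages are the stock vs. modified P-state tables above, for the states where only voltage changed.

```python
# Approximate dynamic-power savings from the undervolt, per P-state,
# using P ~ f * V^2 (same frequency, so the ratio is just (V_new/V_old)^2).
stock = {1600: 1.0625, 1400: 1.0250, 1200: 1.0000, 1000: 0.9625, 800: 0.9375}
tweak = {1600: 0.9250, 1400: 0.9125, 1200: 0.9000, 1000: 0.8875, 800: 0.8750}

for mhz in stock:
    saving = (1 - (tweak[mhz] / stock[mhz]) ** 2) * 100
    print(f"{mhz} MHz: ~{saving:.0f}% less dynamic power")
```

At 1600MHz that works out to roughly a quarter less dynamic power for the same clock, which fits the posted thermal headroom story.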


CB 11.5
CPU:3.22
Single Thread: 0.75
MP Ratio:4.29

This is with Windows allowed to dynamically move the thread around. The result: all four cores constantly clocked at 2.7GHz; they never made it to 3.0 because the thermal headroom ran out. Temps were 89~94C, but holding steady and not rising. When I set the affinity mask to core 3 I got the following.

Single Thread: 0.86

HWinfo64 had Core 3 pegged at 3.0Ghz and temps at ~64c and holding. The other cores clocked down to their 800 idle speed.

Finally, I clock-locked core 3 at 3.0GHz (K10 can force a core to a set speed) and reran the single-threaded test.

Single Thread: 0.87

Ran it over and over on both and got a consistent minor improvement. I attribute this to the core not having to spool up from 800MHz to 3.0GHz at the start of the test.

K10 lets you adjust the spool up / down times. The way it works: if a core is being hit hard (≥60% load) for a certain period (200ms), it is forced to the next higher P state; if the core is under low utilization (≤20%) for a certain period (2000ms), it is forced to the next lower P state. With HWinfo64 up you can actually see the cores stepping up and down.
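That up/down rule can be modeled as a toy governor. Only the thresholds and timings (60% for 200ms up, 20% for 2000ms down) and the stock P-state clocks come from the posts above; everything else here is an illustrative sketch, not the actual K10 implementation:

```python
# Toy hysteresis P-state governor modeled on the rules described above.
class PStateGovernor:
    def __init__(self, pstates_mhz, up_thresh=0.60, up_ms=200,
                 down_thresh=0.20, down_ms=2000):
        self.pstates = pstates_mhz       # clocks in MHz, fastest first
        self.idx = len(pstates_mhz) - 1  # start in the slowest (idle) state
        self.up_thresh, self.up_ms = up_thresh, up_ms
        self.down_thresh, self.down_ms = down_thresh, down_ms
        self.hi_ms = 0                   # time spent above up_thresh
        self.lo_ms = 0                   # time spent below down_thresh

    def step(self, util, dt_ms):
        """Feed one utilization sample (0.0-1.0) covering dt_ms milliseconds;
        returns the clock the core runs at after this sample."""
        if util >= self.up_thresh:
            self.hi_ms += dt_ms
            self.lo_ms = 0
        elif util <= self.down_thresh:
            self.lo_ms += dt_ms
            self.hi_ms = 0
        else:                            # middle ground: reset both timers
            self.hi_ms = self.lo_ms = 0
        if self.hi_ms >= self.up_ms and self.idx > 0:
            self.idx -= 1                # promote to the next faster state
            self.hi_ms = 0
        elif self.lo_ms >= self.down_ms and self.idx < len(self.pstates) - 1:
            self.idx += 1                # demote to the next slower state
            self.lo_ms = 0
        return self.pstates[self.idx]

# Stock 3550MX P0..P6 clocks from the table above, in MHz.
gov = PStateGovernor([2000, 1700, 1600, 1400, 1200, 1000, 800])
for _ in range(4):                       # 400ms of full load, 100ms samples
    freq_after_load = gov.step(1.00, 100)
print(freq_after_load)                   # promoted twice: 1200
for _ in range(20):                      # 2000ms near-idle
    freq_after_idle = gov.step(0.05, 100)
print(freq_after_idle)                   # demoted once: 1000
```

Note how the asymmetric timers bias the governor toward clocking up fast and down slow, which is exactly what you want for bursty interactive loads.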

I really hope trinity keeps this level of control over the CPU. Using my own manual tweaking I was able to get a very large performance boost of 25.7% on multi-core tests and a solid performance boost of 15~17% on single threaded tests.
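Those percentages can be sanity-checked against the CB 11.5 scores posted above (my arithmetic lands at ~25.3% for multi-core rather than 25.7%, but same ballpark):

```python
# Recomputing the gains from the Cinebench 11.5 scores posted in this thread.
stock_mt, tweaked_mt = 2.57, 3.22   # multi-threaded, stock vs. tweaked
stock_st, tweaked_st = 0.64, 0.75   # single-threaded, scheduler free to roam
pinned_st = 0.86                    # single-threaded, pinned to core 3

mt_gain = (tweaked_mt / stock_mt - 1) * 100    # ~25% multi-core
st_gain = (tweaked_st / stock_st - 1) * 100    # ~17% single-thread
pin_gain = (pinned_st / stock_st - 1) * 100    # ~34% once pinned
print(f"MT +{mt_gain:.1f}%, ST +{st_gain:.1f}%, pinned ST +{pin_gain:.1f}%")
# → MT +25.3%, ST +17.2%, pinned ST +34.4%
```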

If you know the type of program you're going to run, you can gain further performance by manipulating the affinity mask to force it to run on the same core(s), and thus benefit from the higher clocking headroom. Windows' scheduler assumes every core is the same speed, which is why nobody gets much benefit from boosting technology out of the box. You have to manipulate the settings yourself to get the most out of it.
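A quick sketch of that pinning trick. The Linux affinity API is shown for illustration; on Windows the equivalent is Task Manager's "Set affinity", the SetThreadAffinityMask API, or launching with `start /affinity`:

```python
import os

# Pin the current process to a single core so the boosted core stays hot
# instead of the scheduler bouncing the thread between cold cores.
original = os.sched_getaffinity(0)   # cores this process may run on
target = max(original)               # pick one core (e.g. core 3)
os.sched_setaffinity(0, {target})    # restrict the process to that core
pinned = os.sched_getaffinity(0)
# ... run the single-threaded workload here ...
os.sched_setaffinity(0, original)    # restore the original mask
```

On Windows, `start /affinity 8 app.exe` does the same thing for core 3 (mask 0x8 = bit 3 set).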
 
I got my A8 3820 for $550 in my laptop, and there was no laptop Intel produced within that price range that could play games the way this one does. IMO an A8 laptop is not worth more than $650 unless it comes with an SSD.


Maybe you mean the A8-3520M. The A8-3820 is socket FM1, not FS1; it's for desktop use, not laptops. The highest CPU for socket FS1 / notebooks is the A8-3550MX.
 
1.35V is the highest the chip will go and is the default for the 2.7GHz setting. I'm going to see if I can undervolt it at 2.7 to get the thermals down; I was really uncomfortable with it at 90C. It held and was stable, but that's not a temperature I'd like my laptop to run at. It just shows the cooling solution, not the chip, is the limitation.

That 3.22 was 2.7GHz on all cores. I was very surprised it held @2.7 on all four; my GF's 3530MX won't hold all four @2.6 for very long, eventually they start slipping down to 1.9. My theory is the 3550MX's were samples set aside for their ability to hold high clocks at lower voltage than the rest of the laptop chips. Gonna see how much lower I can push the voltage on the 1.6GHz-and-below states.
 
I am thinking that if it got 3.22 at 2.7GHz, then by the same clock-for-clock IPC reasoning, at 3.5GHz like my PhII Deneb it would probably post a better CB11 score. I would be curious to clock my Deneb down to 2.7 and run CB11 to see how it compares.
 
I am thinking that if it got 3.22 at 2.7GHz, then by the same clock-for-clock IPC reasoning, at 3.5GHz like my PhII Deneb it would probably post a better CB11 score. I would be curious to clock my Deneb down to 2.7 and run CB11 to see how it compares.


A PhII Deneb has L3 cache; this chip doesn't, but it does have a larger L2. I would think the doubled L2 makes up for the lack of the 6MB shared L3.

Later going to see if I can test the GPU components.

Side Note,
I found out why HP is shipping the extra 4GB sticks as DDR3-1600 coupled with their regular 2GB DDR3-1333. Those 4GB sticks aren't being detected as DDR3-1600 by the motherboard BIOS; one of them is LVDDR3 (1.35V instead of 1.5V), and Llano only operates at DDR3-1333 with LVDDR3. I'm assuming these are part of a stock that didn't pass validation testing for DDR3-1600 on some of their higher models, and they're packaging them as "free" to get rid of them. Now I'm looking at buying some Crucial DDR3-1600 CL9 memory that's been tested and known to work with this laptop. From what I've been told, they're only $40~$50 USD for a matched set.
 