AMD Piledriver rumours ... and expert conjecture

We have had several requests for a sticky on AMD's yet-to-be-released Piledriver architecture ... so here it is.

I want to make a few things clear though.

Post a question relevant to the topic, or information about the topic, or it will be deleted.

Post any negative personal comments about another user ... and they will be deleted.

Post flame-baiting comments about the blue, red, and green teams and they will be deleted.

Enjoy ...
 
What about CyberLink apps like Espresso?
I use PowerDirector 9 and PowerCinema, and while they have support for ATI Stream, it seems a little limited;
most of the time it really doesn't use my HD 5770.

Are they free or sponsored? I kinda like to use what's free and/or open-source if I can... Besides, I don't do any video editing at all, just re-encoding or encoding from my camera/phone and stuff... I have an example in 720p from New Year's I can post for you guys, done in x264.

A quote from me about quality:

So let me get this correct: I shouldn't take encoding into account when I purchase a CPU, even though I perform that function 95% of the day (it's pretty much how I make my money)? Are you serious?

Look, when I started out encoding, my machine's CPU was a P4 3.2 (socket 478) with an ATI All-In-Wonder 9600. Running that program once took about 25 to 30 minutes to encode; back then, with 6-8 instances of that program open and running at the same time, it would take 2-3 hours.
Now fast-forward to my machine today: I can open that same program 9 times (and get better quality, faster),
and all 9 can be done at the same time in no more than 15 minutes.
So I would totally disagree with you on this.

Ah, my bad; didn't read that.

Still though, I'm curious how you do the encoding. Is it just a factory-settings encode? Do you have any statistics on how your encodes end up looking and how much they weigh? If you can, of course 😛

Anyway, I'm wondering how the FXes will actually perform if I compile x264 from source. Paladin, do you want to give it a try? haha
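If anyone does try it, here's a rough sketch of how I'd time the runs (the clip name is a placeholder, and it assumes the x264 binary you built is on the PATH):

```python
import subprocess
import time

SOURCE = "sample_720p.y4m"  # placeholder clip; any input x264 accepts works

def time_encode(threads, preset="medium"):
    """Run one x264 encode and return the wall-clock seconds it took."""
    start = time.time()
    subprocess.run(
        ["x264", "--preset", preset, "--threads", str(threads),
         "-o", f"out_{threads}t.mkv", SOURCE],
        check=True, stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL,
    )
    return time.time() - start

# See how the encode scales as the thread count goes up
# (interesting on BD/PD, where modules share resources).
for n in (1, 2, 4, 8):
    print(f"{n} thread(s): {time_encode(n):.1f}s")
```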

Cheers!
 
No, the CyberLink apps are paid software, but reasonably priced considering how much stuff like Sony Vegas Pro goes for.
I do video editing with PowerDirector 9 as a hobby right now for my YouTube channel,
but I run a small computer repair business and I am looking to expand into doing home movies like weddings and bar mitzvahs, plus working with up-and-coming music artists.
I have captured VHS tapes with my capture card, so I figure a lot of people have old home movies on VHS that they would like background music and titles/fades/captioning added to.
I'm already talking to a member of a metal band about doing his band's videos.

For me, with an AM3 board, it makes more sense to go with a 960T BE and try to unlock it, or buy a Thuban BE, than to go to BD or even PD.
Even if PD is 20% better, it isn't worth buying a new AM3+ mobo when I can use the same mobo and go with a Thuban.
Heck, the 960T BE is only $99 at my local Micro Center.
For anybody with a higher-end AM3 Deneb/Thuban etc., even PD doesn't seem to be a worthwhile upgrade, even if the IPC is improved, IMHO.
Definitely not for gaming purposes; maybe for workstation purposes an 8-core PD would be worthwhile.
Just my opinion, I could be wrong.
 

I do agree with the second paragraph: AM3/Thuban or Deneb is still a strong platform for the multi-purpose user. AM3+ is good, but I would only settle for an 8-core BD or PD when released.
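To put rough numbers on that value argument (the $99 960T is from the post above; every other figure here is a hypothetical placeholder, not a real benchmark):

```python
# Back-of-the-envelope upgrade value: relative performance per dollar spent.
# Performance is normalized to the unlocked 960T/Thuban option = 1.00.
# Prices other than the $99 960T are invented for illustration.
options = {
    "960T BE on existing AM3 board": {"cost": 99,  "perf": 1.00},
    "8-core PD + new AM3+ board":    {"cost": 320, "perf": 1.20},  # assumes the speculated 20% gain
}

for name, o in options.items():
    value = o["perf"] / o["cost"]
    print(f"{name}: {value * 100:.2f} relative perf per $100")
```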

 
Time will tell for that, king. But I think you should get an Nvidia or AMD card once NVENC and VCE are working with all the major encoding software out there. The codec I mentioned (LAV) uses CUDA for decoding and (I think) encoding; from what a friend tells me it does quite a good job, but I don't have a first-hand impression to share.

I will agree with earl here for mass-produced encoding on crappy sources, since the quality you'll lose could be minimal next to the speed increase of the encode. I wonder about the enhancements you could apply to the old videos, though, and whether they can be applied down the encoding pipeline. I think most fixed-function encoders don't allow filters to be applied down the pipe; I don't recall, to be honest =/

If you're set on doing it in software, then a fast CPU is indeed all you need.
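When the hardware paths do mature, comparing them against software x264 is straightforward; a minimal sketch, assuming a recent ffmpeg build where `libx264` is the software encoder and `h264_nvenc` is NVENC's (the clip name is a placeholder):

```python
import subprocess
import time

SOURCE = "home_movie.mp4"  # placeholder input clip

def encode(encoder, outfile):
    """Encode SOURCE with the given ffmpeg video encoder; return seconds taken."""
    start = time.time()
    subprocess.run(
        ["ffmpeg", "-y", "-i", SOURCE, "-c:v", encoder, outfile],
        check=True, stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL,
    )
    return time.time() - start

# Software encode vs. the GPU's fixed-function block; compare the times,
# then eyeball the outputs, since speed isn't the whole story.
print(f"libx264:    {encode('libx264', 'sw.mp4'):.1f}s")
print(f"h264_nvenc: {encode('h264_nvenc', 'hw.mp4'):.1f}s")
```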

Cheers!
 
"But I think you should get an nVidia or AMD card once the nvENC and VCE are working with all major encoding software out there. "

I have an HD 5770 now, which supports OpenCL/DirectCompute/ATI Stream.
I am not familiar with NVENC or VCE;
could you PM me some info on those standards?
I don't want to go too off-topic out of respect for Reynod and the other mods :)
 
I could have told people that IB wasn't going to be a huge performance jump. That's because Intel already made a near-perfect uArch with SB. The downside of making such an awesome uArch is that it leaves you very little room for improvement, whereas AMD, with BD, has significant headroom.

So the question becomes: will AMD reach that potential before Intel designs a new uArch? Only time will tell.

I reckon so; AMD have done the exact opposite to Intel.
Intel's next plan is to put loads of cores on a single CPU, with line-ups of 10/12 cores per CPU.

Whereas AMD has 8... now.

But this is just a guess.
Hope Piledriver is going to be exceptionally good!
 
AMD Korea leaks Trinity benchmarks in retail material?...
[Images: Trinity_3DMark_11.png, trinity_a10.jpg.jpeg]

http://vr-zone.com/articles/amd-korea-leaks-trinity-benchmarks-in-retail-material-/15664.html
 
"Hope Piledriver is going to be exceptionally good!"

Depends on what "exceptionally good" is. If it happens to sit somewhere between Westmere and Sandy Bridge, that would be a massive leap; obviously, closer to the latter would be preferable. In any event, I hazard a guess that with 15% improvement on Zambezi, it should be around Intel first-gen i7 performance at the requisite price range.
 
2.3GHz with turbo, or the next P-state down? Not bad for 35W stock, though.

It is quite impressive IMO... Being in GT 540M territory as a standalone iGPU is friggin' good.

The GT 540M is one of the best discrete notebook video cards, and Trinity is sitting right on top of it, lol.

It's still behind the 635M, but not that far away.

Let's see how it fares in the CPU department when it launches, hehe.

Cheers!
 
Has there been any insight as to what the MSRP will be on the desktop models? If they are not priced competitively with the Sandy Bridge Celerons and Pentiums then AMD might as well not release them.
 
I really, really doubt it will get close to a 3.5GHz A8 CPU-wise. And since there are almost no mobile Phenom IIs out there, I'd guesstimate that a Llano A8 @ 2.5GHz will be on equal terms with the Trinity A10 @ 3.0GHz. The graphics will see a major boost, though, but I wonder how it will scale with CPU speed when using all of the TDP. Said another way: the CPU throughput won't be much different/higher than today's A8-3500, but GPU-wise it will.

I'm being a lil' pessimistic, but I think it's the right approach 😛

Cheers!
 

What would really make me happy is if Trinity were backward compatible with the original socket FS1 via a simple BIOS update. If it is, then I will be hunting through Amazon daily 😉
 
We'll definitely be seeing higher clocks; the RCM technology AMD is licensing guarantees that. BD (or BD-e) is less efficient per clock than the older K10(.5), though it reaches a higher clock rate than the K10.5. What we'll need to look at is the performance per $$ and per watt. Clock speed isn't important when comparing CPUs of different uArchs; what is important is the performance you get per cost / energy usage.

That, and I absolutely hate single-threaded benchmarks; they defeat the purpose of advanced CPU designs.
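For anyone who wants to eyeball it that way, the metric is trivial to compute from any benchmark score (all numbers below are made-up placeholders, just to show the arithmetic):

```python
# Performance per dollar and per watt from a single benchmark score.
# Every figure here is an invented placeholder, not a real result.
chips = [
    # (name, benchmark score, price in $, TDP in W)
    ("Chip A", 1000, 200, 125),
    ("Chip B",  850, 140,  95),
]

for name, score, price, tdp in chips:
    print(f"{name}: {score / price:.2f} pts/$, {score / tdp:.2f} pts/W")
```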
 
IPC is the key...

Which instructions are you talking about?

How many ADDs per clock? MULs? How about INC / DEC? Are we counting JMPs and INTs?

Considering each of those instructions takes a different number of cycles to complete, each mix of them would produce a different potential "instructions per clock".

Then we've got the SIMD sets, where one instruction processes multiple math operations. Are we counting that as a single instruction or as three?

"IPC" is a made up term used to denote the efficiency of a particular uArch, particularly with respect to cache usage and prediction. It was widely used during the Pentium 4 days when talking about the Athlon chips. Intel was using the marketing technique that it had more "Megahertz" and thus was faster. Once you go to SMT / CMT it loses most of it's meaning as you now effectively have 2~8 times the processing resources as previously. Any attempt at "IPC" ratings in a single threaded test is only measuring 1/(core count) of the processing resources. Clock super-charging techniques like turbo-boost only server to further muddy the water.

Ultimately the only thing that matters is what you can get for your money and how hot it'll run. Otherwise it's just members of one label taunting members of another label. Might as well have people put on blue and green face paint and run at each other with maces and swords while screaming battle cries.

-=Edit=-

Listing of the entire x86 instruction set.
http://en.wikipedia.org/wiki/X86_instruction_listings

It's big, it's bulky, and there are a ton of different ways to do the same thing.
 
IPC is the key...


What if I have a 3.0GHz CPU and you have a 1.0GHz CPU with twice the IPC... no, I'm joking with you! I HATE THIS SAYING!! AMDZone be damned! :kaola:

Whoever says this needs to understand that there are no 1.0GHz flagship CPUs anymore! AMD only has a 600MHz lead or so, but their cores are only ~50% as good as Intel's per clock!

To step up to Intel (which we all want but know isn't happening), they would need twice the performance per clock, or at least 85% of the per-clock performance with a 20% or so higher clock rate, which just isn't realistic given Intel's already-high 3.5GHz clocks and AMD's module design approach, which is best for multithreaded apps.
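The arithmetic behind that, as a quick sanity check (a crude model where throughput is roughly per-clock performance times clock speed):

```python
# Crude model: throughput ~ per-clock performance x clock speed.
intel = 1.00 * 3.5          # baseline per-clock performance at 3.5GHz
amd   = 0.85 * (3.5 * 1.2)  # 85% per-clock at a 20% higher clock
print(f"relative throughput: {amd / intel:.2f}")  # ~1.02, roughly parity
```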


I'm sure there are engineers here who can tell you it just isn't reasonable to have a CPU clocked much higher than 4.2GHz at stock (if that, with 6+ cores).

So all AMD can compete with is graphics and moar cores. That said, I'm quite proud of Intel and Ivy Bridge when it comes to graphics: no longer will it be a bottleneck for 1080p video or three-monitor setups for business tasks. And as time goes on, Intel will give users more cores for less money, and AMD will start to lose its lazy "moar cores" strategy. But I just hope Intel won't catch up to AMD in the graphics department (when it comes to APUs). If so, then darn!


Come on AMD, get to work! Keep taking market share from Intel!

http://www.itworld.com/hardware/271752/amd-gains-x86-processor-market-share-intel-q1
 