AMD Piledriver rumours ... and expert conjecture

We have had several requests for a sticky on AMD's yet-to-be-released Piledriver architecture ... so here it is.

I want to make a few things clear though.

Post questions or information relevant to the topic, or your post will be deleted.

Post any negative personal comments about another user ... and they will be deleted.

Post flame-baiting comments about the blue, red, or green teams ... and they will be deleted.

Enjoy ...
 
Thank you. All I want to say is exactly what you said. :lol:

Well, I do encoding as well: VirtualDub + x264 + MKVtools.

All those benchies amount to squat since you can't fine-tune quality and compression settings at all... At least, I don't have the money to get a license for the pro apps that don't suck at encoding while using the special paths, so I just deal with the "late night encodings".

I think I said this once, but "encoding" by itself is by no means something to "take into account" when measuring a GPU's or CPU's "prowess". What should matter is the quality of the encode and decode, and with software you always get what you want. Anand did a very good job comparing decoding and encoding quality a while ago, and how far back Intel is in the quality department is shameful.

Anyway, if you have a better suggestion on encoding software to use, I'm all eyes to read, hahaha.

Cheers!

EDIT: Just read Anand's quick HTPC piece for IB. Looks like Intel managed to address most of the shortfalls of the iGPU. They still lack a few things, but nothing that big for 90% of the folks. That's a good thing.
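For illustration, here's a minimal sketch of the kind of fine-tuning a CLI encoder like x264 exposes, which the one-click benchmark apps hide; the file names and settings are placeholder assumptions, not a recommended recipe.

```python
# Sketch: calling the x264 CLI with explicit quality settings, the knobs
# the one-click benchmark encoders don't expose. Assumes an x264 build
# on PATH; file names and values are placeholders.
import subprocess

def encode(src: str, dst: str, crf: float = 18.0) -> None:
    """Constant-quality encode: lower CRF = better quality, bigger file."""
    subprocess.run([
        "x264",
        "--crf", str(crf),      # quality target instead of a fixed bitrate
        "--preset", "slow",     # slower preset = more compression per bit
        "--tune", "film",       # psy tuning for live-action content
        "-o", dst, src,
    ], check=True)

encode("input.y4m", "output.264")
```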
 
True, but seeing as how the L3 (and L2 for that matter) don't help BD much, maybe it won't matter
 

So let me get this correct: I shouldn't take encoding into account when I purchase a CPU, even though I perform that function 95% of the day (it's pretty much how I make my money)? Are you serious?

Look, when I started out encoding, my machine was a Socket 478 P4 3.2 with an ATI All-In-Wonder 9600. Running that program once took about 25 to 30 minutes to encode, and back then with 6-8 instances open and running at the same time it would take 2-3 hours.
Now fast forward to my machine today: I can open that same program 9 times and get better quality faster;
all 9 can be done at the same time in no more than 15 minutes.
So I would totally disagree with you on this.
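Rough math on those ballpark numbers (a sketch, not a benchmark):

```python
# Rough throughput math on the ballpark figures above; treat the
# result as an order of magnitude, not a measurement.
old_jobs, old_hours = 7, 2.5      # ~6-8 encodes in 2-3 hours on the P4
new_jobs, new_hours = 9, 0.25     # 9 encodes in ~15 minutes today
speedup = (new_jobs / new_hours) / (old_jobs / old_hours)
print(f"~{speedup:.0f}x throughput")  # ~13x
```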
 
What he was saying is that benches aren't always good for encoding, because quality can differ between methods, especially when you take into account Quick Sync, ATI Stream and CUDA.
So while one setup might be much faster at an encode, that doesn't do any good if the quality is poor;
it will look good on a benchmark but not too good for your customer.
 
I can understand that, but that doesn't apply to me.
I know the real benefits I've received from the CPU I purchased over my old equipment.
 

QS is kinda cheesy; it's basically built for benchmarks. A while back Andrew Chew did an article about storage and encoding, and I noted that the output file from QS was 2x the size of the output file from the software-only encoder, even though both had the same encoding settings. After doing some digging: the on-board hardware encoder tends to skimp on the compression. This means that when setting Q to static, your QS files will be larger than software files (depending on codec), especially at higher Q. If you set Q to variable but file size to static, then QS will have lower quality vs the software codec. Also, SB doesn't support 10-bit AVC, only the common 8-bit profile, which makes it a dead option as most of the studios are going over to 10-bit; heck, most of the fansub sites are now on 10-bit. Granted, 10-bit AVC won't play on most mobile devices because their hardware decoders only support 8-bit, so it hasn't yet emerged as a mobile standard.

QS is fine for home video converting and archival, maybe even a little side project here or there, but if you're doing something serious I would stay away from QS. Then again, if you're a serious AV buff you already know this and have your own preferred setup.

And yeah Yuka, those are fine tools. I've been using StaxRip to do my re-encodes to get my anime to play on my WDTV Live. They've all switched to 10-bit and my WDTV doesn't support that.
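To illustrate the two rate-control trade-offs above, here's a minimal sketch using the x264 CLI as the software encoder (Quick Sync itself is driven through Intel's Media SDK, which isn't shown); file names and bitrates are placeholders.

```python
# The two rate-control modes discussed above, via the x264 CLI.
# Constant quality lets file size float; two-pass target bitrate pins
# the size and lets quality float. Paths are placeholders.
import subprocess

SRC, OUT = "input.y4m", "output.264"

def constant_quality(crf: float) -> None:
    # Pin quality: a sloppier encoder needs more bits (bigger files)
    # to hit the same Q, which is the QS behavior noted above.
    subprocess.run(["x264", "--crf", str(crf), "-o", OUT, SRC], check=True)

def constant_size(kbps: int) -> None:
    # Pin size with two-pass ABR: quality becomes the floating variable.
    subprocess.run(["x264", "--pass", "1", "--bitrate", str(kbps),
                    "-o", "/dev/null", SRC], check=True)
    subprocess.run(["x264", "--pass", "2", "--bitrate", str(kbps),
                    "-o", OUT, SRC], check=True)
```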
 
I could have told people that IB wasn't going to be a huge performance jump. It's because Intel already made a near-perfect uArch with SB. The downside to making such an awesome uArch is that it leaves you very little room for improvement, whereas AMD, with BD, has significant headroom.

So the question becomes: will AMD reach that potential before Intel designs a new uArch? Only time will tell.
 
Super Socket 7 was just Socket 7, but it was still an AMD-only socket; that was when AMD started to push their own setups instead of using Intel's sockets. So it would still count.

Whoa, I missed this one somehow. No such thing ever happened. Intel refused to license Slot-1 / Socket-370 to AMD, so AMD decided to further develop Socket 7 with 100MHz bus speeds. The Taiwanese chipset makers then incorporated AGP and ATX features into Socket 7, and thus it was named "Super Socket 7", much to Intel's chagrin. Intel was hoping the move to an exclusive socket would cut AMD out of the ATX / AGP world while increasing revenue from Slot-1 / Socket-370 licensing fees. Intel had originally been very liberal about selling perpetual Socket 7 licenses; that's why you had so many people making CPUs for it. From Slot-1 onward, Intel refused to sell anyone a license to make CPUs.

Also, you can get K6-2 and K6-2+ CPUs to work on older Intel Socket 7 boards. You have to set the multiplier to 1.5 or 2.0 (which is interpreted as 6.0) and set the voltage as low as possible. That's how I got a K6-2+ to work on a Dell Optiplex GX60.
 

Intel's die shrinks are never meant to do much in the way of performance, mainly just thermals and power usage. That's why IB has a 77W TDP vs 95W at the same clock speed.

Sometimes they do a few tweaks and get a decent amount, but a second-gen architecture on a process normally won't benefit from moving to a new process until a new architecture is built for it.

Take Nehalem -> Gulftown: not much in the way of performance improvements, but they managed to fit 6 cores into 32nm with a 130W TDP while using a bit less power than a 45nm quad core. Comparing performance, it was about the same per core and per clock, with Gulftown having about a 5-8% advantage in some areas.

But Sandy Bridge blew Nehalem away, mainly because it was a new architecture on a mature process, designed for performance.

Basically, the first 22nm architecture that will truly shine is Haswell. But if someone has a 4-5 year old system and is upgrading right now, IB is the way to go, since prices are the same as current SB CPUs.


I never said that was the case; I just stated that that was when AMD started to move to their own sockets instead of licensing them from Intel. Whether Intel refused to license them or whatever is another matter.

But Super Socket 7 was mainly designed with AMD's CPUs in mind, while Intel went another route that AMD soon followed (slots) and soon discarded (as did Intel, since, well, the slot design sucked).
 

Your talents are seriously wasted in a big box store.

Cyrix, IDT, Intel, AMD, and Rise all had CPUs on Socket 7. Socket 7 and "Super Socket 7" are the exact same socket; the CPUs involved are interchangeable. "Super Socket 7" added a 100MHz bus (up to 133/150, actually), that's all. AGP / ATX were motherboard-specific functions and had nothing to do with the socket. The 100MHz+ Socket 7 CPUs were the AMD K6-2/3 and Cyrix M-II.

AMD didn't choose not to license Slot-1 from Intel; Intel refused to sell licenses to anyone. AMD would gladly have continued making their CPUs on the same socket as Intel in order to be supported on Intel's better boards. The 440BX was an amazing chipset for its time.

AMD didn't "push" anything, especially not over the non-existent Intel licenses. They were suddenly forced to create their own socket, as AMD had been expecting Intel to license out its next socket like it had all the previous ones. The continued use of Socket 7 was a stop-gap measure intended to let AMD maintain competition in the low-end segment while they developed the Athlon and its own socket. Do not act like AMD suddenly decided to go off and do their own thing.
 
There are plenty of things about AMD in general that are too easily taken for granted. Many AMD boards have wonderful I/O performance, such as very responsive SATA and IDE controllers as well as rather quick USB, compared to many Intel-based boards. The differences are too small for most people to notice, but they do make up for some of AMD's weaknesses in general. There are plenty of reasons why AMD was so popular a few years ago in the high-end server and workstation market. Personally, I think they need to merge their high-end desktop line with the low-end server line (for example, AM3+ and Socket G34) so there are fewer platforms to support, which would also enrich the features available to both markets.

I agree with that. AMD has to play smart now; they cannot go toe-to-toe with Intel performance-wise, but they can still outdo Intel in the market. First they need a product that can compete for market share. Fingers crossed PD is a significant step.


 

First off, I don't work in a big box store. I work at a locally owned and operated small business, which is much better since we tend to have people better suited for the job.

Second, I never said Intel licensed it; I said both went with a slot design at some point, which was proven inferior to the socket design.

Finally, I just said that Super Socket 7, while backwards compatible, was still designed for the K6-2 and K6-III CPUs, and was the only way to run those CPUs at their maximum rated speed.

And I am sure AMD would have loved to follow Intel's socket design and be held back by a chipset not fully supporting their CPUs and ideas. At some point, even if the license agreement still existed, AMD would have moved to their own platform, as that's the only way to get what they want out of the platform instead of relying on Intel or being just a clone.
 
APUs are begging for memory bandwidth.
But this HD 6750 is using 1600MHz 128-bit GDDR3 VRAM 😱
www.newegg.com/Product/Product.aspx?Item=N82E16814161395
It contains 720 SPUs, almost double that of AMD's top APU,
so it makes me wonder whether the GPU in an APU is limited by TDP, by RAM bandwidth, or by market strategy (to keep low-to-mid-range cards alive).

Also,
some APUs are still based on 40nm
while there is no 40nm CPU, so why not a 28nm APU (like Trinity), so as to fit more SPUs in the APU and still stay under a 100W TDP :??:
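A back-of-envelope comparison of the bandwidth in question (nominal figures; dual-channel DDR3-1866 for the APU side is an assumption):

```python
# Back-of-envelope peak-bandwidth comparison using nominal numbers.
def peak_gb_s(mt_per_s: float, bus_bits: int) -> float:
    """Peak bandwidth = transfers/s * bus width in bytes."""
    return mt_per_s * (bus_bits / 8) / 1000.0

# Card in the listing above: ~1600 MT/s effective on a 128-bit bus.
print(peak_gb_s(1600, 128))  # ~25.6 GB/s, dedicated to the GPU

# APU side: dual-channel DDR3-1866 = 128 bits total, shared with the CPU.
print(peak_gb_s(1866, 128))  # ~29.9 GB/s, split between CPU and GPU
```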
 
AMD needs a better memory controller, plus we need DDR4 to come out soon. We even noticed BD needs decent RAM speeds; the Phenoms were never that hungry for RAM speed. Trinity is going to need fast RAM, that's for sure. Well, I guess having no L3 cache (8MB of slowness) might help out latency, though.
 

So you are actually willing to trade off quality for encoding speed? For real?

I have to re-encode all Hi10p for my S2, hahaha. And yeah, that plus 4K support is missing from the HD 4000, so for the new stuff coming out right now the encoder won't do much either. At least they got into the game in the quality department with 8-bit, but the thing you say about size is really worrisome.

Well, the last thing I re-encoded for the Sammy GS2 was To Aru (Railgun and Index). Now I'm doing it with Nichibros (Danshi Koukousei no Nichijou); man, that thing makes me laugh.

I'm experimenting with LAV as well, but LAV doesn't support anything from AMD, hahaha.

Cheers!
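For what it's worth, a minimal sketch of that kind of Hi10p-to-8-bit re-encode, assuming an ffmpeg build with libx264; file names are placeholders:

```python
# Minimal Hi10p -> 8-bit re-encode for devices whose hardware decoders
# only handle 8-bit AVC. Assumes ffmpeg with libx264 on PATH; file
# names are placeholders.
import subprocess

def to_8bit(src: str, dst: str) -> None:
    subprocess.run([
        "ffmpeg", "-i", src,
        "-c:v", "libx264",
        "-pix_fmt", "yuv420p",  # force 8-bit 4:2:0 output
        "-crf", "18",           # near-transparent constant quality
        "-c:a", "copy",         # pass the audio through untouched
        dst,
    ], check=True)

to_8bit("episode_hi10p.mkv", "episode_8bit.mkv")
```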
 

A quote from me about quality:

So let me get this correct: I shouldn't take encoding into account when I purchase a CPU, even though I perform that function 95% of the day (it's pretty much how I make my money)? Are you serious?

Look, when I started out encoding, my machine was a Socket 478 P4 3.2 with an ATI All-In-Wonder 9600. Running that program once took about 25 to 30 minutes to encode, and back then with 6-8 instances open and running at the same time it would take 2-3 hours.
Now fast forward to my machine today: I can open that same program 9 times and (get better quality faster);
all 9 can be done at the same time in no more than 15 minutes.
So I would totally disagree with you on this.
 