AMD Piledriver rumours ... and expert conjecture

We have had several requests for a sticky on AMD's yet to be released Piledriver architecture ... so here it is.

I want to make a few things clear though.

Post questions or information relevant to the topic, or your post will be deleted.

Post any negative personal comments about another user ... and they will be deleted.

Post flame baiting comments about the blue, red and green team and they will be deleted.

Enjoy ...
 

Always good to see AMD's graphics division doing well.

I'm not sure I get this. AMD has the fastest single GPU on the market in terms of gaming performance. In general, I would say GCN is a better architecture than Kepler because of how dynamic it is: it excels at many things, while Kepler is rather limited.

Can you explain in more detail?
 
I use Metro as the best indicator of performance: a stock 8150 is 3 FPS faster than a stock 1100T and 1 FPS slower than a stock 2700K. The game is vendor-neutral and hence gives unskewed results.

Yeah Noob, that's what it is: they are talking about spending millions to get into benchmarketing and taking on partners, but that's millions lost to R&D. The good news is that the Unreal and Havok engines will be fully partnered with AMD, so I think we will see good performance gains.


But obviously people will cherry-pick results from titles made with Intel and Nvidia partners.

Anyway, I ran a stock 2500K with a GTX 560 Ti on Skyrim fully maxed; it struggled to maintain 40 FPS, sometimes spiked below 30 FPS, and got a bit choppy. Some woman claimed to play Skyrim fully maxed at 1080p on Intel's GT2 HD 5000 and called it playable; that's going to end badly. I can assure you the GTX 560 Ti is about "very big number"x stronger than any Intel iGPU solution.
 


I did stock and overclocked benches and got different results from the universal ones, minus of course those grossly affected by Intel and Nvidia involvement. I also ran overclocks at around the 30% clock boost AMD intended over previous-generation chips, around the 4 GHz mark. Basically, AMD intended that clock speed to compensate for the IPC lost to the deep pipeline, the same thing Intel did with Prescott back in the day, but Zambezi was rushed and they didn't achieve stability at the intended 4 GHz mark.

Take the 4170 over the 4100 and per-core performance improves significantly; an 8150 at 4.3 GHz gets extremely close to Intel's concurrent mainstream chips, which in itself is very impressive. There are many ways of getting there: AMD are doing it with higher clocks, while Intel, having the money, did it at the metal level; either way, how it's achieved is irrelevant. The simple fact of the matter is an Intel chip may be better, but it's certainly far from the numbers bandied around.
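To put the clocks-versus-IPC argument in rough numbers: single-thread performance is approximately IPC times clock speed, so a deficit in one can be bought back with the other. A quick Python sketch, using made-up IPC figures purely for illustration (not measured values):

```python
# Rough model of the trade-off: single-thread throughput ~= IPC x clock.
# The IPC values below are illustrative guesses, not measured numbers.

def throughput(ipc: float, clock_ghz: float) -> float:
    """Relative single-thread performance."""
    return ipc * clock_ghz

# A chip with ~80% of a rival's IPC needs ~25% more clock to match it.
baseline = throughput(ipc=1.00, clock_ghz=3.5)         # 3.5
deep_pipeline = throughput(ipc=0.80, clock_ghz=4.375)  # 3.5 -- parity via clocks

print(baseline, deep_pipeline)
```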

As before, if Intel really were 23/29/35/55% faster (some of the nonsense I have heard), then AMD would have stopped CPU manufacturing altogether, and conversely Intel would not be pushing so hard. They have seen the APU's evolution as a massive threat, particularly given how well it has been received in Far Eastern markets. Intel needs 75% market share and will achieve much of that through benchmarketing; it's the easiest way of procuring a market.
 


Intel's solution to everything: throw cash ... errr, cache at it. If throwing cache really were the solution, Radeon and Kepler would be in a different zip code altogether.


 

So essentially, despite all its delays, BD still wasn't ready. Now, with a bit more time, PD is what it was supposed to be.

PD: BD as if it had lived up to the hype, now without the hype!
 
Don't really know what people are talking about when they say that BD runs hot, unless they mean on the stock HSF or when heavily overclocked. My 8120 has never broken 28°C, and that was probably at a 30-35°C ambient. Right now (about 20°C ambient) it's at 12°C under a light load.
 

Oh, the 8150 WILL run hot if you OC past 4.5 GHz with added voltage, etc. But kept within its specs, I agree it is not hot considering the cores/modules. BTW, I have mine cranked to 4.5 GHz (21 x 215 at 1.4 V) with very good cooling (an H100) and I'm fine no matter what I throw at it.
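For anyone sanity-checking that overclock: the effective core clock is just the CPU multiplier times the reference clock, so 21 x 215 MHz lands at roughly 4.5 GHz. A trivial sketch:

```python
# Effective core clock = multiplier x reference (base) clock.
multiplier = 21
reference_clock_mhz = 215

core_clock_mhz = multiplier * reference_clock_mhz
print(f"{core_clock_mhz} MHz (~{core_clock_mhz / 1000:.1f} GHz)")  # 4515 MHz (~4.5 GHz)
```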
 


The intended highest-end Zambezi part was to be 2 billion transistors, running stock clocks of 4-4.2 GHz with a stable 4.5 GHz boost, while having vast overclocking headroom. At those high clocks, the IPC lost to the deep pipeline would have been compensated for, and it would easily have beaten Intel's chips. Some would say that's only because it's clocked higher, but those would still have been stock clocks, so the higher-clocks objection is rather irrelevant.

The reason this never happened is simple: bad management and delays in Global Foundries' 32nm process. The result was plan B, and that was rushed; the highest-end Zambezis have only 1.2 billion of the 2 billion transistors intended. Fast forward: new management devoted to the Vision, a divorce from Global Foundries for a mature TSMC process, and a changed front end on the SR chips could very well make SR the big release all AMD users are waiting for. The problem with that is it's likely going to be an expensive chip.

 
So...

PD will be a slight upgrade and will be around the same power and temps as what I see now with my PhII, but I'll have 4 extra "cores" and a bigger epeen. That sounds good, haha.

Any confirmed release date? I'll go traveling to Costa Mesa next week, so I'd like to know if I'll see the release in that time, hahaha.

Cheers!
 


At 4 GHz my FX-8150 soundly trounces my 1100T at around 3.8 GHz and churns out better performance than the Intel quads. Considering it was supposed to be clocked at 4 GHz and up already, I guess the fair benches would be at the intended speeds. I am still pissed about the skimped transistor count; 1.2 billion is far off from 2 billion. It should never have been released until they perfected the process. At 4.2 GHz this chip is hot.


 

In terms of gaming performance, the 7970 and GTX 680 trade blows. The 7970 cannot significantly outperform the GTX 680, not to mention the GTX 680 uses less power to do it.
Kepler, or rather the Kepler GPUs available in Nvidia's mainstream gaming cards, are indeed limited; they were designed that way. GCN is nothing to dismiss either; GCN is the reason AMD is so close to catching up to Nvidia, or at least it looks like that from the outside. When Nvidia noticed how good GK104 was (in gaming and power use), they decided to market it as their top-of-the-line GPU. Nvidia could have heavily undercut AMD if they had decided to market the GTX 680 as an upper-mid-range GPU; thanks to poor yields and some good decision-making, they ended up selling it as a high-end GPU. This favored AMD too... but Nvidia gained much more.
 

Well, lower power per core. I assume you mean around the same power going from your 4 cores to PD's 8, because the PD quad at launch is going to be 95 W. Yes, that is what it seems like. I haven't used any high-end Intel-based systems, so I can't really compare, but multitasking is seriously smooth as butter on my system. Someone with experience could probably tell me that on any good system it is, but that's okay.

Date seems to be Oct. 17 based on the rumor we saw not too long ago; nothing confirmed yet, I believe.
 
More transistors, faster clocks, more game partners: that is all they had. As GCN evolved, GK104 started to look more like a plastered-over wall. Power consumption at load is like 10 W better, and at idle they don't come close to ZeroCore Power technology. The problem Nvidia has is that PhysX is dying while AMD are OpenCL-heavy; the Havok and Unreal engines favour OpenCL, and in Dirt Showdown the 7970 is on a completely different level to the 680. I guess you can say it is optimized for AMD, but Intel/Nvidia have been pedalling that bike for years.

With AMD taking on plenty of partners now, that means fewer for Nvidia and fewer people putting money in their pockets. This round they have been outmanoeuvred, so much so that they made the call to Intel about sharing fab process, and that's not a good place to be: basically being milked to nothing.
 

http://www.tomshardware.com/reviews/radeon-hd-7970-ghz-edition-review-benchmark,3232.html
Trade blows, yes. The 7970 GHz Edition is faster most of the time, though. The 680 using less power doesn't make it faster than the 7970 GHz Edition; I don't think anyone was saying the 580 was slower than the 6970 because it used more power.
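To spell out the distinction being made: raw frame rate and frames per watt rank cards differently, and only the first decides which is "faster". A minimal sketch with hypothetical numbers (not benchmark results):

```python
# Raw performance vs. efficiency are separate rankings.
# The fps and wattage figures below are hypothetical, not measurements.

cards = {
    "card_a": {"avg_fps": 95, "board_power_w": 250},
    "card_b": {"avg_fps": 90, "board_power_w": 195},
}

for name, c in cards.items():
    perf_per_watt = c["avg_fps"] / c["board_power_w"]
    print(f'{name}: {c["avg_fps"]} fps, {perf_per_watt:.3f} fps/W')

# card_a is the faster card; card_b is the more efficient one.
```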

I say that "the GK104 is mid-range, AMD seriously blew it" nonsense is nothing but speculation, because it would be one of the worst business practices I've ever heard of if it were true. I think Nvidia fans might be a tad bit upset they lost their performance crown, and they are losing the price war.
 
Overclocked temps are irrelevant when we talk about factory chips and the way they were developed. Also good to see AMD's GPU division putting up a good fight. If they released good drivers within a week of a game's launch, the tide would turn in their favor for sure. Often people gain 5-10 FPS in games by updating their drivers a couple of months after the game's release; early drivers are bad.
 



Tad bit upset? LOL More like suicidal
 

Load power is "like 10 watts better"? Really? Check the link viridiancrystal posted. :)
Intel, AMD, and Nvidia all tout performance from optimized games; all of them have been doing it for years. My point was about how Nvidia is utilizing Kepler to make money and how AMD is competing with the new GPUs on price. Where did this PhysX-vs-OpenCL, Nvidia-calling-Intel-for-help stuff come from?... nvm, don't answer that.

Whoa... when did I say the 680 is faster because it uses less power? All I said was AMD is competing on price because they realized how powerful the Kepler arch is in terms of raw gaming performance. I'll add "Nvidia's usual douchery with optimization, drivers, PR, ISV strong-arming etc. makes Kepler even more formidable" to that.
I did say that the 7970 (you're right, I was talking about the GHz Edition) does not significantly outperform the GTX 680, that the GTX 680 uses less power, and that they trade blows. Either GPU is very fast... and that wasn't even my point... although I tend to like hardware that performs equally or better using less power... even if it's from Nvidia, lol.
Anyway, about the "I say that "the GK104 is mid-range, AMD seriously blew it" nonsense is nothing but speculation, because it is probably one of the worst business practices I've ever heard of, if it is true" bit: AMD didn't "seriously blow it"; IMO, offering their highest-end GPU to mainstream users makes them more honest than Nvidia. And GK104 wasn't mid-range, it was an upper-mid-range ASIC... or at least it was supposed to be... until Nvidia tested it in their labs. Whether it's bad business practice... well, both Nvidia and AMD are corporations, so I'd say it's typical of Nvidia.
Nvidia can use GK104 for workstation gfx cards and use Tesla (Kepler) cards to make up for GK104's shortcomings in double precision (IIRC). I think they're doing exactly that with the newer Quadros and their "Maximus" technology. That way, they can b.s. pro-card buyers into buying two cards instead of one, thus making moar moniez. Meanwhile, AMD's poor driver support bites them in the rear as GCN workstation cards struggle to compete with Fermi workstation cards despite having very good if not great hardware.
 
We will see if the Abu Dhabi money helps out, but basically AMD need a better process, whether TSMC or GloFo. Winds of change is probably right, but AMD are behind on their project. BD was not the intended 2-billion-transistor 4 GHz juggernaut, but it's step 1 of 4, so I guess it's an evolution.
 

:lol:

True, true. I think that it is fair to say the GTX 600 series and Radeon HD 7000 series are very good competition for each other, whatever the case with drivers or GKxxx speculation.
 



Really? Because my OC'd 1100T beat an OC'd 8150 in Cinebench 11.5 multi-core rendering and single-core IPC.

What benches did you use and what scores did you get?
 