AMD Piledriver rumours ... and expert conjecture

We have had several requests for a sticky on AMD's yet-to-be-released Piledriver architecture ... so here it is.

I want to make a few things clear though.

Post questions or information relevant to the topic, or your post will be deleted.

Post negative personal comments about another user ... and they will be deleted.

Post flame-baiting comments about the blue, red, or green teams and they will be deleted.

Enjoy ...
 
"Nvidia the Way it is Meant to be Paid"


BTW, I still have fond memories of my last Nvidia card.
It was an XFX 7600GS AGP that was an OCing monster,
and if finances weren't a problem I would more than likely have
a dual 460/560 setup.

Just so you know that I am not an AMD fanboy :)
 
I don't know if this is old news; sorry to go off topic.

http://www.nordichardware.com/news/71-graphics/45445-geforce-gtx-680-qgk104q-specifications-leaked.html




Something more on topic:

quote-

This acquisition of shares is the latest in a series of announcements marking significant milestones as GLOBALFOUNDRIES continues to gain strong momentum in the global semiconductor industry: AMD's 32nm processor shipments increased by more than 80 percent from the third quarter to the fourth quarter and now represents a third of AMD's overall processor mix. In fact, GLOBALFOUNDRIES exited 2011 as the only foundry to have shipped in the hundreds of thousands of 32nm High K Metal Gate wafers.

source-
http://www.electroiq.com/semiconductors/2012/03/06/globalfoundries-marks-third-anniversary-by-achieving-full-independence.html
 
Hmm, sounds interesting. My company is increasingly reliant on telecommunication tools, although at the moment we are sort of stuck on WebEx. However, bandwidth is something of a problem right now. My telecommuting setup consists of a VoIP connection to my office phone over the VPN connection from home, and since we already have maybe 5,000 telecommuters it's a strain on the office bandwidth. My ISP is Verizon FiOS and the 15/5 plan is more than sufficient bandwidth, but the office doesn't like any non-work-related stuff like YouTube or music streaming because of the bandwidth.

Also good for PS4, to combat Kinect.
 
"to be honest, I can put Kepler on the back burner and not even jump when they are first released.
I'm going to SLi my N560GTX-Ti HAWKS this weekend or next, move them into my 2500K unit; then move my SLi NGTX460 HAWKS into my AMD unit (965BE).
I might pull the trigger this weekend if I get lucky.
still gotta move to a few SATA3 drives too.
anyways"

I knew you were going to do that
good move
hope it is an easy install :)
 
Trinity = e-FX4 + GPU, but still 65W for the whole chip, and thus approx 35-50W for the CPU.
An FX4 is 95-125W.

That means they managed to fit an e-FX4 derivative + GPU in under 65W,

which means a lot of power saving (in PD), and probably higher clocks.

Yeah, that's great power savings if true. Probably the most significant news on PD in the last 2 months.
The top-end Llano is 100W now @ 3GHz.
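
To put the back-of-the-envelope arithmetic above in one place, here is a minimal sketch; the GPU share of the budget is an assumption for illustration, not a leaked figure:

```python
# Rough TDP-budget arithmetic for the rumoured 65W Trinity part.
# The CPU/GPU split is an assumption for illustration; only the chip-level TDP is rumoured.
TRINITY_TDP_W = 65          # rumoured whole-chip TDP (CPU modules + on-die GPU)
ASSUMED_GPU_SHARE_W = 20    # assumed share left for the GPU (not a published figure)
FX4_TDP_W = 95              # desktop FX-4100-class TDP for comparison (95-125W parts exist)

cpu_budget_w = TRINITY_TDP_W - ASSUMED_GPU_SHARE_W
print(f"Estimated CPU share of Trinity's budget: ~{cpu_budget_w} W")
print(f"Desktop FX4-class TDP for comparison:    {FX4_TDP_W} W")
print(f"Implied saving on the CPU portion:       ~{FX4_TDP_W - cpu_budget_w} W")
```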

 
~IF~ (and that's one hell of a big if) PD has the performance gains / power reduction that's being advertised, then it bodes well for the new mobile APUs. The biggest limiter was sharing the thermal ceiling between the GPU and CPU; with the CPU producing less heat, it should make for a significantly faster APU. That's all a big "IF" and should be taken with a grain of salt.
 
trinity would have skipped a cpu generation with piledriver cores - llano has athlon/phenom cores. from what i've been reading, it seems to be a very good thing. if amd has succeeded in fitting a 95w cpu (using the 4100 as reference) and an 80w gpu (hd 6750 ref.) inside a 100w tdp (5800k) and a 65w tdp (5700) - that would be amazing to say the least.
as for the new 4170, i can't help but think amd's going the opposite way of their earlier claim of efficiency. a dual-module 125w cpu is ridiculous. it's kinda okay for the 8150 and the upcoming 8170 because of quad modules and 8 cores. even more ridiculous is that these cpus don't have anything extra like an igpu or pcie controller or.. 800 million transistors.. i know i lack in-depth knowledge about cpu arch, still i think it's ridiculous.
 
as for the new 4170, i can't help but think amd's going the opposite way of their earlier claim of efficiency. a dual-module 125w cpu is ridiculous. it's kinda okay for the 8150 and the upcoming 8170 because of quad modules and 8 cores. even more ridiculous is that these cpus don't have anything extra like an igpu or pcie controller or.. 800 million transistors.. i know i lack in-depth knowledge about cpu arch, still i think it's ridiculous.

A minor consolation considering the overall performance, but the FX-4170 is the fastest-clocked CPU shipping right now.

4.2GHz / 4.3GHz (Turbo)

They probably had to bump the voltage to get those speeds stable, so that is where the extra TDP comes from.
 
Trinity's CPU cores are based on PD cores, thus all the discussion in the PD thread.

2 things about this:
1. Specs don't lie, by which I mean: AMD cannot just say a product's TDP is 65 and have it be close to 100.

2. If a quad-core Bulldozer FX is faster than Llano, then a higher-clocked PD quad core will be faster, unless AMD found a way to go backwards on the same arch.

Whatever the case, AMD has found a way to save power, which looks good for a PD improvement.

Actually they can. Unless they changed again, AMD tends to use ADP, Average Dissipated Power, vs TDP, Total Dissipated Power.

That means that AMD calculates the average power a user will use while Intel rates the maximum power a user will use.
 
Actually they can. Unless they changed again, AMD tends to use ADP, Average Dissipated Power, vs TDP, Total Dissipated Power.

That means that AMD calculates the average power a user will use while Intel rates the maximum power a user will use.

And if an OEM vendor designs a thermal solution to dissipate 125W of power and the CPU at stock creates more, then AMD becomes liable for a serious lawsuit from said OEM.
 
A minor consolation considering the overall performance, but the FX-4170 is the fastest-clocked CPU shipping right now.

4.2GHz / 4.3GHz (Turbo)

They probably had to bump the voltage to get those speeds stable, so that is where the extra TDP comes from.

I know my 4100 does very well undervolted @ 3.9GHz, but anything past that and the required power jumps a lot.
 
Actually they can. Unless they changed again, AMD tends to use ADP, Average Dissipated Power, vs TDP, Total Dissipated Power.

That means that AMD calculates the average power a user will use while Intel rates the maximum power a user will use.

According to the leaks, the A10-5800K's TDP is 100W and the 5700's is 65W. Somehow I doubt there is a 35W difference in average power use between them.

I'm 100% positive that one of the two numbers is way off, and it's most likely the 65W number. Looks like more marketing shenanigans.
 
Call BS on JS again.

http://en.wikipedia.org/wiki/CPU_power_dissipation

Datasheets normally contain the thermal design power (TDP), which is the maximum amount of power the cooling system in a computer is required to dissipate. Both Intel and Advanced Micro Devices (AMD) have defined TDP as the maximum power consumption for thermally significant periods running worst-case non-synthetic workloads. Thus, TDP is not the actual maximum power of the processor.

Processor manufacturers usually release two power consumption numbers for a CPU:

- typical thermal power, which is measured under normal load (for instance, AMD's Average CPU Power)
- maximum thermal power, which is measured under a worst-case load

From AMD's own technical white paper

http://support.amd.com/us/Processor_TechDocs/43374.pdf

TDP. Thermal Design Power. The thermal design power is the maximum power a processor can
draw for a thermally significant period while running commercially useful software. The
constraining conditions for TDP are specified in the notes in the thermal and power tables.

From Intel's technical white paper

http://www.intel.com/Assets/en_US/PDF/datasheet/313079.pdf

Thermal Design Power – Processor thermal solutions should be designed to meet
this target. It is the highest expected sustainable power while running known
power intensive real applications. TDP is not the maximum power that the
processor can dissipate.

Both Intel and AMD use TDP to define how much cooling is required for their CPUs. No secret agenda, no lies, no shenanigans, no "Intel is honest AMD is a dirty ----".
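
Purely as an illustration (not from either white paper), the gap between a "typical" and a "worst-case" rating comes down to averaging versus sizing for the peak of measured power draw; the samples below are made up:

```python
# Illustrative only: made-up per-second package power samples (watts) for one CPU
# under a mixed workload, to show why "typical" and "worst-case" ratings differ.
samples_w = [42, 55, 61, 95, 118, 77, 64, 50, 101, 88]

typical_w = sum(samples_w) / len(samples_w)  # what an average/"typical power" style number reflects
worst_case_w = max(samples_w)                # crude stand-in for the worst case a cooler must handle

print(f"typical (average) power: {typical_w:.0f} W")
print(f"worst-case sample:       {worst_case_w} W")
```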

After that last statement you made, I must now ask how much you're getting paid by Intel / Intel representatives. There is no amount of stretching or mental gymnastics that can justify blatantly misrepresenting the truth like that. You said you worked at a store, and I'm beginning to think you're a salesman.
 
To further clear up the crap JS is spreading.

http://en.wikipedia.org/wiki/Average_CPU_Power

ACP is a term used by AMD to calculate average daily power for server farms.

AMD claims the ACP rating includes the power consumption when running several benchmarks, like TPC-C, SPECcpu2006, SPECjbb2005 and the STREAM benchmark (memory bandwidth), which AMD says is a better method of measuring power consumption for data centers and server-intensive workload environments. AMD has said that the ACP and TDP values of the processors will co-exist, and do not replace one another. All server products will see two power figures starting from the codenamed Barcelona server processor onwards.

ACP is for server CPUs in data centers so that data-center owners / product vendors can properly plan cooling requirements. With modern CPUs changing their clocks based on load and cooling being expensive, you will over-provision if you try to calculate your cooling requirements based on 24/7 max usage. We don't actually calculate cooling requirements ourselves; we have the vendors provide us with the cooling requirements for their offerings. It has absolutely ZERO bearing on desktop / mobile CPUs. When AMD says "TDP" they mean exactly that: "TDP" as defined by their engineers, not JS's definition.
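
To make the over-provisioning point concrete, here is a minimal sketch with assumed numbers (a hypothetical 200-node dual-socket room; none of these figures come from AMD or this thread):

```python
# Hypothetical server room: the cooling you provision depends on which rating you size against.
# All figures below are assumptions for illustration, not numbers from the thread.
NODES = 200            # assumed number of servers
SOCKETS_PER_NODE = 2   # assumed dual-socket boards
TDP_W = 115            # assumed per-socket worst-case rating
ACP_W = 75             # assumed per-socket average (ACP-style) rating

sized_for_tdp_kw = NODES * SOCKETS_PER_NODE * TDP_W / 1000
sized_for_acp_kw = NODES * SOCKETS_PER_NODE * ACP_W / 1000

print(f"Cooling sized against TDP: {sized_for_tdp_kw:.1f} kW")
print(f"Cooling sized against ACP: {sized_for_acp_kw:.1f} kW")
print(f"Gap (potential over-provisioning): {sized_for_tdp_kw - sized_for_acp_kw:.1f} kW")
```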
 
Bravo.

Glad we could clear that up, lol.

Also, I'll just leave this here:
http://hothardware.com/Reviews/AMD-Fusion-A83500M-ASeries-Llano-APU-Review/?page=13

and also:
http://www.tomshardware.com/reviews/a8-3500m-llano-apu,2959-22.html

[chart: power consumption during 3DMark]

Both are 35W mobile processors.
 
I know my 4100 does very well undervolted @ 3.9GHz, but anything past that and the required power jumps a lot.


That seems to be the clock threshold for the current 32nm process and the FX design.

Power ramps up quickly between 4.0 and 4.5GHz. After 4.5 it ramps even faster, and by 5.0 it's through the roof (double the power draw).

Intel charts I've seen show something similar, but not quite as exaggerated by 5GHz.

It will be interesting to see how Trinity's clock-mesh power reduction helps OC performance.
Likewise for Ivy and the tri-gate advantage.
 
Serious question:
with the difference in power consumption shown, does it compensate for the difference in CPU performance?
Doesn't the Intel CPU crunch harder than the AMD CPU, and if so, by how much? Or does the AMD CPU also crunch harder?
Not speaking of the graphics part, but the actual CPU.

The AMD's 35W TDP is shared between the four Phenom II cores and the 6620G GPU. As you can see in the charts, it's really a 20~25W CPU, but they must leave room in the specification for the GPU or face potential lawsuits from OEMs and customers. This is also the reason the mobile APUs can OC like crazy. If it gets too hot it will clock itself down to prevent burn-out; otherwise its performance characteristics are entirely in your hands.

Also remember, it's just a 32nm Phenom II x4 @ 1.5GHz. My 3530MX performs much better across the board; you can pretty much force it to use 2.2GHz as its base clock (P0) with 2.8GHz as its boosted state (B0). Then you can tweak exactly when and under what conditions it'll boost. I've been successful at getting it to run @ 2.8 on single-threaded applications, but in its stock state it won't boost often. The easiest way I know to force a boost is to set core affinity to stop Windows from moving the thread around: keep it on one core and it'll go to 2.8 while the other three drop to the lowest P-state (600 to 800MHz). I know some people who use the mobile dGPU (6750 / 6770) and can get it to boost to 3.0GHz.
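
If you'd rather script that affinity trick than click through Task Manager every time, here is a minimal sketch using the third-party psutil package; the process name is just a placeholder:

```python
# Pin a running process to a single core so the scheduler stops bouncing it around,
# which makes it more likely to sit in its boosted P-state.
# Requires the third-party psutil package; the process name below is an example.
import psutil

TARGET_NAME = "game.exe"   # hypothetical single-threaded application

for proc in psutil.process_iter(["name"]):
    if proc.info["name"] == TARGET_NAME:
        proc.cpu_affinity([0])   # restrict the process to core 0 only
        print(f"Pinned PID {proc.pid} to core 0; affinity is now {proc.cpu_affinity()}")
```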

When you put that all together you realize the i5 is not only a superior uArch (SB vs K10.5) but also has more thermal headroom to flex its muscles. This is one area where having a 5000-series (Redwood) integrated GPU limits your performance (thermal headroom).
 
That seems to be the clock threshold for the current 32nm process and the FX design.

Power ramps up quickly between 4.0 and 4.5GHz. After 4.5 it ramps even faster, and by 5.0 it's through the roof (double the power draw).

Intel charts I've seen show something similar, but not quite as exaggerated by 5GHz.

It will be interesting to see how Trinity's clock-mesh power reduction helps OC performance.
Likewise for Ivy and the tri-gate advantage.

It's physics kicking in. Dynamic power scales with frequency and roughly with the square of voltage, and leakage gets worse as the silicon heats up. The more juice you put down the wires, the hotter the chip gets, and the harder it becomes to push even more through without it turning into heat. This is why CPUs are largely limited by thermal conditions and why moving to smaller nodes is so important for performance scaling.
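
A rough way to see why the last GHz costs so much: dynamic power scales roughly as C x V^2 x f, and higher clocks also need more voltage. The clock/voltage pairs below are assumptions for illustration, not measured FX figures:

```python
# Rough dynamic-power scaling: P ~ C * V^2 * f (the capacitance term cancels in the ratio).
# The clock/voltage pairs are assumptions for illustration, not measured FX figures,
# and leakage (which also grows with temperature) would add on top of this.
points = [
    (4.0, 1.25),   # (clock in GHz, assumed core voltage)
    (4.5, 1.40),
    (5.0, 1.55),
]

base_f, base_v = points[0]
for f, v in points:
    rel = (f / base_f) * (v / base_v) ** 2
    print(f"{f:.1f} GHz @ {v:.2f} V -> ~{rel:.2f}x the power of {base_f:.1f} GHz @ {base_v:.2f} V")
```

Under those assumed voltages the jump from 4.0 to 5.0GHz comes out to roughly double the power, which lines up with the "through the roof by 5.0" observation above.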
 
I'm wondering now... Why do people tend to say "hey, nVidia can use 300W TDP worth of heat and I'll just feed it, cause hey, we're enthusiasts", but when it's AMD using more power it's "AW GAWD! IT'S HORRIBLE! KILL IT!"?

I know there are performance differences and all, but it's still funny to say the least, lol.

Cheers!
 
To further clear up the crap JS is spreading.

http://en.wikipedia.org/wiki/Average_CPU_Power

ACP is a term used by AMD to calculate average daily power for server farms.

AMD claims the ACP rating includes the power consumption when running several benchmarks, like TPC-C, SPECcpu2006, SPECjbb2005 and the STREAM benchmark (memory bandwidth), which AMD says is a better method of measuring power consumption for data centers and server-intensive workload environments. AMD has said that the ACP and TDP values of the processors will co-exist, and do not replace one another. All server products will see two power figures starting from the codenamed Barcelona server processor onwards.

ACP is for server CPUs in data centers so that data-center owners / product vendors can properly plan cooling requirements. With modern CPUs changing their clocks based on load and cooling being expensive, you will over-provision if you try to calculate your cooling requirements based on 24/7 max usage. We don't actually calculate cooling requirements ourselves; we have the vendors provide us with the cooling requirements for their offerings. It has absolutely ZERO bearing on desktop / mobile CPUs. When AMD says "TDP" they mean exactly that: "TDP" as defined by their engineers, not JS's definition.

No need for the anger there, man. I was wrong. Forgot it's their servers that do that.

That doesn't mean they will never switch, since average desktop users will never hit the max TDP consistently.

And trust me, after working on as many systems as I have, I can tell you the OEMs never provide more cooling than the average user will need unless it's a "gaming" PC. Even then they still have some pretty bad cooling on those too.

And Yuka, I think nVidia's power usage is horrible too. It just shows that the arch is inefficient, much like NetBurst, Barcelona or Bulldozer were/are. Of course I don't think it will change with Kepler. nVidia tends to use more power than ATI/AMD does, which is another reason I like to go with Radeon. No need to spike my power bills.
 
I'm wondering now... Why do people tend to say "hey, nVidia can use 300W TDP worth of heat and I'll just feed it, cause hey, we're enthusiasts", but when it's AMD using more power it's "AW GAWD! IT'S HORRIBLE! KILL IT!"?

I know there are performance differences and all, but it's still funny to say the least, lol.

Cheers!


People are getting more price/power conscious these days, and there is a range of enthusiasts, from budget enthusiasts to got-to-have-the-fastest-rig-possible enthusiasts.

Computing has reached a point where a 300-watt anything is probably overkill.
Does running BF3 @ 200fps make the game any more fun?

I like seeing the performance increase a $750 rig gets from year to year. Also, there are more tech toys to balance the budget across now.
 