AMD's AM4 Socket Comes To Fore; HP and Lenovo Shipping 7th Gen Bristol Ridge APUs

Page 2 - Seeking answers? Join the Tom's Hardware community: where nearly two million members share solutions and discuss the latest tech.
Status
Not open for further replies.

srmojuze

Commendable
Jul 28, 2016


Good point. Under PCMark 8 they say the A12 has performance equivalent to a 65 W i5-6500, but then they claim it has "32% better efficiency" than a 91 W i5-6600K, which is really apples and oranges.

I'm not saying any one company is perfect but it's getting somewhat annoying with this graph shadow dancing.

Someone in the Wccftech comments (a "vibrant" area, no doubt) has also identified either a confusing bit, a typo, or something more nefarious in the AMD slide endnotes relating to the switcheroo between the i5-6500 and i5-6600K.

In any case, I shan't comment more as I don't want to get into AMD vs Intel, it's just that it's important to note the above when looking at graphs.

Cheers.
 

Sam Bittermann

Honorable
Aug 15, 2013
I really appreciate that Tom's last paragraph asks the question no other website has bothered to: when? AMD is great at pretty slides and magical numbers that don't quite add up to reality in actual reviews.
 

TJ Hooker

Titan
Ambassador

Yes, power consumption and heat production are essentially 1 to 1. As @bit_user mentioned, there may be small amounts of power dissipated in other forms (e.g. radiation), but virtually all of it is converted to heat. Different materials/resistance may change how much power is being consumed, but that doesn't change the fact that power consumption = heat produced. The only exceptions to this in electronics I can think of off the top of my head would be anything with an antenna, where significant amounts of energy could be radiated, or if you're charging a battery.


Instantaneous power consumption/heat and instantaneous heat dissipation aren't always equal. If they aren't, the CPU (and heatsink, after a delay) will change temperature. However, let's assume that a CPU under load has reached an approximate steady state, i.e. power consumption is fairly steady and the temperature has risen to some value and then leveled off. In order for the temperature to not rise any further, 100% of the heat being generated must be dissipated. Given that an appropriate cooler ought to be able to keep a CPU from overheating while under heavy, sustained load, it follows that the TDP should be >= max power consumption. However, a cooler capable of dissipating more heat typically costs more, so the TDP shouldn't be (much) greater than power consumption, in order to specify the lowest cost cooler that's sufficient for cooling the CPU.
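The steady-state reasoning above can be sketched in a few lines of Python. The one-resistance thermal model and all the numbers here are assumptions for illustration, not measured data:

```python
# Toy steady-state model (all numbers assumed for illustration).
# At steady state, heat out == heat in, and the CPU settles at:
#   T_cpu = T_ambient + P * R_theta
# where R_theta is the cooler's total thermal resistance (deg C per watt).

def steady_state_temp(ambient_c, power_w, r_theta):
    return ambient_c + power_w * r_theta

print(steady_state_temp(25.0, 65.0, 0.5))   # 57.5 deg C, well under throttle

def max_sustained_power(t_max_c, ambient_c, r_theta):
    # Largest sustained power this cooler can dissipate without the CPU
    # exceeding t_max_c -- effectively what a TDP rating has to cover.
    return (t_max_c - ambient_c) / r_theta

print(max_sustained_power(95.0, 25.0, 0.5))  # 140.0 W
```

The second function is the "TDP >= max power consumption" sizing argument in reverse: pick the cheapest cooler whose dissipation limit still exceeds the chip's sustained draw.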
 

80-watt Hamster

Honorable
Oct 9, 2014


The point is that for a specific workload, the A12-9800 is able to match performance of an i5-6500 within the same thermal envelope. You're correct that both chips may not have consumed the same amount of power. However...

TDP and power consumption are related, just not equivalent. Everything comes back to energy. A watt is energy over time (one joule per second), and in an integrated circuit that energy is all dedicated to moving electrons around. Since electrons have negligible mass, an insignificant amount of it is converted to kinetic energy. Once the system is powered up and the capacitors are full, we're not really storing much anywhere, so it's not becoming potential energy either. That leaves thermal energy and light. If your IC is producing light, something is very wrong, which leaves thermal energy, measured in joules. Joules over time brings us back to watts. If both chips were given a full load and share the same thermal envelope (TDP), we can guess that their power consumption was similar.

And if you're not dissipating 100% of the heat from the CPU, I'd like to know what you're doing with the rest of it.
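The joules/watts bookkeeping above can be sanity-checked with a trivial sketch (the figures are assumptions, not measurements):

```python
# Trivial sanity check on the joules/watts relationship (numbers assumed).
power_w = 65.0        # sustained package power draw
duration_s = 600      # ten minutes under load
heat_j = power_w * duration_s   # essentially all of it ends up as heat
print(heat_j)         # 39000.0 joules dumped into the cooler
```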
 


AM3+ CPUs work in AM3 sockets and AM3 CPUs work in AM3+ sockets. This has been the case with practically all previous AM sockets (and more) for over a decade. AMD even let us use AM3 CPUs in AM2+ and some updated AM2 socket boards, because AM3 CPUs had both the new DDR3 memory controller and a legacy DDR2 memory controller. They might not do that again by giving AM4 CPUs a DDR3 controller, but that shouldn't be surprising based on how many other things in the chipset needed updating this time, especially with APUs that need the faster memory for better graphics performance. It would be surprising if AMD made AM4+ and AM4 incompatible.
 
Heat dissipated is basically equal to power consumed. This is a general fact of electricity because "using" electricity transforms it into another type of energy, typically heat. If I put 100W of electricity into a CPU, I get about 100W of heat out of it. However, TDP is only loosely related to both of those. A CPU's power consumption (and thus heat dissipation) can vary widely between workloads and in most situations, will never reach the actual rated TDP of the CPU. Then again, some situations (such as heavy floating point workloads on newer Intel CPUs) can push some CPUs way past their TDP in power consumption and heat generation.

Other things to keep in mind include that different companies calculate TDP differently (or even the same company with different products) and at best, it is only ever a very approximate number. Basically, TDP is technically almost useless for comparing anything realistic except for very large differences (such as 95W and 220W). If you want to compare things like power consumption between two CPUs, then that needs to be measured if you want any accuracy.
 

zozzlhandler

Distinguished
Dec 14, 2006
We desperately need AMD to succeed, unless you enjoy paying $1000 for an Intel enthusiast CPU. It doesn't have to be the fastest, but competitive and good value.
Looking good so far, but as always the devil is in the details.
And remember, nothing succeeds like a beakless parrot...
 

bit_user

Polypheme
Ambassador
So, here's my take on this, in case it helps.

A CPU that's powered on will generate heat. At a given load, it will do this at approximately a constant rate. Once the heatsink has reached a steady-state temperature, it will give off the same amount of heat that it receives (pretty much steady state, by definition). The important point is what's the thermal gradient between its fins and the base. An inefficient heatsink will have a high thermal gradient, meaning it won't reach steady state until higher temperatures, which might be high enough to overheat the CPU. This is why you need a more efficient heatsink, for higher-power CPUs.

BTW, Conservation of Energy dictates that all electrical power consumed by a device will be transformed into some other form of energy. Since heat is higher-entropy, that's where it tends to go.
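The base-to-fin gradient point can be illustrated with a toy two-resistance model; every value below is an assumption chosen to make the arithmetic clean:

```python
# Toy two-resistance model of the heatsink gradient argument
# (all values are assumptions, not measured data).
def cpu_temp(ambient_c, power_w, r_base_to_fins, r_fins_to_air):
    # Steady state: the die sits above ambient by power times the
    # total thermal resistance (deg C per watt) from die to air.
    return ambient_c + power_w * (r_base_to_fins + r_fins_to_air)

# Efficient heatsink: small base-to-fin gradient
print(cpu_temp(25.0, 95.0, 0.25, 0.25))  # 72.5 deg C, fine
# Inefficient heatsink: large base-to-fin gradient
print(cpu_temp(25.0, 95.0, 0.75, 0.25))  # 120.0 deg C, would overheat
```

Same power in, same fins, but the worse base resistance pushes the steady-state die temperature past a typical throttle point, which is exactly why higher-power CPUs need more efficient heatsinks.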
 

darth_adversor

Distinguished
Jan 16, 2012
The idea of a competitive, relevant, resurgent AMD (at least on the CPU side of things) is exciting. My i5-2500K proved to be one of my best purchases ever, but if Zen pans out, I'll probably go back to AMD.
 
POWER and HEAT. The power drawn from the wall is IDENTICAL to the heat produced. That's the law of conservation of energy. (On a monitor, a small amount is turned into light.)

TDP. This is simply the worst-case design point; however, several parts of different wattages are bundled into one TDP class for ease of design. The weakest one might have a 95 W TDP yet only produce 50 W.
 

WacWac

Commendable
Sep 9, 2016
If they don't release an APU with RX 460-class graphics capable of CrossFire with two more RX 460s over PCIe 3.0 x8, and an AM4 platform capable of 2x PCIe 3.0 x16 with DDR4 and NVMe SSDs, ... they are USELESS and working for nothing at AMD's laboratories.
 
An RX 460-type APU would add 6 CUs and roughly 22% to the die size of the AM4 Bristol Ridge - that ain't happening.

PCIe 3.0 adds roughly a 4% increase in some games at 1080p - you can make up that difference twice over by simply buying faster RAM.

Other than that, you swung and you missed.

 

bit_user

Polypheme
Ambassador
When, with what graphics hardware, what CPU, and what API?

I remember some benchmarks to that effect, back when PCIe 3.0 first launched (like 4 years ago), but GFX cards are now a few times faster and we have a whole new generation of APIs.

If you don't have current data, I wouldn't keep trotting out that 4% figure. If you do, please link.
 

TJ Hooker

Titan
Ambassador

https://www.techpowerup.com/reviews/AMD/R9_Fury_X_PCI-Express_Scaling/18.html
https://www.techpowerup.com/reviews/NVIDIA/GTX_980_PCI-Express_Scaling/21.html
http://www.guru3d.com/articles_pages/pci_express_scaling_game_performance_analysis_review,9.html
http://techbuyersguru.com/taking-4k-gaming-challenge-gtx-980-ti-sli?page=3

Now, those links are all 1-1.5 years old, meaning they don't include any DX12/Vulkan games or have the latest, most powerful GPUs. But they all show negligible performance differences between PCIe 3.0 vs 2.0. I doubt things have changed that much since then.
 

bit_user

Polypheme
Ambassador
Thanks for the links. It's worth noting that the fastest cards tested were Fury X and one test looked at a 980 Ti @ 4k.

Since then, CPUs have gone from Haswell w/ DDR3 to Skylake w/ DDR4. And GPUs have gone from 28 nm to Nvidia's 16 nm. When you increase the speeds of the devices on both sides of the same link, it tends to become a bottleneck. It's not going to be a night and day difference from those benchmarks, but should show a greater impact.

The bigger concern is VR, where double/triple buffering cannot be used to hide the lower throughput and higher latency of a slower interconnect.

BTW, I view Nvidia's new NV Link as a testament to the bottleneck of PCIe 3.0 for GPU compute applications.
 

Jorge Nascimento

Reputable
Mar 18, 2014
The base memory speed for all non-gaming/performance motherboards on Skylake is 2133 MHz (H110/B150/H170). The base memory speed for AM4 will be 2400 MHz; I think this explains everything!
Of course the top-tier AM4 mobos for Zen Summit Ridge 4C/8T and 8C/16T will support the overclocked speeds, just like Intel's Z-series boards. But what makes AM4 better on low-budget mobos is that the base memory speed is 2400, making it faster than the 2133 of Intel Skylake.
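The 2133 vs. 2400 comparison works out to about a 12.5% difference in theoretical peak bandwidth; a quick sketch (dual channel and 64-bit channels assumed, real-world throughput is lower):

```python
# Rough theoretical peak bandwidth: transfer rate (MT/s) times 8 bytes
# per 64-bit channel, times the channel count. Dual channel assumed.
def peak_bw_gb_s(mt_per_s, channels=2):
    return mt_per_s * 8 * channels / 1000.0

print(peak_bw_gb_s(2133))  # ~34.1 GB/s
print(peak_bw_gb_s(2400))  # ~38.4 GB/s, about 12.5% more
```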
 