News AMD Warns of $1.1 Billion Q3 Shortfall, Cites PC Market and Supply Chain

I'd question whether you were even 10% sure on VRMs. CPU temps have nothing to do with VRM temps and efficiency. It's the amperage being pulled by the CPU that determines how hard the VRMs must work, and it's up to motherboard manufacturers to design the VRM based on guidelines provided by AMD.

Buildzoid posted a video a week ago showing just how overbuilt X670 motherboard VRMs are. He ran a 7950X flat out with the VRM heatsinks removed and no airflow, and VRM temps topped out in the mid 70s.
View: https://youtu.be/7wRn1bkqXNA
Seems like I got something confused, thank you for the correction.
Great Video.

Please do some research instead of spreading FUD
This is why I initially threw in the disclaimer and another one at the end.
If you consider that intentionally spreading FUD, then you need to dial it down a bit; otherwise we don't need this public forum.
 
It's the amperage being pulled by the CPU that determines how hard the VRMs must work, and it's up to motherboard manufacturers to design the VRM based on guidelines provided by AMD.
Why amperage? Doesn't the VRM, along with the chokes and caps, regulate and stabilize the voltage?
Regulating voltage is what really matters, not amperage, or what am I missing here?
 
The number of people who need a faster CPU or a high-end GPU has to be really limited by now.
Other than professionals who actually save time/money on a CPU/GPU-intensive task, who is really in the market for a new PC at the moment?
There is a limited market of people who spend $5,000 a year on PCs so they can brag about running games at extreme RTX settings when the same games look great at high settings with low RTX.

This is going to affect AMD/Intel/Nvidia profits.

Intel fans needing a significant bump in performance have had good options with Alder Lake and DDR4 boards for a year now.
Most people won't even notice the difference from 13th gen, even if it is theoretically much faster.

AMD fans have had cheap upgrades to Ryzen 3000 and Ryzen 5000 for three years now.
Most people won't notice the difference between a Ryzen 3600 and a 7950X.
There were significant real-world gains from Ryzen 1xxx/2xxx to 3xxx/5xxx, but anyone needing an upgrade probably jumped on a 5600/5700 or a sale on the 5800X.

Anyone who needed a GPU upgrade for 1080p or 1440p gaming has had options for 6 months now.
With FSR2/DLSS2, mid-range cards have been good enough for 4K gaming at high settings.
Anyone who had an AMD 580 / Nvidia 1060/1070 and felt a need to upgrade probably did so once the 6600/6700/3060 Ti/3070 became available at close to MSRP.
 
Why amperage? Doesn't the VRM, along with the chokes and caps, regulate and stabilize the voltage?
Regulating voltage is what really matters, not amperage, or what am I missing here?
While they are voltage regulators, they have to supply that voltage at whatever power the CPU asks for. 1.5 V Vcore at 100 W is much less strain than 1.5 V at 230 W: the first would do fine with a VRM rated around 70 A, while the second would need to handle about 150 A.
But 1.5 V at 230 W at 100 °C is just the same as 1.5 V at 230 W at 70 °C, provided the VRMs are rated for 100 °C; temperature will not affect any other rating, outside of highly theoretical cases.
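The numbers in that comparison are just Watt's law, P = V × I, rearranged; a minimal sketch (VRM conversion losses ignored):

```python
def vrm_current(power_w: float, vcore_v: float) -> float:
    """Current the VRM must deliver, from P = V * I (losses ignored)."""
    return power_w / vcore_v

light_load = vrm_current(100, 1.5)   # 100 W at 1.5 V -> about 67 A
heavy_load = vrm_current(230, 1.5)   # 230 W at 1.5 V -> about 153 A
print(round(light_load), round(heavy_load))
```

Same voltage, more than double the current, which is why the heavier load needs a beefier VRM.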
 
The number of people who need a faster CPU or a high-end GPU has to be really limited by now.
Other than professionals who actually save time/money on a CPU/GPU-intensive task, who is really in the market for a new PC at the moment?
There is a limited market of people who spend $5,000 a year on PCs so they can brag about running games at extreme RTX settings when the same games look great at high settings with low RTX.

This is going to affect AMD/Intel/Nvidia profits.

Intel fans needing a significant bump in performance have had good options with Alder Lake and DDR4 boards for a year now.
Most people won't even notice the difference from 13th gen, even if it is theoretically much faster.

AMD fans have had cheap upgrades to Ryzen 3000 and Ryzen 5000 for three years now.
Most people won't notice the difference between a Ryzen 3600 and a 7950X.
There were significant real-world gains from Ryzen 1xxx/2xxx to 3xxx/5xxx, but anyone needing an upgrade probably jumped on a 5600/5700 or a sale on the 5800X.

Anyone who needed a GPU upgrade for 1080p or 1440p gaming has had options for 6 months now.
With FSR2/DLSS2, mid-range cards have been good enough for 4K gaming at high settings.
Anyone who had an AMD 580 / Nvidia 1060/1070 and felt a need to upgrade probably did so once the 6600/6700/3060 Ti/3070 became available at close to MSRP.
Yep, exactly. I personally have a Ryzen 1600 and don't really feel an upgrade is a must, since it can handle everything.

It would merely be a present for a forgotten birthday, and I suppose an extra GHz as well as 4 extra threads wouldn't hurt for VMs etc. But even then, you're better off waiting for a sale when upgrading from a 1xxx Ryzen CPU; it's not a big deal, and Christmas/Black Friday is literally around the corner.

Most people have bought either an Nvidia 1000 series or, even better, a 2000 series GPU (which is good enough by itself, but DLSS2 will future-proof it even further).
I don't think anyone who got a good deal on a recent AMD GPU has any interest in buying a new GPU either.

There aren't any notable upcoming games that will require a completely new setup either, so why bother burning money?
If anything, people will just buy better coolers, water cooling, new fans, HDDs/SSDs, and other peripherals.
 
And yeah, I am aware that I could save some money if I went for a last-gen motherboard and CPU, which would be plenty good, particularly for gaming. But around the mid-range, the price difference is usually not that big, and PCIe 5.0 for M.2 SSDs is quite tempting to go for straight away.
I kind of doubt PCIe 5.0 is going to make much of a difference for gaming, or most other desktop usage scenarios, for that matter. At least for current games, any difference in load times is going to be imperceptible over a 4.0 or 3.0 NVMe drive, and even a SATA SSD will tend to perform rather similar. Maybe that will change as some games start to utilize SSD-focused asset loading, but I have doubts that a 5.0 drive would provide any significant gaming performance benefits over a 4.0 one anytime soon, as a game wouldn't sell if it performed poorly on the vast majority of modern gaming systems. And at least initially, I would expect 5.0 drives to carry a large price premium, where you would probably be better off spending the money on another drive with double the capacity instead, or putting it toward some other part of the system that will have a more direct impact on performance.
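To put rough numbers on that, here's a back-of-envelope sketch with assumed sequential throughputs and a hypothetical 20 GB asset read; real game loads are usually CPU/decompression-bound rather than drive-bound, which is exactly why the gap shrinks further in practice:

```python
# Assumed typical *sequential* read throughputs in GB/s (illustrative, not benchmarks).
throughput_gbps = {
    "SATA SSD": 0.55,
    "PCIe 3.0 NVMe": 3.5,
    "PCIe 4.0 NVMe": 7.0,
    "PCIe 5.0 NVMe": 12.0,
}
asset_gb = 20  # hypothetical game's worth of assets

for drive, gbps in throughput_gbps.items():
    # Pure transfer time, ignoring CPU decompression and small random reads
    print(f"{drive}: {asset_gb / gbps:.1f} s")
```

Even in this drive-limited best case, 4.0 to 5.0 shaves only a second or so, while the real bottlenecks sit elsewhere.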

Also, if the CPU runs at max temp at stock already, then where's the headroom for overclocking? I'm sick and tired of chip manufacturers pumping the wattage and temps to the absolute limit.
Overclocking CPUs is generally no longer all that practical. They tend to have so little overclocking headroom that any gains to be had will be imperceptible, and in many cases the processors will perform better in lightly-threaded workloads at stock settings. And that's a good thing. Power and temperature are now intelligently tracked across the processor using numerous sensors, and clock speeds and voltages are managed precisely to get the most out of the hardware while keeping everything within limits deemed safe by the manufacturer.

The manufacturer does not care about the consumer product's lifespan.

As a matter of fact, earlier upgrades/repairs -> higher profits.

You genuinely can't put faith in companies doing the right thing, you just can't.

Just from an e-waste and logical perspective, running below 85 °C will certainly not decrease the processor's lifespan.

Again, take into consideration that other components will also run at much higher temps. It wouldn't be surprising to find VRMs running at 115 °C, which would be just 5 °C shy of the average max temp.

You do you, people, but there's a good reason why people don't recommend going above roughly 85 °C.
You rarely hear about CPUs failing under normal use. They are probably one of the most durable components in a system. And if there were reports of widespread processor failures, obviously that would look bad on the company and affect future sales, so it would be something they would want to avoid.

And any existing temperature limit recommendations are based on older hardware that generally didn't have such fine-tuned tracking and control of temperatures and voltages across the processor.

The number of people who need a faster CPU or a high-end GPU has to be really limited by now.
Other than professionals who actually save time/money on a CPU/GPU-intensive task, who is really in the market for a new PC at the moment?
There is a limited market of people who spend $5,000 a year on PCs so they can brag about running games at extreme RTX settings when the same games look great at high settings with low RTX.

This is going to affect AMD/Intel/Nvidia profits.

Intel fans needing a significant bump in performance have had good options with Alder Lake and DDR4 boards for a year now.
Most people won't even notice the difference from 13th gen, even if it is theoretically much faster.

AMD fans have had cheap upgrades to Ryzen 3000 and Ryzen 5000 for three years now.
Most people won't notice the difference between a Ryzen 3600 and a 7950X.
There were significant real-world gains from Ryzen 1xxx/2xxx to 3xxx/5xxx, but anyone needing an upgrade probably jumped on a 5600/5700 or a sale on the 5800X.

Anyone who needed a GPU upgrade for 1080p or 1440p gaming has had options for 6 months now.
With FSR2/DLSS2, mid-range cards have been good enough for 4K gaming at high settings.
Anyone who had an AMD 580 / Nvidia 1060/1070 and felt a need to upgrade probably did so once the 6600/6700/3060 Ti/3070 became available at close to MSRP.
While I generally agree, the bad graphics card pricing since the beginning of last year has likely delayed many system builds. And while GPU prices have been "better" in recent months, they still aren't exactly "good", especially considering these cards will be replaced by newer, faster models relatively soon. You still can't find a 3060 Ti for the card's intended $400 MSRP. Most 3060 (non-Ti) cards are priced at that level or higher, despite being around 20-25% slower. Likewise, the 3050 is priced roughly where the 3060 was intended to be, despite being around 30% slower than that card. The pricing of these cards is still kind of bad. And anything around the $200 range or below is no faster than cards that cost the same or less several years ago.

There isn't much incentive to upgrade until people can get more performance in a given price range than what they already have. Retailers and manufacturers are reluctant to drop hardware prices more than they have to following last year's crypto-driven shortage, but they are likely to give in eventually, and I think a decent number of people are waiting on either additional price drops or the next generation of hardware before building or upgrading a system.
 
I have to admit that I am not 100% sure about VRMs, but from what I read when looking into overclocking, some boards have good ones, while some cheaper boards have bad ones with few phases.

The higher the wattage (especially, presumably, when raising voltage while overclocking), the higher the VRM temps, but you'd have to research that yourself to get a 100% factually correct answer.
Additionally, the temp readings are not always accurate: a 100 °C VRM temp reading can actually be much higher in reality when measured directly on the component.
As TerryLaze has said, you can't predict VRM temp based on CPU temp. Likewise, you can't predict CPU temp based on watts: a 5 W Raspberry Pi can hit 100 °C in an enclosure with no airflow, while a 10 kW+ Cerebras WSE-2 might run at 60 °C with water cooling.
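One way to see why watts alone don't predict temperature is a crude steady-state model, T = ambient + P × θ, where θ (the total thermal resistance in °C per watt) captures how good the cooling is. The θ values below are made up purely for illustration:

```python
def steady_state_temp(ambient_c: float, power_w: float, theta_c_per_w: float) -> float:
    """Crude steady-state die temp: ambient plus power times total thermal resistance."""
    return ambient_c + power_w * theta_c_per_w

# 5 W board in a sealed enclosure: huge thermal resistance (made-up: 16 °C/W)
small_chip = steady_state_temp(25, 5, 16)        # 25 + 80 = 105 °C
# 1 kW part on a serious water loop: tiny thermal resistance (made-up: 0.035 °C/W)
big_chip = steady_state_temp(25, 1000, 0.035)    # 25 + 35 = 60 °C
print(small_chip, big_chip)
```

Same ambient, 200x the power, yet the big chip runs 45 °C cooler, because θ dominates.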
 
While they are voltage regulators, they have to supply that voltage at whatever power the CPU asks for. 1.5 V Vcore at 100 W is much less strain than 1.5 V at 230 W: the first would do fine with a VRM rated around 70 A, while the second would need to handle about 150 A.
But 1.5 V at 230 W at 100 °C is just the same as 1.5 V at 230 W at 70 °C, provided the VRMs are rated for 100 °C; temperature will not affect any other rating, outside of highly theoretical cases.

Ehh, don't use volts and watts in the same comparison, as watts are derived from volts. The joule is the unit of energy, while a watt is energy per unit time: one joule per second. You get watts by multiplying volts by amps.

These pages lay out the relationship between everything:

http://www.spazztech.net/ohm-s-and-watt-s-laws.html

https://www.rapidtables.com/calc/electric/watt-volt-amp-calculator.html
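The relationships those pages tabulate can be sketched in a few lines; the voltage and resistance values below are arbitrary:

```python
# Ohm's law:  V = I * R
# Watt's law: P = V * I, which also gives P = I^2 * R = V^2 / R
def power_from_vi(v: float, i: float) -> float:
    return v * i

def power_from_ir(i: float, r: float) -> float:
    return i * i * r

def power_from_vr(v: float, r: float) -> float:
    return v * v / r

v, r = 12.0, 4.0
i = v / r  # Ohm's law: 3 A
# All three forms agree: 36 W either way
assert power_from_vi(v, i) == power_from_ir(i, r) == power_from_vr(v, r) == 36.0
```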

As for transistor temperatures, running over 85 °C for prolonged periods of time will definitely shorten lifespans unless the circuit is specifically hardened for extreme environments, which these desktop products aren't. All the newer CPUs attempt to overclock themselves so the manufacturer can "win" performance crowns and extract maximum revenue from the consumer. There is nothing bad about this; just understand it going into the purchase.

For VRMs, the more power drawn, the hotter they get; temperature in all ICs is a direct result of natural ohmic resistance converting electrical energy into thermal energy. If a CPU is "using" 125 W of power, it's actually just converting 125 W of electrical energy into 125 W of thermal energy. This energy is linear with clock speed and active transistor count but quadratic with voltage. Resistance also goes up with temperature, causing per-transistor leakage to go up as well. Put it all together, and "high performance" CPUs/GPUs are hot because they push clock speed as high as possible with large transistor counts, then stick a price tag on that clock speed.

Cutting the clock back by 10-15% would have a massive effect, since it lowers voltage and resistance simultaneously.
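A rough sketch of that claim, using the usual approximation that CMOS dynamic power scales as C·V²·f. The capacitance and voltage figures are made up, and the assumption that a 15% clock cut allows a 10% Vcore cut is illustrative, not measured:

```python
def dynamic_power(c: float, v: float, f: float) -> float:
    """Approximate CMOS dynamic power: P ~ C * V^2 * f (normalized units)."""
    return c * v * v * f

base  = dynamic_power(1.0, 1.30, 5.00)   # stock: 1.30 V at 5.0 GHz (made up)
tuned = dynamic_power(1.0, 1.17, 4.25)   # -15% clock, assumed -10% Vcore

print(f"power saved: {1 - tuned / base:.0%}")   # roughly a third of the power
```

Because voltage enters squared, the voltage cut contributes more of the savings than the clock cut itself.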
 
The manufacturer does not care about the consumer product's lifespan.

As a matter of fact, earlier upgrades/repairs -> higher profits.

You genuinely can't put faith in companies doing the right thing, you just can't.

Just from an e-waste and logical perspective, running below 85 °C will certainly not decrease the processor's lifespan.

Again, take into consideration that other components will also run at much higher temps. It wouldn't be surprising to find VRMs running at 115 °C, which would be just 5 °C shy of the average max temp.

You do you, people, but there's a good reason why people don't recommend going above roughly 85 °C.
I disagree. At the least, from the most cynical standpoint, you can count on manufacturers to make sure an acceptable number of their product will survive through the warranty period. But I believe more than that. There is also a reputation to maintain, and there are plenty that are proud to produce the best possible product without shortcuts.

Some of Intel's problems can be traced back to their decision to dope their interconnects with cobalt because pure copper traces weren't expected to hit their 10-year longevity goal. That decision increased power, added delays, and hurt their business (in the short term) because they weren't willing to compromise. This is despite the fact that they don't offer a 10-year warranty on any of their CPUs.
 
I disagree. At the least, from the most cynical standpoint, you can count on manufacturers to make sure an acceptable number of their product will survive through the warranty period. But I believe more than that. There is also a reputation to maintain, and there are plenty that are proud to produce the best possible product without shortcuts.

Some of Intel's problems can be traced back to their decision to dope their interconnects with cobalt because pure copper traces weren't expected to hit their 10-year longevity goal. That decision increased power, added delays, and hurt their business (in the short term) because they weren't willing to compromise. This is despite the fact that they don't offer a 10-year warranty on any of their CPUs.
Intel has several cut-off points baked into the CPU that nobody can change: there is a throttle temp and a shutdown temp. Back in the day it was pretty common for PCs to shut off when they got too hot, but lately that almost never happens, because the CPU starts to throttle well before the danger point. If 95 or 100 °C is the throttle point, then you can be sure the CPU will survive that for many years, because it will almost never reach the shutdown point.
Page 76 onward, if anybody is interested (volume 1):
https://www.intel.com/content/www/us/en/products/docs/processors/core/core-technical-resources.html

More or less the same should go for AMD's latest CPUs.

Some people still live in the past, when a Tjmax of 65 °C was common... (Core 2)
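That throttle-before-shutdown behavior can be sketched as a toy control loop. The thresholds and the thermal model below are made up for illustration, not Intel's actual values:

```python
THROTTLE_C, SHUTDOWN_C = 95.0, 105.0  # made-up trip points for illustration
AMBIENT_C = 30.0

def step(temp_c: float, freq_ghz: float):
    """One control tick: shed clocks when hot, then relax temp toward equilibrium."""
    tripped = temp_c >= SHUTDOWN_C            # hard power-off point
    if temp_c >= THROTTLE_C:
        freq_ghz = max(0.8, freq_ghz - 0.2)   # throttle down first
    # Crude thermal model: equilibrium temperature rises with clock speed
    temp_c += 0.3 * (AMBIENT_C + 14.0 * freq_ghz - temp_c)
    return temp_c, freq_ghz, tripped

temp, freq = 70.0, 5.0
for _ in range(50):
    temp, freq, dead = step(temp, freq)
    assert not dead  # throttling kicks in well before the shutdown trip
print(f"settled near {temp:.0f} °C at {freq:.1f} GHz")
```

The loop sheds a couple hundred MHz as soon as the throttle point is crossed, so the temperature settles just below it and the shutdown trip is never reached, which is the behavior the post describes.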
 