News The 12VO power standard appears to be gaining steam — new standard will reduce PC cabling and costs

That's why we need a common specification & agreements between MoBo Vendors & Case Manufacturers =D
My point is that the extra space required for the right-angle edge connectors you like will make cases bigger. Enshrining that in a standard will make all cases bigger, heavier, and more expensive. I don't consider that a good thing.

I know, but the flexibility will be worth it once it's developed & deployed to the masses.
Even leaving development costs aside, the added product cost is another issue. Anyway, whether or not you agree with me, it's not going to happen. I'm just trying to explain why. If you like being disappointed, you're free to keep hoping for it.

IMO, the inability of U.3 hosts to support U.2 drives is the fatal flaw, as if the other stuff weren't bad enough. At the very least, you'd need several years of U.3 drives dominating the SSD market, and I see only a couple of drive makers that switched. I doubt the others will come along, now that the E1 and E3 standards have arrived and seem to be dominating the market.

Imagine not having to worry about SATA/SAS/NVMe compatibility and making SAS mainstream for HDDs,
They seem headed in the direction of using NVMe, from what I can see.

and hybrid HDDs that have a nice, reasonably sized chunk of Optane in them, along with multi-actuator HDDs to help make use of those linear R/W speeds. That could be such a beautiful world for us, the end users.
  • Optane is dead and it's not coming back.
  • Hybrid HDDs only made sense for consumers, because datacenter does their own tiered storage via software. However, consumers no longer want HDDs for anything other than NAS/backup, where hybrid provides negligible value.
  • Multi-actuator drives will be too expensive and specialized for consumers. They actually appear to software as 2 independent drives, BTW.

Given U.2 didn't catch on, we don't have to worry about backwards compatibility with a lot of U.2 drives then.
It caught on in the datacenter, just not for consumers.
 
On a tangent, I've been curious as to why they haven't used opportunities like this to increase the amount of power available via PCIe ports to do away with extra cables to cards that draw more than 75 watts. Signal interference? Reduce costs (less pins)?
It makes no sense to force all motherboards to be designed with the worst-of-worst cases with 600W PCIe slots in mind when 80+% of motherboards may never have any expansion cards at all. Keeping bulk power separate from the motherboard allows you to use just about anything with just about anything else using as-necessary wiring to cover the difference.

It would be quite annoying if the motherboard you picked dictated the GPUs you could use.

If a future new PCIe slot spec raises the bar on slot power, 150W is likely as high as it is going to go: enough to power two U.3 slots, moderate power accelerator cards and entry-level GPUs.
 
On a tangent, I've been curious as to why they haven't used opportunities like this to increase the amount of power available via PCIe ports to do away with extra cables to cards that draw more than 75 watts. Signal interference? Reduce costs (less pins)?
Servers and workstations. Every slot has to support 75W, so on something high-end right now that's potentially 525W, which requires additional PCIe power connectors on the motherboard. Even just doubling the power to the slots in that case would be a mess, as you'd need to go from an 8-pin + 6-pin to 5x 8-pin connectors and require a much higher-end power supply in the process.
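To put rough numbers on that, here's a minimal Python sketch of the budget; the ~300W figure for the 24-pin's 12V rails and the 150W/75W ratings for the supplemental connectors are my assumptions for illustration, not quotes from any spec:

```python
import math

# All figures below are illustrative assumptions, not spec values:
# a 7-slot server/workstation board, roughly 300W of 12V deliverable
# through the 24-pin, and 150W per supplemental 8-pin connector.
SLOTS = 7
ATX24_12V_BUDGET_W = 300
EIGHT_PIN_W = 150

def supplemental_needed(watts_per_slot: int) -> tuple[int, int]:
    """Total slot power and the shortfall supplemental connectors must cover."""
    total = SLOTS * watts_per_slot
    shortfall = max(0, total - ATX24_12V_BUDGET_W)
    return total, shortfall

for per_slot in (75, 150):
    total, shortfall = supplemental_needed(per_slot)
    print(f"{per_slot} W/slot: {total} W of slot power, "
          f"{shortfall} W beyond the 24-pin "
          f"(~{math.ceil(shortfall / EIGHT_PIN_W)}x 8-pin worth of connectors)")
```

At 75W per slot that works out to roughly two 8-pin-equivalents of supplemental power (an 8-pin plus a 6-pin), and at 150W per slot it balloons to about five 8-pins, which is the mess described above.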
 
It's still early for me this morning. Could someone point out specifically where the cable reductions are coming from? So, a 24-pin ATX is replaced with a 10-pin. There will still be those needed for SATA drives, and you'll need USB cables to connect to your case. Your fans will still need to be connected to something.

I can see that video cards might lose that annoying cable. But if that is the only "reduction in cables" I'm underwhelmed.
 
Happy holidays to you too!


Simple: inertia. By sticking with 12V, they avoid the additional friction of needing to refresh just about everything else for a voltage only the newest of new stuff supports. 12VO split the baby in half.


You can efficiently step 24V down to 1V using simple buck regulators, but in a buck regulator the high-side FETs need to be able to pass the full load current at a 4-5% duty cycle, divided across phases. If a buck regulator outputs 50A per phase, you need high-side FETs able to pass 50A, and those will be almost twice as big at 48V, which also means about twice as expensive to manufacture and twice as hard to drive. This isn't practical beyond 24V. If you want to efficiently step 48V directly down to 1V, you have to use transformers instead of chokes, which significantly increases cost and complexity.
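As a rough illustration of why raising the input voltage doesn't make the high-side FETs any easier, here's a small Python sketch assuming an ideal buck converter (duty cycle D = Vout / Vin) and a hypothetical 4-phase, 200A / 1V VRM; all numbers are illustrative assumptions, not measurements:

```python
# Ideal buck converter: the high-side FET conducts for D = Vout / Vin of each
# cycle, but still has to carry the full phase current and block Vin when off.
VOUT = 1.0        # core voltage (V)
LOAD_A = 200.0    # total load current (A)
PHASES = 4        # hypothetical phase count
MARGIN = 1.25     # assumed headroom on the FET voltage rating

for vin in (12.0, 24.0, 48.0):
    duty = VOUT / vin                 # fraction of each cycle the high-side FET conducts
    per_phase_a = LOAD_A / PHASES     # high-side FET still sees the full phase current
    fet_rating_v = vin * MARGIN       # rating must exceed Vin with some margin
    print(f"Vin={vin:4.0f} V: D={duty * 100:4.1f}%, "
          f"high-side FET carries {per_phase_a:.0f} A when on, "
          f"needs a >{fet_rating_v:.0f} V rating")
```

The duty cycle shrinks from about 8% at 12V to about 2% at 48V, but each high-side FET still has to pass the same 50A while needing a much higher voltage rating, which is the size/cost penalty described above.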

The most efficient 48V point-of-load regulators are built as PCB transformers: the windings are integrated as extra layers into the PCB and then ferrite cores are glued on both sides to complete the transformer. If you think PSUs and motherboards are expensive, wait until your PCBs get a dozen extra layers to do this!

That is why we only see 48V in telecom and datacenter stuff, where cost doesn't matter much.


In tech school, one of my teachers in the AV classes where we worked on open-frame CRTs told us that we are far more likely to injure ourselves by over-reacting to a shock than by the shock itself. He also told us to keep our left hand behind our back when poking around anything with a high probability of shock, which is practically the same advice as your dad's.
My father taught me never to wear jewelry, especially rings.
 
It's still early for me this morning. Could someone point out specifically where the cable reductions are coming from? So, a 24-pin ATX is replaced with a 10-pin. There will still be those needed for SATA drives, and you'll need USB cables to connect to your case.
I guess the idea behind SATA cables is that PSUs usually include more than needed. In the new standard, SATA power will arrive via the motherboard. So, you only need to insert one SATA power cable for each SATA drive you actually have installed in your system. And if no SATA drives, then no SATA power cables.

People with modular PSUs kind of already have this, but modular PSUs tend to have SATA power cables with multiple connectors on them and extra length that has to be tucked away somewhere, if you're not using that exact number of drives.
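For a sense of the bookkeeping, here's a tiny Python sketch comparing the two approaches; the 4-connectors-per-cable figure for modular PSUs is an assumption for illustration, not a universal number:

```python
import math

# Hypothetical cabling comparison for N SATA drives: a modular ATX PSU
# (assumed 4 connectors per SATA power cable) versus 12VO-style individual
# leads from motherboard headers, one per installed drive.
CONNECTORS_PER_PSU_CABLE = 4

def sata_cabling(drives: int):
    psu_cables = math.ceil(drives / CONNECTORS_PER_PSU_CABLE)
    unused_plugs = psu_cables * CONNECTORS_PER_PSU_CABLE - drives
    board_leads = drives  # 12VO: one short lead per drive, zero if no drives
    return psu_cables, unused_plugs, board_leads

for n in (0, 1, 3, 6):
    cables, spare, leads = sata_cabling(n)
    print(f"{n} drive(s): modular PSU -> {cables} cable(s), {spare} unused plug(s); "
          f"12VO headers -> {leads} short lead(s)")
```

The point isn't that the total connector count drops dramatically, but that the daisy-chained slack and unused plugs go away when each drive gets its own short lead from the board.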

Your fans will still need to be connected to something.
Right, but most fans are now powered via motherboard fan headers. So, it's basically extending that model to virtually everything.

I can see that video cards might lose that annoying cable.
Actually, that will still be a "thing", for reasons explained in some of the above posts.
 
I wonder if I'm the only one who doesn't like the idea of giving up SATA power connections.

I've got a 2.5in SATA SSD dedicated as my Windows swap drive, two Blu-ray drives, four hard drives, and a USB/SATA/ESATA/Molex/Card Reader hub currently using my SATA power connections.
You are not alone. I have 10 16TB HDDs, 12 SSDs, a 4K Blu-ray rewritable drive, and a DVD rewritable drive connected via a 24-port ASMedia SATA III card, a 4-port ASMedia SATA III card, and my motherboard. That is 26 devices, not counting those on docking stations.

That leaves 2 SATA power connections allowed by the motherboard, and no power connections on the power supply.

I know most people now want all their files on someone else's system, but I do not. Nor can I afford to pay for someone to back up my data (currently at 170TB, and NONE of it is porn).

This is like all the motherboard manufacturers reducing the number of expansion card slots. I had to move to a high-end server motherboard because I do not use on-motherboard sound or on-motherboard video, and I do use TV-tuner cards, a streaming capture card, and 2 I/O controller cards, for a total of 6 PCIe expansion slots needed.

These motherboards and these PSUs are absolutely useless as far as I am concerned.
 
A 12V DC mainboard could be a real advantage where utility mains (120/240V AC) isn't (reliably) available, or if you wished to run it off a UPS with hefty 12V DC outlets.
I could imagine a PSU with both 120/240V AC and 12-14V DC (say) inputs, which outputs nice, clean 12V DC. You could then plug in a car battery if pushed.
 
SlimSAS can run PCIe 4.0 and SATA out of the same connector (either/or, not both at once, of course). I've not seen any on anything other than workstation boards, and those don't tend to support SAS, so I'm not sure if it can do all 3 on one.
SFF-8654 Slim-SAS is made by the same technical committee that did Mini-SAS HD.

The main difference is the shape of the connector and the thinner 0.6mm pitch contacts in use.

I think the bigger issue is the supply chain side.

Since enterprise wants to move over to Slim-SAS for its 4x-lane & 8x-lane connectors, the tooling / automated manufacturing lines for Mini-SAS HD should be at lower capacity. That means consumers can take advantage of the manufacturing line enterprise has already developed and mass-produce even more Mini-SAS HD connectors for the consumer side to use as their mainline connector.

Despite the device end using the same connector interface on U.2/U.3/U.4, the host end could be different between enterprise & consumer, thus not running afoul of each other's supply chain for host-interface connections.
 