News: Power wire-less motherboards pump 1,500W over 50-pin connector — BTF3.0 standard envisions zero cables between the motherboard, GPU, and power supply

All this to appeal to the vanity of a handful of builders who struggle with proper cable routing? Moving the GPU power taps to the bottom rear of the card is all you need to hide the wire and leave only the edges of the connectors visible. Who would even consider beefing up a motherboard's power-handling capacity just for that?
 
I have a Delta PFC1212DE-F00 120x38mm fan that can draw 4.8A (@12V) at start-up, but it's a finger-chopping screamer.
Per USB-C cable you could run two of those... although why you ever would, I have no idea. Most fans are nowhere near that, nor should they be. Maybe for a server system, but for a desktop that is just silly.
 
This is a step in the wrong direction. Yes, fewer wires is good, but we need power supplies to be more laptop-like: have fewer, higher voltages in the power supply itself and let the motherboard and graphics cards drop the voltages internally. IMO, power supplies should be 24 or 48 volts. It's much easier to handle more volts than more amps.

DC-to-DC conversion is so easy and efficient now that these overcomplicated power supplies, creating so many different voltages with capacity that is likely to go unused, are a relic of an era when integrating DC-to-DC conversion on the motherboard was hard and bad. Now it's easy and cheap, and everything does it all the time. It's actually better to leave the voltage high until right before you need the power, then convert it at the end. A single high-voltage power bus means less current, less loss, less noise, smaller traces, fewer traces, fewer wires, and overall a better build.

SFF computers have entirely switched to this simply by using external laptop-style power supplies, and desktops should embrace and improve on it by creating a standardized internal version.
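The "less current, less loss" claim is easy to put in numbers. A minimal sketch of the I²R conduction-loss argument; the 600W load and 10 mΩ cable resistance are illustrative assumptions, not figures from the article:

```python
# Sketch: for a fixed load P and cable resistance R, conduction loss in the
# cabling is I^2 * R with I = P / V. Raising the bus voltage cuts the
# current, and the loss falls with the square of that reduction.

def cable_loss_watts(load_w: float, bus_v: float, cable_r_ohms: float) -> float:
    """I^2 * R loss in the cable for a given load and bus voltage."""
    current = load_w / bus_v
    return current ** 2 * cable_r_ohms

LOAD_W = 600.0       # e.g. a high-end GPU (assumed figure)
CABLE_R = 0.010      # 10 mOhm of wiring/connector resistance (assumed)

for bus_v in (12.0, 24.0, 48.0):
    loss = cable_loss_watts(LOAD_W, bus_v, CABLE_R)
    print(f"{bus_v:>4.0f} V bus: {LOAD_W / bus_v:5.1f} A, {loss:6.2f} W lost in cabling")
```

With those assumptions, moving from a 12V to a 48V bus cuts cable current by 4x and cable loss by 16x, which is the poster's point about keeping the voltage high until the last conversion stage.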
 
Per USB-C cable you could run two of those... although why you ever would, I have no idea. Most fans are nowhere near that, nor should they be. Maybe for a server system, but for a desktop that is just silly.
Air cooling is proportional to the density and mass of air being directed; increasing the velocity of the air (even without increasing the mass of air) increases the heat-transfer coefficient. Go big or go home.
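The velocity effect above can be put in rough numbers. A minimal sketch, assuming a turbulent forced-convection correlation in which the heat-transfer coefficient grows roughly as velocity to the 0.8 power; the exponent is a textbook assumption, and real heatsink geometry varies:

```python
# Rough illustration: in Dittus-Boelter-style correlations for turbulent
# forced convection, the convective heat-transfer coefficient h scales
# roughly as v^0.8. The exponent is an assumed textbook value.

def h_scaling(velocity_ratio: float, exponent: float = 0.8) -> float:
    """Relative change in convective heat-transfer coefficient."""
    return velocity_ratio ** exponent

for ratio in (2.0, 4.0):
    print(f"{ratio:.0f}x velocity -> {h_scaling(ratio):.2f}x heat transfer")
```

So doubling air velocity buys well under double the heat transfer, which is part of why the power cost of a screamer fan (see the exchange below) rarely pays off on a desktop.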
 
Hot-swap server PSUs running at 1kW or more use similar connectors. This isn't a new idea, and it works just fine in those cases. They are essentially using the same idea here, assuming they are using similar-quality connectors. The one pictured looks a lot like an ISA slot.

But server chassis are very proprietary, as is something like this. I like the general idea of making the cabling simpler and easier to work with, but I do think this PSU idea goes too far into proprietary, non-customizable PC configurations. If, like this, a motherboard put all of the power connectors on the front edge of the motherboard at 90 degrees, cable management would already be much easier without requiring the PSU to sit up front like they have it.
Enterprise can get away with "similar" connectors because they are indeed built with the tolerances and rigour you'd expect. I do not trust Asus and the people behind the BTF group/forum to design something as safe and proper as enterprise-grade hardware.

Regards.
 
I think this article is mis-titled. This new form factor is 'cable-less', not 'wire-less'. What is the difference? Wireless power brings up thoughts of power through the air. This is not that.

How much 12V power does a USB port provide? 0W <-- USB is a 5V interface. You cannot power the fans with USB ports.

Someone mentioned outdated JEDEC specs <-- JEDEC specs are used for DIMMs and nothing else in the PC, so JEDEC is not part of this new proposal.
 
Cables have never been a problem, let alone "the" problem. Efficiency takes that crown, or the lack thereof! From total power consumption to total thermal losses, that is where we need a new standard for any real improvement.

A cable-less system..... Woopti dooo!
 
I think this article is mis-titled. This new form factor is 'cable-less', not 'wire-less'. What is the difference? Wireless power brings up thoughts of power through the air. This is not that.

How much 12V power does a USB port provide? 0W <-- USB is a 5V interface. You cannot power the fans with USB ports.

Someone mentioned outdated JEDEC specs <-- JEDEC specs are used for DIMMs and nothing else in the PC, so JEDEC is not part of this new proposal.
Good point... and since the PSU is beside the motherboard, not on top or bottom of it, it is both top-less and bottom-less at the same time.
 
A wireless chassis fan is a nice idea but the EMF strength required to power it wouldn't be.
Aquarium pumps have been doing this for years: electrical stuff outside the glass where it stays dry, and an impeller with magnets but no electrical components inside where it's wet. They transmit through 3/4" glass no problem.
 
How many amps of +12V power can a USB port provide? You have to remember that larger fans, when they spin up, draw much more power than when they're already spinning.
While SATA3 might not have the bandwidth over the latest iterations of USB, I believe the latency is superior.
5 amps, so 60 watts at 12V. Plenty of power for any fan or even a spinning hard drive. I would expect any latency-sensitive storage to be relatively small and easy to fit on M.2 cards. Today, SATA seems to be used for bulk storage, where latency doesn't matter as much.
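A quick sanity check of that 5A figure against the Delta fan mentioned earlier in the thread; this assumes a fixed 12V USB-PD contract and uses the 4.8A inrush number from that post:

```python
# Back-of-envelope check: USB-C with Power Delivery tops out at 5 A per
# cable, so a fixed 12 V contract gives a 60 W budget. The 4.8 A inrush
# figure is the Delta PFC1212DE start-up draw quoted earlier in the thread.

USB_PD_MAX_A = 5.0
BUS_V = 12.0
budget_w = USB_PD_MAX_A * BUS_V          # 60 W available on the cable

fan_inrush_a = 4.8                        # Delta fan at spin-up
fan_inrush_w = fan_inrush_a * BUS_V       # 57.6 W

print(f"PD budget at {BUS_V:.0f} V: {budget_w:.0f} W")
print(f"One Delta fan at spin-up: {fan_inrush_w:.1f} W, "
      f"headroom: {budget_w - fan_inrush_w:.1f} W")
```

So one such fan's start-up draw nearly exhausts a 12V/5A budget; running two per cable, as claimed upthread, would need a higher-voltage PD contract stepped down at the fan end.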
 
In the datacenter, most chassis assemble like Legos. Power supplies, fans, drives, even CPUs/RAM/accelerators (sometimes) are on carriers that plug in with no cables. One benefit is that techs in the field can swap them perfectly with hardly any training. Darn near impossible to screw up; there's even a little green light on each component to show when it's connected properly.

BUT.... this leaves no room for flexibility or creativity which would seem to be the opposite of what a true system builder would want. It's a great idea for large OEMs, but the whole point of DIY is because I want something just the way I want it.
 
What is great:
Things I like: the 50-pin, 1500W power connector. The elimination of extra cables from the power supply to the video card and motherboard is GREAT. Moving all the various connectors for case fans, USB front panel, reset, power-on, ... to the back of the motherboard is great. The video showing a gorilla trying to reach in and around cables and components to connect a fan cable describes a real situation, especially for enthusiasts, who usually have far more in a case than the average computer user. I would suggest all connections on the back be standardized to 2 rows (1.5 inches apart) at the top and the bottom of the motherboard (a total of 4 rows) and none on the side, so that the case plate the motherboard mounts to can stay standard rather than being modified for each individual motherboard. This would allow for plenty of big-hand room.

Comments:
To call this wireless is not accurate. The wired connections for the motherboard and power supply are merely hidden. This motherboard is intended for cases with a somewhat deep (at least 1 inch / 2.54 cm) area behind the motherboard's mounting plate. Cables such as fan cables, power, ... are run in this area.

A nice idea for non-enthusiasts, but they do not do much with their systems other than run programs. It is severely lacking for true enthusiasts, though. While most of the market is moving away from anything beyond very basic equipment, enthusiasts still have a lot of hardware attached to their systems.

What I do not like:

I have looked at the motherboard on " View: https://x.com/unikoshardware/status/1874786284188918072?mx=2
". It has only 1 PCIe slot, the one meant for the video card. The back panel is not visible, but I will assume it has 1 or more USB/Thunderbolt ports for connectivity to external devices. The demonstrable expansion capabilities are less than my old Amiga 500's. Assuming the motherboard has integrated audio and video, I recommend using that one PCIe slot for purposes other than a video card.

I am guessing the manufacturers expect one will run a cable with a USB/Thunderbolt hub at the other end for connecting external devices like front-mount speakers/microphones, thumb drives, microSSD, ... , but does that not defeat the purpose of the "elegance" of no wires inside the case?

For the enthusiast market, they need to include wireless connections for high-speed links to multiple storage devices, including NAS storage. I personally have 25 HDDs/SSDs directly connected to my system via an HBA, 2 optical drives, 2 NVMe drives on the motherboard, another 10 HDDs via docking stations, 36 HDDs via a NAS, and 6 drives in a stack (I need 2 more docking stations). I have 2 3D printers, an inkjet, a professional book scanner, a VR setup tied to/through my computer, a major gaming rig for "Flight Simulator" (yoke with controls, pedals, and throttle), and a 3D handheld scanning device (that I cannot get to work right for making templates).

While my storage usage is rather extreme for most home systems, I know of other enthusiasts with systems just as extreme in their own ways (one person I know has 17 3D printers). Every enthusiast is different based on our individual interests and financial means, but we all have much more than can be done with just a single PCIe slot.
 
Aquarium pumps have been doing this for years: electrical stuff outside the glass where it stays dry, and an impeller with magnets but no electrical components inside where it's wet. They transmit through 3/4" glass no problem.
How much current draw and at what voltage do they operate? Let's see some examples.
 
How much current draw and at what voltage do they operate? Let's see some examples.
Sure thing. Vortech was first to really push the tech (though many others have jumped onboard). Let's use the MP60 as an example since that's their most popular model over the years. It's tunable to use between 10 and 60 watts, runs on 12v and is kinda famous for being able to work for many hours off a car battery in the case of a power emergency. Don't let the price bother you as this is a niche product for a niche industry. There's nothing overly exotic in the construction.
 
Air cooling is proportional to the density and mass of air being directed, increasing the velocity of the air (even without increasing the mass of air) increases the heat xfer coefficient. Go big or go home.
Cool-bro platitudes, but that is a very inefficient, ridiculously loud fan. It is way... WAY out of the norm. You are using something like 40 times the power to push 4 times the air. It probably does have high static pressure, but all of that comes with issues as well. So if you want to cool a mining machine, you go with something like that; if you want to cool a computer that won't sound like a hairdryer sitting next to you, get a number of Noctua fans.

Either way, the use case you are proposing to overload power on USB-C still wouldn't manage it per cable, and it is so far from the conversation we are having that I don't know why you are bringing it up.
 
You are using something like 40 times the power to push 4 times the air.
As you said, this is off-subject, but a correction.

Unless there are major electrical or, more likely, mechanical inefficiencies such as bad bearings, he would be using 16x the power. Wind force scales with the square of the velocity of the air being moved, for a given surface area.
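For reference, the textbook fan affinity laws (same fan geometry, varying speed) predict a steeper scaling than either figure above: airflow goes linearly with speed, pressure with its square, and shaft power with its cube. A minimal sketch, assuming those idealized laws apply; real fans of different sizes and designs will deviate:

```python
# Fan affinity laws for a single fan geometry at two speeds:
#   flow     ~ rpm
#   pressure ~ rpm^2
#   power    ~ rpm^3
# By these idealized laws, 4x the airflow costs 64x the power.

def affinity_scaling(flow_ratio: float) -> dict:
    """Scale factors implied by the fan affinity laws for a given flow ratio."""
    return {
        "flow": flow_ratio,
        "pressure": flow_ratio ** 2,
        "power": flow_ratio ** 3,
    }

print(affinity_scaling(4.0))
```

Under this model, the "40 times the power for 4 times the air" figure quoted upthread is actually below the idealized same-fan scaling, though comparing two different fan designs (as here) muddies the comparison.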