News Vendors Prep ATX12VO Motherboards Ahead of Intel 12th Gen Alder Lake Release

Kamen Rider Blade

This might make sense if you're using all laptop-level parts or designed an entire platform around it.

But most people I know have a lot more parts in their systems, and moving the 5V & 3.3V converters to the MoBo makes zero sense.

And my idle is ~ 216W, so ATX 12VO wouldn't do squat for me

PSU Power Rails:
+12.0V = MainBoard / CPU / GPU / VRM / Fans / PCIe / Molex Power Plug / SATA Power Plug
+ 5.0V = HDD / ODD / SATA SSDs / VRM / RGB / USB / Molex Power Plug / SATA Power Plug
+ 3.3V = RAM / Various Logic Chips / M.2 / PCIe / SATA Power Plug

Moving the 5.0V & 3.3V to the MoBo doesn't solve that many issues, and for most average users who have a random assortment of hardware in their PC, including older stuff, this won't really help since idle power is usually way past 100W.

80 Plus Titanium already calls for 90% efficiency at 10% load.
80 Plus can always create new tiers above Titanium that require 90% or better efficiency at the 10% load point.

Also, you're creating more complications for MoBo makers by having them perform the 5.0V & 3.3V DC-to-DC power conversion.

BTW, these are the "Idle System Power Draw" figures for CPU benchmark test rigs.
Usually these test rig systems have the bare minimum of CPU + MoBo + GPU + SSD + PSU.

[Charts: idle system power draw figures from several CPU review test rigs]

The vast majority of ATX 12VO's efficiency gains over the regular 80 Plus spec are at sub-50 W system loads.

These test rigs blow right past that 50 W line in the vast majority of cases when idling.

This doesn't even factor in regular folks who, like me, load up their PCs with extra drives of all sorts.

[Charts: Seasonic PSU efficiency curves at various loads and wattage ratings]

Most of the sub-80% conversion inefficiency occurs below 50 W, even across various power supply sizes and ranges from quality PSU makers like Seasonic.
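
To put some illustrative numbers on that low-load penalty (the efficiency figures below are assumptions for the sake of the example, not values read off the Seasonic charts), here's a quick sketch:

```python
def wall_draw_w(dc_load_w: float, efficiency: float) -> float:
    """Wall-side draw needed to deliver a given DC load at a given efficiency."""
    return dc_load_w / efficiency

# Assumed efficiencies: ~75% for a PSU struggling at light load,
# ~88% for a good unit in its comfort zone. Illustrative only.
for dc_load in (25.0, 40.0, 60.0):
    for eff in (0.75, 0.88):
        wall = wall_draw_w(dc_load, eff)
        print(f"{dc_load:>4.0f} W DC @ {eff:.0%} -> "
              f"{wall:5.1f} W at the wall ({wall - dc_load:4.1f} W wasted)")
```

Either way the absolute waste is only a handful of watts, which is why gains at that low end mostly matter for always-on, very-low-idle machines.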

But 80 Plus got it right: the vast majority of even bare-minimum test rigs blow past the 50 W system idle mark.

This doesn't even factor in all the extra hardware that regular people shove into their PC cases.

TL;DR:

ATX 12VO is a niche solution for ultra-small-form-factor PCs and laptop parts needing a common PSU standard, or for low-core-count, low-frequency parts that go into tiny-form-factor PCs.

Look at Dell and the proprietary BS they like to pull while using an ATX 12VO-like PSU that doesn't conform to any spec.

View: https://www.youtube.com/watch?v=4DMg6hUudHE


So there is a need to force Dell into some form of spec compliance, since their proprietary BS is what causes more eWaste.

ATX 12VO isn't a real replacement for ATX in DIY or regular MoBo usage.

Even down to Mini-ITX, the ATX PSU standard is still relevant and useful.

Remember folks, 60 watts used to be the typical rating for a traditional tungsten-filament light bulb.

Now we use LED light bulbs, but with greater computing demands we still consume more than 50 watts while idling; trying to gain efficiency at that low end only applies to a niche group of tiny-form-factor computers.

I don't see that changing in the future: as we demand more computing power, more cores, and/or higher frequencies, system idle power is only going to go up or stay where it is currently.
 

TJ Hooker

This might make sense if you're using all laptop-level parts or designed an entire platform around it.

But most people I know have a lot more parts in their systems, and moving the 5V & 3.3V converters to the MoBo makes zero sense.
What 'lots' of parts do you think people are running off 5V/3.3V? And wouldn't most of them draw no more than a few watts each?

Edit: "[...]for most average users who has a random assortment of hardware in their PC, including older stuff, this won't really help since the idle is usually way past 100W."
No, it's not. Just because you have high idle power (from running a ton of drives and not letting them idle, from what I remember you saying in the other thread), doesn't mean that's remotely typical.
/Edit

+ 5.0V = HDD / ODD / SATA SSDs / SATA Power Plug
These are all the same thing as SATA power plug...

+ 3.3V = RAM / Various Logic Chips / M.2 / PCIe / SATA Power Plug
RAM is primarily powered off the 12V rail via a DC-DC converter. 3.3V is part of the SATA power spec, but many drives don't actually use it.

Edit: All your charts for idle system draw are measured at the wall, meaning they include power supply (in)efficiency, and you're comparing them against DC watts, i.e. not taking PSU efficiency into account. Also, all of those idle power draw results are for systems with high-end GPUs (e.g. 2080 Ti), and many of them with 10+ core CPUs, neither of which is anywhere near the norm.
 

InvalidError

Moving the 5.0V & 3.3V to the MoBo doesn't solve that many issues, and for most average users who have a random assortment of hardware in their PC, including older stuff, this won't really help since idle power is usually way past 100W.
Average way past 100W? According to my UPS, my PC + monitor + USB3 hub and all of the USB devices plugged into it use 72W, and my PC has two HDDs, three SSDs, one DVD-RW, an i5-3470, 32GB of DDR3-1600, and a GTX 1050. And my PC isn't even idle; the CPU is at 20-30% load from all the stuff I have open in the background.

It takes quite a bit to break 100W idle unless your CPU and GPU are overclocked... or you have an insane amount of RGB fans running for nothing.
 

InvalidError

BTW, the reason why DC-DC converters on motherboards are more efficient than one large DC-DC converter inside the PSU is that, on top of reducing wiring losses over long cable runs, it also allows the DC-DC converters to be sized for the specific load connected to them instead of for the maximum total system current that might possibly be drawn.
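
To put rough numbers on the wiring-loss part, here is a back-of-the-envelope sketch; the load and round-trip cable resistance below are made-up illustrative figures, not values from any spec:

```python
# I^2 * R cable loss for delivering the same power at different rail voltages.
# Lower rail voltage -> more current for the same power -> loss grows quadratically.

def cable_loss_w(load_w: float, rail_v: float, cable_ohms: float) -> float:
    """Resistive loss in the cabling for a given load power and rail voltage."""
    current_a = load_w / rail_v
    return current_a ** 2 * cable_ohms

LOAD_W = 30.0        # assumed load
CABLE_OHMS = 0.020   # assumed ~20 mOhm round-trip cable + connector resistance

for rail in (12.0, 5.0, 3.3):
    loss = cable_loss_w(LOAD_W, rail, CABLE_OHMS)
    print(f"{rail:>4.1f} V rail: {LOAD_W / rail:5.2f} A -> {loss:.2f} W lost in the cable")
```

Same load, same cable, yet the 3.3V run loses roughly 13x as much as the 12V run; that is the part that point-of-load conversion on the motherboard sidesteps.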

It also opens up the path for making every future interface standard 12V-only, bypassing intermediate conversions altogether, since most of the few devices that still use 3.3V and 5V have another layer of on-board DC-DC converters to drop those voltages even lower. Once we get there, 12VO will help make things slightly cheaper across the board.

It is a long-term play.
 
And my idle is ~ 216W, so ATX 12VO wouldn't do squat for me
Jeebus, do you have your parts on max clock speed all the time with a miner or something?

Moving the 5.0V & 3.3V to the MoBo doesn't solve that many issues, and for most average users who have a random assortment of hardware in their PC, including older stuff, this won't really help since idle power is usually way past 100W.
If you're considering the "average user" to be you, please stop there. You are most certainly not the average user.
 

Kamen Rider Blade

Average way past 100W? According to my UPS, my PC + monitor + USB3 hub and all of the USB devices plugged into it use 72W, and my PC has two HDDs, three SSDs, one DVD-RW, an i5-3470, 32GB of DDR3-1600, and a GTX 1050. And my PC isn't even idle; the CPU is at 20-30% load from all the stuff I have open in the background.

It takes quite a bit to break 100W idle unless your CPU and GPU are overclocked... or you have an insane amount of RGB fans running for nothing.
That's still past the 50 W threshold where ATX 12VO would make the most difference.

Look at the PSU Efficiency curves that I posted relative to wattage consumed.

Jeebus, do you have your parts on max clock speed all the time with a miner or something?
No, I leave everything at stock.

If you're considering the "average user" to be you, please stop there. You are most certainly not the average user.
You should see how many other drives my friends have in their towers.
 

InvalidError

That's still past the 50 W threshold where ATX 12VO would make the most difference.

Look at the PSU Efficiency curves that I posted relative to wattage consumed.
My 72W includes my main monitor which is around 25W including USB2 devices connected to its built-in hub and an externally powered USB3 hub with its attached devices, which leaves 40-45W for the PC's PSU on the INPUT side. Add conversion losses at less than 10% load on my 650W Silverstone Strider (don't remember if it is Platinum or Titanium), that leaves 35-40W total on the PSU's outputs, not 50+W.
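
For what it's worth, the arithmetic behind that estimate looks roughly like this; the monitor/hub split and the light-load efficiency figure are assumptions for illustration, not measurements:

```python
# Rough wall-to-DC estimate for an idle PC (illustrative figures only).
wall_total_w = 72.0        # UPS reading: PC + monitor + USB hub and devices
monitor_and_hub_w = 27.0   # assumed ~25 W monitor plus a couple of watts for the hub
psu_efficiency = 0.88      # assumed efficiency at <10% load on a Platinum/Titanium unit

pc_input_w = wall_total_w - monitor_and_hub_w    # what the PSU pulls from the wall
pc_dc_output_w = pc_input_w * psu_efficiency     # what actually reaches the components

print(f"PSU input (wall): {pc_input_w:.0f} W")
print(f"PSU output (DC) : {pc_dc_output_w:.0f} W")  # roughly 40 W, well under the 50 W mark
```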

As I have written before, something like 12VO was inevitable: it makes no sense to maintain 3.3V and 5V rails when no meaningful loads use them anymore, and most of the remaining ones have their own on-board converters to step those rails down to something else, so they could easily be adapted in a 12VO interface make-over that eliminates the need for intermediate DC-DC converters on the motherboard and PSU.

Don't get your panties in a bunch over this; the transition will likely take about 10 years to complete, so you still have plenty of time left to continue drawing 1.21GW from 3.3V and 5V.
 

Kamen Rider Blade

My 72W includes my main monitor which is around 25W including USB2 devices connected to its built-in hub and an externally powered USB3 hub with its attached devices, which leaves 40-45W for the PC's PSU on the INPUT side. Add conversion losses at less than 10% load on my 650W Silverstone Strider (don't remember if it is Platinum or Titanium), that leaves 35-40W total on the PSU's outputs, not 50+W.

As I have written before, something like 12VO was inevitable: it makes no sense to maintain 3.3V and 5V rails when no meaningful loads use them anymore, and most of the remaining ones have their own on-board converters to step those rails down to something else, so they could easily be adapted in a 12VO interface make-over that eliminates the need for intermediate DC-DC converters on the motherboard and PSU.

Don't get your panties in a bunch over this; the transition will likely take about 10 years to complete, so you still have plenty of time left to continue drawing 1.21GW from 3.3V and 5V.
Or the community can fight you tooth and nail and make sure ATX 12VO fails like Intel's Itanium and BTX standards

=D

Try to force the industry to use more 3.3V & 5V, and make PSU makers get the current ATX designs as efficient as possible, even at the lower, sub-50 W wattages.
 

ginthegit

Or the community can fight you tooth and nail and make sure ATX 12VO fails like Intel's Itanium and BTX standards

=D

Try to force the industry to use more 3.3V & 5V, and make PSU makers get the current ATX designs as efficient as possible, even at the lower, sub-50 W wattages.

What these numpties don't realise is that we are fighting over idle power here. Who is using idle power? If a PC is to be productive, then idle power only matters to the other numpties who never turn their PCs off.

Fact is, and I believe you are correct: there are already 90%-efficiency power supplies at low load, and it does not make sense for the environmental agencies to force OEMs onto power supplies that claim, at best, an extra 3% efficiency. It looks like Intel has been doing their bribe-and-lobby thing again.

But after Intel's poor 11th-gen showing, they had to come out with something to make it look like OEMs could mitigate their losses selling new Intel chips to the numbskulls that don't need them.

And we have yet to see how AI chips will affect idle load when the programs for them get more demanding.

Problem is, most of the commenters above are just Intel nuts who buy into the claimed rhetoric. I always go to the forums to see what people who have 12VO systems say. It seems like they cost a fortune, fail more often, and you need a fairly specific one for the motherboard.

The change to 12VO makes little difference and makes no sense.

The cost of the motherboard will escalate, so you will literally pay more in the long run.

DC-to-DC conversion was meant for low power; it will be comically interesting when they start to develop high-power systems, which I doubt they will.

I hope that the enthusiasts make Intel pay for this pathetic move and switch to AMD for one generation as a show of solidarity against Intel's bad lobbying tactics and pointless tech changes.
 
I can see this for office / point-of-sale / display / signage computers, etc.
Worked on a few already.
The problem is you still have a data connector to the SATA devices, but now you have a power wire from the motherboard also. Not easy cable management on custom builds.
But small OEM power supply systems could become a lot smaller.
I still have a 450 W 12V-only power supply with only PCIe cables. It fits in a 5.25" drive bay, has two 40 mm cooling fans, and is 92-95% efficient. So smaller-form-factor power supplies would be possible.
Would I like all computers to switch over in the next couple of years? No.
But it will most likely happen, and enthusiast power supplies will probably still have PCIe and SATA cables, just 12V-only ones.
 

InvalidError

The problem is you still have a data connector to the SATA devices, but now you have a power wire from the motherboard also. Not easy cable management on custom builds.
Having separate wires coming from the PSU to each SATA device and data cables from the motherboard is actually worse cable management than having both coming from the motherboard. Also, the SATA connector was designed to make combined power+data cables possible, so motherboards could simply opt to use combined SATA connectors and cables, and then you have only one.

SATA is overdue for an upgrade and I can easily imagine it getting supplanted by a PCIe 3.0/4.0x1/12V cable to which you can simply add a dongle to provide DC-DC conversions and either PCIe-to-SATA3 bridge or data pass-through to a host port with SATA3 fall-back.
 

Kamen Rider Blade

Having separate wires coming from the PSU to each SATA device and data cables from the motherboard is actually worse cable management than having both coming from the motherboard. Also, the SATA connector was designed to make combined power+data cables possible, so motherboards could simply opt to use combined SATA connectors and cables, and then you have only one.

SATA is overdue for an upgrade and I can easily imagine it getting supplanted by a PCIe 3.0/4.0x1/12V cable to which you can simply add a dongle to provide DC-DC conversions and either PCIe-to-SATA3 bridge or data pass-through to a host port with SATA3 fall-back.
If you're that worried about power inefficiency losses from 3.3V & 5.0V...

I'm sure we can ask SeaSonic to make a custom version of their CONNECT PSU.

https://seasonic.com/connect

And have the Connector do all the DC-to-DC conversion locally for 3.3V & 5.0V, using super-short Molex/SATA cabling to reduce 3.3V & 5.0V inefficiency losses.

Leave the Classic 24-Pin, 4/8 Pin EPS, and 6/8 Pin PCIe power connections to come straight from the PSU base.

You're so worried about the "inefficiencies" that you want direct DC-to-DC conversion; well, you can have SeaSonic do it right there with shorter cable runs.
 

InvalidError

If you're that worried about power inefficiency losses from 3.3V & 5.0V...
I'm not worried about the inefficiencies. As I have written previously, those inefficiencies will be eliminated altogether when all of the major interfaces get their 12VO refreshes over the next 10 years. The best efficiency comes from eliminating legacy rails and the legacy devices that need them altogether.

PCIe will eventually become unable to push higher speeds on the nearly 20-year-old connector, and whatever connector replaces it will almost certainly be 12VO. That would be a great time to increase the PCIe slot power limit to 150-200W and eliminate the need for PCIe AUX cables in many mid-range PCs. The EPS12/ATX12V connectors could also be eliminated by adding pins to the 12VO connector, and I am actually surprised Intel didn't do more of that with the initial 12VO spec: at least put enough pins in there to eliminate the EPS12 connector.
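
For a rough sense of the currents involved (treating the whole slot budget as 12V for simplicity, even though part of today's 75 W is actually on 3.3V; the 150 W and 200 W figures are just the hypothetical limits mentioned above):

```python
# Simple I = P / V arithmetic for the slot power limits being discussed.
SLOT_VOLTAGE_V = 12.0

for slot_limit_w in (75.0, 150.0, 200.0):
    amps = slot_limit_w / SLOT_VOLTAGE_V
    print(f"{slot_limit_w:>5.0f} W slot limit -> {amps:4.1f} A through the slot's 12 V pins")
```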

As for Seasonic's Connect, that looks like it may already be basically a 12VO PSU since the break-out box looks like it has at least two DC-DC converters in it. In the later part of the 12VO transition, I imagine it will be common for people who still need legacy power connectors to use a PCIe-AUX cable to power a similar break-out box with DC-DC converters.
 

Kamen Rider Blade

I'm not worried about the inefficiencies. As I have written previously, those inefficiencies will be eliminated altogether when all of the major interfaces get their 12VO refreshes over the next 10 years. The best efficiency comes from eliminating legacy rails and the legacy devices that need them altogether.
OK, we all know what we need to do: push 3.3V & 5.0V instead of 12V =D

PCIe will eventually become unable to push higher speeds on the nearly 20-year-old connector, and whatever connector replaces it will almost certainly be 12VO. That would be a great time to increase the PCIe slot power limit to 150-200W and eliminate the need for PCIe AUX cables in many mid-range PCs. The EPS12/ATX12V connectors could also be eliminated by adding pins to the 12VO connector, and I am actually surprised Intel didn't do more of that with the initial 12VO spec: at least put enough pins in there to eliminate the EPS12 connector.
Gotta make sure we don't break backwards compatibility all the way to PCIe 1.0 and keep PCIe Slot power the same!

If you add more pins to the 12VO connector, then you just broke your own 12VO spec with an incompatible definition for the 12VO Power Cable. That creates 3 different MoBo Main Power Plugs.

[Image: "Standards" comic]


We don't need another competing standard for the PSU-to-MoBo main power plug.


As for Seasonic's Connect, that looks like it may already be basically a 12VO PSU since the break-out box looks like it has at least two DC-DC converters in it. In the later part of the 12VO transition, I imagine it will be common for people who still need legacy power connectors to use a PCIe-AUX cable to power a similar break-out box with DC-DC converters.

As long as you don't break the existing ATX standard, you can move your DC-DC converter to the break-out box for all I care.

I want my 3.3V & 5.0V rails; if that means having the classic 24-pin, then I'm happy.

The Seasonic Connect can move the DC-to-DC conversion of the 3.3V & 5.0V for Molex/SATA power connectors to the Connect break-out box for all I care.
 

InvalidError

If you add more pins to the 12VO connector, then you just broke your own 12VO spec with an incompatible definition for the 12VO Power Cable. That creates 3 different MoBo Main Power Plugs.
The ATX plug had only 20 pins initially and then got the +4 add-on when it turned out that the first 20 pins weren't quite enough to handle all of the current going to all of the different stuff that needed it at the time; the same thing happened to ATX12V, which gained four pins and became EPS12. Extending 12VO by a couple of positions wouldn't change a thing aside from being bigger. The extension could even be keyed the same as EPS12, effectively making EPS12 the "+8" of a 12VO+8 connector. Nothing fundamentally new needed.
 

Kamen Rider Blade

https://www.anandtech.com/show/13609/pcisig-warns-of-incompatibilities-between-m2-and-samsungs-ngsff

PCI-SIG, the standards committee behind PCI Express and related standards, has issued a warning about incompatibilities between their M.2 standard and Samsung's NGSFF/NF1 SSD form factor. The notice from PCI-SIG does not refer to Samsung by name, but does indirectly call them out for basing a new form factor on a mechanically identical M.2 connector without introducing a new keying option to prevent improper insertion of M.2 drives into NGSFF slots or vice versa.


Samsung's NGSFF form factor was unveiled in 2017 as a proposed replacement for M.2 and U.2 SSDs in datacenter applications. The goals are similar to the competing EDSFF standards derived from Intel's Ruler: provide more power than can be delivered over M.2's 3.3V supply, allow hot-swapping of cards, and widen the cards beyond the typical 22mm to allow for two rows of NAND flash packages side by side. Samsung's standard re-uses almost all of the data and ground pin assignments from M.2, but removes the 3.3V supply and adds 12V power elsewhere.


Samsung was initially hoping to call their standard M.3, but from what we've heard from unofficial sources this was going to make PCI-SIG very unhappy, so Samsung made a last-minute name change to NGSFF before going public. Not all of their partners got the message, and even Samsung's own exhibits did not quite have all traces of "M.3" thoroughly erased. Since then, Samsung has pursued standardization through JEDEC with a new name of NF1 for the form factor, but that effort seems to have stalled earlier this year without achieving ratification. However, even without standardization, Samsung has been moving forward with deployment of NGSFF SSDs and they have picked up several major partners along the way, including OEMs like Supermicro and AIC.


I'm glad Samsung's attempt at the M.3 standard failed, another hit against team 12VO.

M.2 on 3.3V FTW!

Now to make sure the RGB LED lighting industry moves all their power source away from 12 V and onto 5.0V only =D

=D
 

InvalidError

However, even without standardization, Samsung has been moving forward with deployment of NGSFF SSDs and they have picked up several major partners along the way, including OEMs like Supermicro and AIC.

I'm glad Samsung's attempt at the M.3 standard failed, another hit against team 12VO.
Samsung's M.3 candidate is alive and well, merely renamed NGSFF/NF1 and "picked up several major partners" according to your own link. The Intel-based EDSFF standard uses 12V for power too.

In other words, server-space NVMe storage was already pushing for 12V to power larger high-performance SSDs up to 70W that cannot get the power they need from 3.3V M.2 slots regardless of 12VO. Rather than this being "another hit against 12VO" it looks more like validation that an official 12V/35-70W successor to M.2 is sorely needed to prevent the proliferation of vendor-specific standards.
 

Kamen Rider Blade

Samsung's M.3 candidate is alive and well, merely renamed NGSFF/NF1 and "picked up several major partners" according to your own link. The Intel-based EDSFF standard uses 12V for power too.

In other words, server-space NVMe storage was already pushing for 12V to power larger high-performance SSDs up to 70W that cannot get the power they need from 3.3V M.2 slots regardless of 12VO. Rather than this being "another hit against 12VO" it looks more like validation that an official 12V/35-70W successor to M.2 is sorely needed to prevent the proliferation of vendor-specific standards.
But that's the server/enterprise end.

I care more about the consumer end.

I don't want your 12VO world; you can keep that on your enterprise side with their proprietary enterprise standards.

I'll back U.3, which is backwards compatible.

View: https://www.youtube.com/watch?v=uCXcqV31iZ8


https://www.storagereview.com/news/evolving-storage-with-sff-ta-1001-u-3-universal-drive-bays

U.3 is backwards compatible with SATA/SAS/NVMe.

That means taking 3.3V & 5.0V into account.

That means 3.3V & 5.0V aren't going to go away.

They're going to co-exist alongside 12.0V and not be kicked to the curb the way you want.
 

ginthegit

The VRMs that feed the CPU and GPU cores are DC-DC converters with 12V input pushing 250+W at 0.9-1.4V at the higher-end, 350+W in servers.
350W is not higher end. In fact, you are running the DC-to-DC converters near their max efficiency capability. Remember that just because the 12V DC is being switched down to average out at a lower voltage doesn't mean its own regulators escape the cost of splitting that voltage in terms of power throughput. It means the motherboard will have to get bigger and bigger to deal with the higher demand, which goes against the aims and goals of trying to make all components SoC. This is absurd.

You are clearly not thinking about this like an engineer! Compromises always have to be made. Higher power either means driving the DC-to-DC converters at high power throughput (heat and lower efficiency) or having more of them in parallel, thus taking up more space on the motherboard and still requiring some passive or active cooling system.
 

ginthegit

Having separate wires coming from the PSU to each SATA device and data cables from the motherboard is actually worse cable management than having both coming from the motherboard. Also, the SATA connector was designed to make combined power+data cables possible, so motherboards could simply opt to use combined SATA connectors and cables, and then you have only one.

SATA is overdue for an upgrade and I can easily imagine it getting supplanted by a PCIe 3.0/4.0x1/12V cable to which you can simply add a dongle to provide DC-DC conversions and either PCIe-to-SATA3 bridge or data pass-through to a host port with SATA3 fall-back.

No, it isn't. Power management is already complex and does not have to be hardware management. And stop making up your own concerns. If power management is already designed into the PSU with traditional fuses and regulators, why does adding even more, on the motherboard, make sense?

12VO will fizzle out, so stop making up excuses!
 

ginthegit

They will get gradually phased out just like -12V and -5V on ATX and then you'll need adapter boards if you want to continue using 3.3V/5V devices.

And again Mr. Clairvoyant makes up his own future with a typical "TOMSHARDWARE" Intel-is-the-best attitude: we will win against the upstart AMD with more obsolete and pointless tech upgrades, compared to AMD's generally innovative methods that kept it within 10% of Intel's performance on worse lithography and less R&D.

Intel is a company that employs illegal activities and pointless tech to get ahead... this is another example of that.

But I love your wishful thinking; it makes me laugh!
 

ginthegit

https://www.anandtech.com/show/13609/pcisig-warns-of-incompatibilities-between-m2-and-samsungs-ngsff

PCI-SIG, the standards committee behind PCI Express and related standards, has issued a warning about incompatibilities between their M.2 standard and Samsung's NGSFF/NF1 SSD form factor. The notice from PCI-SIG does not refer to Samsung by name, but does indirectly call them out for basing a new form factor on a mechanically identical M.2 connector without introducing a new keying option to prevent improper insertion of M.2 drives into NGSFF slots or vice versa.


Samsung's NGSFF form factor was unveiled in 2017 as a proposed replacement for M.2 and U.2 SSDs in datacenter applications. The goals are similar to the competing EDSFF standards derived from Intel's Ruler: provide more power than can be delivered over M.2's 3.3V supply, allow hot-swapping of cards, and widen the cards beyond the typical 22mm to allow for two rows of NAND flash packages side by side. Samsung's standard re-uses almost all of the data and ground pin assignments from M.2, but removes the 3.3V supply and adds 12V power elsewhere.


Samsung was initially hoping to call their standard M.3, but from what we've heard from unofficial sources this was going to make PCI-SIG very unhappy, so Samsung made a last-minute name change to NGSFF before going public. Not all of their partners got the message, and even Samsung's own exhibits did not quite have all traces of "M.3" thoroughly erased. Since then, Samsung has pursued standardization through JEDEC with a new name of NF1 for the form factor, but that effort seems to have stalled earlier this year without achieving ratification. However, even without standardization, Samsung has been moving forward with deployment of NGSFF SSDs and they have picked up several major partners along the way, including OEMs like Supermicro and AIC.


I'm glad Samsung's attempt at the M.3 standard failed, another hit against team 12VO.

M.2 on 3.3V FTW!

Now to make sure the RGB LED lighting industry moves all their power source away from 12 V and onto 5.0V only =D

=D

And this is exactly my point: Intel trying to pay other companies to adopt its standards, and doing it by making those other companies look like they are doing it independently.

Intel is doing the dodgy again... and thanks for showing this evidence of it!
 
