News Silicon Motion is developing a next-gen PCIe 6.0 SSD controller

This can only be aimed at the server market, given that's the only platform with PCIe 6.0 support on their roadmap. The weird part is that datacenter SSDs actually lagged consumer SSDs in embracing PCIe 5.0, by a couple years. At present, no server platforms support PCIe 6.0.
 
The weird part is that datacenter SSDs actually lagged consumer SSDs in embracing PCIe 5.0, by a couple years.
This is just speculation based on what I know about the consumer market, but it could be down to manufacturing node. Until last year, no client controllers were manufactured on a 7nm node, and I can't imagine an enterprise-level PCIe 5.0 controller on anything worse.

All of the 30TB+ class drives I've seen have been PCIe 4.0, so while I know these drives don't need PCIe 5.0, perhaps there's something else at play as well.
 
The progress in performance, not just in SSDs but in most parts of the PC ecosystem, is seemingly pushing into overkill territory for general use. It will be interesting to see how manufacturers balance the needs of the masses with the bleeding edge.

PCIe 4.0 is *plenty* for all but a small subset of scenarios. And given the likely cooling needs of 6.0 speeds, I almost wonder if the current 'stick it anywhere on the mobo' approach might need to move to a more standard, cooler-friendly location.
 
Again, this is only for the server market, which is the only place it makes any sense.

In the consumer segment, it would be better for these gentlemen to make new controllers for the 3.0/4.0 bus on a "3nm" process, drawing no more than 4W at peak - that is what all laptop buyers are waiting for: 4-8 TB drives that draw no more than 4W. Wherever you look, there is an abundance of SSDs drawing 9-10W at peak; even the top Samsung lines are like that.

Where are the 2, 4 and 8 TB models like the Hynix P31 Gold? I can't find them at retail for the 3.0/4.0 bus, and I'm simply not interested in 5.0 for laptops, or for PCs either.
 
what all laptop buyers are waiting for: 4-8 TB drives that draw no more than 4W. Wherever you look, there is an abundance of SSDs drawing 9-10W at peak; even the top Samsung lines are like that.

Where are the 2, 4 and 8 TB models like the Hynix P31 Gold? I can't find them at retail for the 3.0/4.0 bus, and I'm simply not interested in 5.0 for laptops, or for PCs either.
When I see PCIe 5.0 NVMe drives tested, it's always using a PCIe 5.0 host. Forcing them to run at PCIe 4.0 (which is the most that many laptop M.2 slots support, anyhow) should reduce their power consumption and heat dissipation. I'd be curious to know how efficient the latest generation of PCIe 5.0-capable controllers are at that speed.
 
make new controllers for the 3.0/4.0 bus on a "3nm" process, drawing no more than 4W at peak
That would be great for the 'laps' of people using laptops. I know I usually set my laptop on the table when I download a huge game from Steam. It's a 1TB Samsung drive so it gets hot during huge file transfers. Luckily not so while playing.
 
That would be great for the 'laps' of people using laptops. I know I usually set my laptop on the table when I download a huge game from Steam. It's a 1TB Samsung drive so it gets hot during huge file transfers. Luckily not so while playing.
I found this statement interesting. If you don't mind my being nosy, I'm curious to know the following:
  1. How fast are your steam downloads (approx. MB/s or gigabits/s, if you know; otherwise, general download speed)?
  2. Do you know which model SSD that is?
  3. Does your laptop have any airflow or other form of SSD cooling, in the spot where the SSD is located?

I ask because the network speed should be well below the sustained write speed of the SSD (probably), so this would only make much sense to me if the SSD has essentially no airflow or other form of direct cooling.
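To put rough numbers on that, here's a quick sketch with assumed figures (I don't know your actual download speed or your drive's sustained write rate, so both values below are illustrative guesses):

```python
# Back-of-envelope: even a fast internet connection is a trickle compared to what
# an NVMe SSD can absorb. Both figures below are assumptions for illustration.
sustained_write_mb_s = 1500   # rough ballpark for a mainstream TLC NVMe drive, not a measured value

for label, mbit_s in [("250 Mbit/s connection", 250), ("gigabit connection", 1000)]:
    mb_s = mbit_s / 8
    print(f"{label}: ~{mb_s:.0f} MB/s, ~{mb_s / sustained_write_mb_s:.1%} of sustained write")
```

Even a gigabit connection only keeps the drive a few percent busy, which is why the cooling question interests me.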
 
I ask because the network speed should be well below the sustained write speed of the SSD (probably), so this would only make much sense to me if the SSD has essentially no airflow or other form of direct cooling.
I can answer for him in general terms, without specifics. The architectural problem with most laptops is that there is practically no proper airflow over the M.2 slots, and on top of that the case height physically does not allow installing effective passive heatsinks. Together, this leads to catastrophic results when high-performance drives are installed without a second thought. And they are most often bought to replace the low-power (and usually DRAMless) drives that laptop manufacturers deliberately select. Naturally, those are worse in performance and struggle under serious simultaneous read/write load, since without a DRAM buffer there is nowhere to keep a copy of the translator for fast lookups and updates of the mapping tables. For laptops, apart from rare "gaming" models where at least some thought went into the M.2 mounting location, what's needed are energy-efficient drives with a DRAM buffer and balanced performance.

By the way, with the "3nm" technical process, nothing prevents integrating 1-2 GB of RAM directly into the controller as part of the circuit. Which will only increase overall reliability and free up some space for NAND chips on the 2280 form factor. Dramless controllers have a built-in buffer, but a paltry 32-64MB. But they were made using outdated thick technological processes. And even more so with the transition to "2 nm", a 2-4GB buffer needs to be moved inside the controller chip. The HMB buffer in PC memory will never be a normal replacement, and besides, it is also very few for some reasons. The closer and faster the memory is to the controller, the more energy efficient it is, which means that critical operations are completed faster due to PLP circuits, which are also not integrated into consumer solutions, although it is in them that the reliability and stability of the power supply is in great question...

Currently, the market is completely absurd for laptops (though not for desktops) - there are practically no competitors to the Hynix P31 Gold (the P41 Platinum is already too hot for the 4.0 bus). At the same time, the P31 is already dated at the controller level in terms of process node. On top of that, it is extremely scarce and can rarely be bought anywhere (especially outside the USA) with a 5-year warranty, and buying any SSD without such a warranty is complete madness, given the probability of failure of a particular instance - 50/50.

We ordinary laptop consumers simply have no choice, especially when it comes to single-sided models at 2, 4 and 8 TB.

Of course, you can switch models like the 990 Pro into an energy-saving mode, but the big question is whether such a mode guarantees the drive will operate without surges in consumption (current) above the limits of the weak M.2 slots in ordinary office and business laptops. I have my doubts.

It would be good if there were more single-sided models from 2TB and the ability to set a mode at the level of their NVRAM that clearly limits peak consumption in order to fit within the limits for a specific series of laptops.

Models on 5.0 with a DRAM buffer are unlikely to reach acceptable consumption levels for laptops even on a "2nm" process. And 6.0+ is probably not coming before the 2030s...
 
By the way, with the "3nm" technical process, nothing prevents integrating 1-2 GB of RAM directly into the controller as part of the circuit.
Can you point to other examples where logic has been mixed with DRAM, on the same die? I'm not aware of any. A quick search turned up this thread:

"You can make logic on a DRAM process, but either your logic will be slow or you'll have to increase the cost of your cost/power consumption of your DRAM cells."

https://news.ycombinator.com/item?id=14520163

That would only increase overall reliability and free up some space for NAND chips in the 2280 form factor.
I wonder how big the DRAM really needs to be, since you could just use POP packaging to hide it under the controller. I'll bet some SSDs with DRAM already do this.
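For a ballpark on "how big", the usual rule of thumb comes from the size of the FTL's logical-to-physical map. A sketch, assuming 4 KiB logical pages and 4-byte entries (real firmware varies, so treat this as an estimate, not a spec):

```python
# Rough sizing of the flash translation layer's mapping table, which is the main
# thing the DRAM holds: one 4-byte entry per 4 KiB logical page works out to
# roughly 1 GB of DRAM per 1 TB of NAND.
TIB = 1024**4  # using binary terabytes; decimal capacities land ~7% lower

for capacity_tb in (1, 2, 4, 8):
    entries = capacity_tb * TIB // 4096        # one entry per 4 KiB page
    table_bytes = entries * 4                  # assumed 4-byte physical address per entry
    print(f"{capacity_tb} TB drive -> ~{table_bytes / 1024**3:.2f} GiB mapping table")
```

So the "1-2 GB on-die" figure floated above would only cover drives up to about 2TB under these assumptions.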

DRAMless controllers do have a built-in buffer, but a paltry 32-64MB.
Is that DRAM or SRAM, though?

An HMB buffer in host memory will never be a proper replacement,
I'll bet that's mainly due to how high-latency PCIe tends to be. CXL is much more latency-optimized, though I'm not sure it would make HMB that much better.

The closer and faster the memory is to the controller, the more energy-efficient it is, which means critical operations can be completed faster by PLP circuitry - which is also not integrated into consumer drives, even though that is exactly where the reliability and stability of the power supply is most in doubt...
Some Crucial and Intel SATA drives (higher-end consumer models) had power loss protection capacitors built-in. I own some.

I think you simply can't have a HMB (Host Memory Buffer) drive with power-loss protection, because the host CPU could go kaput and take the host memory buffer with it.

Currently, the market is completely absurd for laptops (though not for desktops) - there are practically no competitors to the Hynix P31 Gold (the P41 Platinum is already too hot for the 4.0 bus). At the same time, the P31 is already dated at the controller level in terms of process node. On top of that, it is extremely scarce and can rarely be bought anywhere (especially outside the USA) with a 5-year warranty,
FWIW, the P31 Gold is currently available on Amazon (seller: SKHynix_USA). The 2TB model is going for $150 but has a $25 coupon. I bought one for $94, back during the SSD price crash of 2023.

I was curious whether you're right about it being unrivaled on power consumption, so I had a look around. I'm not sure these are the best examples, but I found two PCIe 4.0 drives that achieve close to the same average power consumption and trounce it on efficiency (i.e. due to being faster drives, overall). I've summarized the results in a table. Each drive name is a link to the corresponding review, from where I got the data.

Model | 50 GB Folder Copy Avg Power (W) | Max Power (W) | Efficiency
SK hynix P31 Gold (2TB) | 2.28 | 3.34 | 446.0
Samsung 990 Evo Plus (2TB) | 2.94 | 4.56 | 524.3
Silicon Power US75 (2TB) | 3.04 | 4.25 | 646.9

Although I listed the Max power, total heat output is going to be determined by the average power. On that front, they're not too far off. I think it's conceivable that they could at least equal the P31 Gold, if put in a PCIe 3.0 slot. I wonder if any laptop BIOS lets you force the slot into 3.0 mode.

I omitted the idle power stats, because I have it on good authority that the numbers with ASPM are all just a few mW and not worth worrying about. If you're concerned about laptop battery life, you'd enable ASPM (if the BIOS even gives you a choice - otherwise it'll be on by default).
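On that note, if you're on Linux it's easy to check what link speed a drive has actually negotiated and what the kernel's ASPM policy is. A minimal sketch using standard sysfs attributes (adjust "nvme0" for your system; some files may need root):

```python
# Check negotiated vs. maximum PCIe link speed for an NVMe SSD, plus the
# global ASPM policy, straight from sysfs.
from pathlib import Path

dev = Path("/sys/class/nvme/nvme0/device")   # symlink to the underlying PCI device
print("current link speed:", (dev / "current_link_speed").read_text().strip())
print("max link speed:    ", (dev / "max_link_speed").read_text().strip())

policy = Path("/sys/module/pcie_aspm/parameters/policy")
if policy.exists():
    # The bracketed entry is the active policy: default, performance, powersave, ...
    print("ASPM policy:       ", policy.read_text().strip())
```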

and buying any SSD without such a warranty is complete madness, given the probability of failure of a particular instance - 50/50.
I wonder where you get this stat, because I've never seen a SSD fail. I think I heard about a couple failures at work, in desktop machines well over 5 years old, but that's still a 95% or better reliability rate at drives with 5 years of use or more.

Of course, you can switch models like the 990 Pro into an energy-saving mode, but the big question is whether such a mode guarantees the drive will operate without surges in consumption (current) above the limits of the weak M.2 slots in ordinary office and business laptops. I have my doubts.
The launch review of the 990 Pro tested low-power mode. It doesn't seem to perform much worse, but also barely saved any power. Perhaps the most notable difference is the idle power (with ASPM disabled). See for yourself:

It would be good if there were more single-sided models from 2TB and the ability to set a mode at the level of their NVRAM that clearly limits peak consumption in order to fit within the limits for a specific series of laptops.
True. They have temperature sensors, although I doubt the throttling thresholds are configurable and that doesn't necessarily help you save battery.
 
  1. How fast are your steam downloads (approx. MB/s or gigabits/s, if you know; otherwise, general download speed)?
  2. Do you know which model SSD that is?
  3. Does your laptop have any airflow or other form of SSD cooling, in the spot where the SSD is located?
  1. I know it usually runs between 200-250 Mbit/s during downloads.
  2. It is a SAMSUNG 970 EVO 1TB SSD.
  3. No special airflow for the M.2 drive. I did add one of those thermal pads to keep heat flowing into the case.
 
Some Crucial and Intel SATA drives
Examples of such series? I know Crucial (without calling it PLP circuitry outright) mentioned such power protection in the datasheet for the MX500, for example, which unfortunately recently went EOL (even though global demand for a good 2.5" SSD series with a DRAM buffer, for use as a system disk in old but still perfectly serviceable hardware, is still huge) - but in reality, testers did not find any such circuitry on its boards.
Is that DRAM or SRAM, though?
I don't have that data, but judging by the datasheets it's obviously DRAM, not SRAM - for penny-priced controllers, that much SRAM would be an unaffordable luxury.

FWIW, the P31 Gold is currently available on Amazon (seller: SKHynix_USA). The 2TB model is going for $150 but has a $25 coupon. I bought one for $94, back during the SSD price crash of 2023.
Outside the US, stores like Amazon are pointless, because the warranty policy rules out a convenient, quick replacement under local consumer law. In the US there are direct representatives and you can file claims, including against the manufacturer, but not in other countries. So buying such goods on Amazon outside the US (and a few European countries) is essentially a gamble with huge risks, because the reliability of a single unit (and the odds that the goods arrive factory-sealed and undamaged in shipping) is 50/50. The cost of insured shipping, plus customs duties, kills any interest in shopping there for the sensible part of the population. That is why, from my own observations, I do not see this product being easily available off the shelf in most countries of the world. So the P31 Gold is definitely an extremely scarce product globally, at least with a working local 5-year warranty and minimal risk. And no other manufacturer in the big five simply has an analog - let alone on the 4.0+ bus for laptops.
I was curious whether you're right about it being unrivaled on power consumption, so I had a look around. I'm not sure these are the best examples, but I found two PCIe 4.0 drives that achieve close to the same average power consumption and trounce it on efficiency (i.e. due to being faster drives, overall). I've summarized the results in a table. Each drive name is a link to the corresponding review, from where I got the data.
The two drives you cited are low-end in terms of performance and are definitely not suitable as a system drive, since they do not have a DRAM buffer, which immediately affects performance consistency under mixed loads. And Silicon Power also has a bad reputation for batch-to-batch and series-to-series quality consistency. The 990 Evo is a very weak SSD, and clearly overpriced.
I omitted the idle power stats, because I have it on good authority that the numbers with ASPM are all just a few mW and not worth worrying about. If you're concerned about laptop battery life, you'd enable ASPM (if the BIOS even gives you a choice - otherwise it'll be on by default).
I clearly wrote that I am not particularly concerned about battery life, but there are obvious problems with support for both double-sided models in low-profile M.2 slots of office and business series, and the low power of such slots, which leads to spontaneous failures of hot series during peak power (current) surges. And these problems, according to customer reviews, are present even on a number of desktop boards, for example, with the KC3000 series, which at peak consumes up to 9.5 W and more than 10 W in the 4TB version. Not all M.2 slots are capable of consistently providing such power and the situation is especially deplorable in laptops, where architectural problems with constant overheating due to plainly inadequate cooling are added on top. That leads to far more failures of such hot series. And who is to blame for this, if not the manufacturers of the SSDs and of the laptops and desktop boards? The average consumer should not have to think about such nuances.

It's like how laptop manufacturers routinely lie to consumers about the presence of an HDMI 2.1 port, thanks to permissive rules from the HDMI consortium that allow that label to be stuck on laptops with a 2.0b port. Sometimes the lawyers force the marketers to append TMDS (meaning it's 100% a 2.0b port) or FRL, but even that does not indicate support for FRL6 (48Gbps). Should an ordinary consumer suffer because of this deliberate marketing chaff and deception? Then I see cries on forums - "where are my 48Gbps?" - when a buyer tries to connect the latest display hardware, because the manufacturer wrote "2.1" on the port.

Well, there is approximately the same hellish, deplorable situation with M.2 slots on laptops and even desktop motherboards - the slot power limit and maximum current are not indicated anywhere. Nor is the ability to accept double-sided models, or the maximum cooling capacity available (in laptops) without an additional heatsink.

I wonder where you get this stat, because I've never seen a SSD fail. I think I heard about a couple failures at work, in desktop machines well over 5 years old, but that's still a 95% or better reliability rate at drives with 5 years of use or more.
I have several failed SSDs at home, and some still under warranty that I haven't gotten around to yet. For example, I had a problem with one - for various reasons it wasn't encrypted, but it stored operational backups and a cache of my most important private data. It failed last year, but switched into read-only mode, which means I can't return it under warranty until I resolve the legal question of destroying the disk, or at least erasing my private information, before handing it over to the seller (and the seller to the manufacturer). Given the gaps in local legislation, that makes for a rather complicated lawsuit against the seller. The existence of a warranty case is not in dispute and the seller is obliged to refund my money, but the law requires returning the defective product, and for storage devices the return procedure has a legislative gap.

SSD failures are common. For example, Samsung has a typical defect rate of about 0.5% in batches according to statistics from large retail chains. That is, roughly every 150th-200th buyer ends up with a warranty claim. AData, for example, has defect rates of up to 5% per batch (especially in the cheap series), and reviews roughly bear that out, i.e. about every 20th buyer faces an SSD failure - almost 10 times higher than Samsung. And so on. You can also recall 2020-2021, when Samsung had mass defects in the 870 Evo series. Or look up 990 Pro failures. Or how one person who bought the 980 Pro (this was in the media) tried to get his money back while refusing to return the disk, because, as in my case, it contained unencrypted private data. In the end, Samsung allowed him to destroy the disk on camera, but only after a public scandal; before the outcry, they refused him warranty service unless he returned the disk.

Therefore, your statement that SSDs are very reliable is a priori incorrect. You are simply not as experienced as I am in many aspects, including legal issues of this kind.


The launch review of the 990 Pro tested low-power mode. It doesn't seem to perform much worse, but also barely saved any power. Perhaps the most notable difference is the idle power (with ASPM disabled). See for yourself:
I wrote about this above, succinctly and clearly. If there is no hard limit on power (current), such a mode is a priori useless for laptops. Since the 990 Pro in that mode shows no real drop in peak power and current to within the limits of the M.2 slots of a number of laptop models, there is no use for it - you simply cannot put the drive there, despite it being single-sided.

They have temperature sensors, although I doubt the throttling thresholds are configurable and that doesn't necessarily help you save battery.
Let me emphasize once again: in the perspective I clearly described, this is not about battery life. It is about whether such series can work in mass-market laptops at all - they cannot work there normally because of high consumption, because of extreme heating without heatsinks that cannot be installed there, and because of peak power (current) beyond what the M.2 slots can supply. And this is a 100% mass-market problem: the impossibility of installing a new, high-performance 2-4-8 TB drive with a DRAM buffer in place of the slow but fairly cool DRAMless drive chosen by the laptop manufacturer. For now the P31 Gold is the only drive of this kind with a DRAM buffer on the planet, a unique series made specifically for laptops. Samsung does not even try to make such a series, while controlling more than a third of the world SSD market, and it is very sad that it does not care about users of mass-market laptops.

SSD failures in laptops from attempts to install such series there (in the absence of choice, as described in detail above) are much more common than in desktops, since desktop users usually fit either a separately purchased heatsink or the factory one from the motherboard, and there is always the option of effective forced airflow over such slots, including a case mod that is easy and cheap to do yourself. But even in desktop slots with such airflow, owners often complain about problems with overheating. And the JEDEC standard does not recommend exceeding the temperature on NAND chips above 35C constantly, otherwise data retention time starts to drop exponentially - and the higher the wear, the faster read speed and retention time fall...

Why do laptops list nonsense like dGPU power for gamers, yet fail to specify the genuinely important electrical and mechanical parameters for mainstream working configurations? That is abnormal and absurd.

For example, why don't they specify PL2/PL1(Intel) and FPPT/SPPT/SPL(AMD) for processors + igpu? At the same time, each laptop has a different level of cheating from manufacturers in terms of consumption. And a naive buyer taking another laptop with, for example, the advertised i9 14900HX does not even suspect that in 2 different laptops, its performance can actually differ (as in the case of dgpu with different TDP) up to 1.5 times or more. Why didn't Intel/AMD oblige all laptop manufacturers to specify these factory limits (taking into account the installed cooling system and its possible limits), but NVidia obliged them all to do this? What is the reason for such double standards? And the same goes for ports - their power and capabilities are always a technological fog for buyers. Try to find out the maximum current for usb-a/c ports and capabilities. Manufacturers often lie brazenly, and competent people like me can clearly see the deception in the datasheets for laptops and their components.

A simple recent example with Lenovo laptops using the new AM(OLED) panels: they declared DisplayHDR True Black 400-1000 support in a number of models, but in the same description listed a maximum contrast of 100k:1. That alone shows the DisplayHDR True Black 400+ badge is fake (for example, the new Yoga series with a fake DisplayHDR True Black 1000 badge lists a contrast of only 100k:1 instead of a minimum of 2M:1), since the "True Black" part of the badge requires a maximum black level of 0.0005 nits, which at 400 nits of brightness means a minimum contrast of 800k:1. In other words, the real black level of Lenovo's new AM(OLED) screens is 8 times worse than the minimum tolerance of the standard they cite with that badge in the psref. How many buyers of these models understand this? Apparently, so far I am the only one...
 
The two drives you cited are low-end in terms of performance and are definitely not suitable as a system drive, since they do not have a DRAM buffer, which immediately affects performance consistency under mixed loads. ... The 990 Evo is a very weak SSD, and clearly overpriced.
The Samsung 990 Evo Plus is what I linked, not the regular Evo. In the benchmark I linked, it performed reasonably well and the reviewer mostly just complained about the price, but said:

"It’s competitive in terms of performance and power efficiency and has no glaring weaknesses. It’s single-sided, is offered at up to 4TB, and has the upgraded pSLC cache of the 990 Pro without having the usual large-cache drawbacks. If we had to point out any weaknesses, it’s that the drive arrives a little late and the current pricing is a little high — though at least it's lower than Samsung's 990 Pro on the 2TB model. We can safely recommend it for use in any sort of system whether laptop, desktop, or console, although there are less expensive options out there in most cases."

https://www.tomshardware.com/pc-components/ssds/samsung-990-evo-plus-ssd-review/2

there are obvious problems with support for both double-sided models in low-profile M.2 slots of office and business series, and the low power of such slots, which leads to spontaneous failures of hot series during peak power (current) surges. And these problems, according to customer reviews, are present even on a number of desktop boards, for example, with the KC3000 series, which at peak consumes up to 9.5 W and more than 10 W in the 4TB version.
All drives I linked are single-sided. In my table, I included max power from the linked reviews. The Samsung Evo Plus got the highest, at just 4.56 W. Unlike the P31 Gold, that was at PCIe 4.0 speeds.

Not all M.2 slots are capable of consistently providing such power and the situation is especially deplorable in laptops
Then they're nonconformant. According to the spec, slots should provide up to ~15W.
Are there badly-designed laptops out there? I can believe that.

I think what NVMe probably needs (if it doesn't already have) is a scheme for negotiating power consumption, similar to USB. The host can then limit how much power the SSD pulls, not only based on its ability to supply the slot with power, but also things like its thermal management and overall power management scheme.
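For what it's worth, NVMe does already expose per-drive power states that the host can select (via Set Features / APST), which gets part of the way there. Here's a rough sketch of how you might dump what a drive advertises on Linux; the nvme-cli JSON field names (psds, max_power) are my assumption and may vary by version:

```python
# A sketch, not a definitive tool: list the power states an NVMe controller
# advertises. max_power is normally reported in 0.01 W units.
import json, subprocess

out = subprocess.run(["nvme", "id-ctrl", "/dev/nvme0", "-o", "json"],
                     capture_output=True, text=True, check=True).stdout
ctrl = json.loads(out)

for i, ps in enumerate(ctrl.get("psds", [])):
    print(f"PS{i}: max ~{ps.get('max_power', 0) / 100:.2f} W")
```

What's missing is the host-side half: a standard way for the platform to tell the drive "this slot can only supply X watts," which is the USB-like negotiation I had in mind.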

there is approximately the same hellish, deplorable situation with M.2 slots on laptops and even desktop motherboards - the slot power limit and maximum current are not indicated anywhere.
Similar to the way the PCIe add-in card spec dictates that an x16 card can draw up to 75W from the slot, the M.2 specification allows for 15W. If the slot can't do that, then it's nonconformant. Simple as that. This simplicity takes a lot of headaches out of trying to figure out whether a card or SSD is compatible with a given motherboard. If it has the slot and your PSU is big enough, then that's all you need to worry about (electrically). Cooling is obviously another matter, but that's not primarily in the control of the motherboard maker.

SSD failures are common. For example, Samsung has a typical defect rate of about 0.5% in batches according to statistics from large retail chains.
It sounded like you were saying the chance of failure at 5 years was 50%, which is what I take issue with. If you're now saying it's a couple orders of magnitude less, I can see that.

Or look up 990 Pro failures. Or how one person who bought the 980 Pro
Yeah, the firmware bug in the 990 Pro was indeed quite bad, but caught quickly after launch. I own two 990 Pros with zero problems, but I bought them in 2023 and 2024, making sure to update the firmware before I put them in service.

IMO, the problem with the 980 Pro was far worse, given how long the drives were being sold before it came to light. I'm glad I dodged that bullet. For a long time, I avoided Samsung SSDs, as it seemed their reliability wasn't quite as good as Crucial or Intel's. It's only in the past few years that I've started using them.

Therefore, your statement that SSDs are very reliable is a priori incorrect. You are simply not as experienced as I am in many aspects, including legal issues of this kind.
Reliability is about aggregates. In this case, only statistics matter - not anecdotes (no matter how lurid or absurd, even if true). Anyway, I take no issue with your more recent statements about SSD reliability. So, let's move on.

I wrote about this above, succinctly and clearly. If there is no hard limit on power (current), such a mode is a priori useless for laptops. Since the 990 Pro in that mode shows no real drop in peak power and current to within the limits of the M.2 slots of a number of laptop models, there is no use for it - you simply cannot put the drive there, despite it being single-sided.
I never made any claim about the 990 Pro's suitability for laptops, in any mode. You just raised questions about its low-power mode, so I thought you might like to see some actual data on that. Not every part in my replies is meant to be a point of dispute.

even in desktop slots with such airflow, owners often complain about problems with overheating.
Dissipating even the 12W actually generated by the fastest PCIe 5.0 SSDs really shouldn't be that hard, as long as it's got a decent heatsink and your case has decent airflow. All of the high-performance PCIe 5.0 drives I've seen either include a heatsink or a warning that you must use it with one.

And the JEDEC standard does not recommend exceeding the temperature on NAND chips above 35C constantly,
Really? Can you please tell me where to find this statement?

I have seen tables showing data retention and cell wear-out rates as a function of temperature, but it's both dependent on a specific generation of NAND flash (i.e. provided by the NAND manufacturer - nothing JEDEC dictates) and the stable long-term average temperature tends to be considerably higher than 35 C. That's awfully low, for such a limit. Not just in laptops, but even PC cases with decent airflow can exceed that without much trouble, on a warm day.

For example, why don't they specify PL2/PL1(Intel) and FPPT/SPPT/SPL(AMD) for processors + igpu? At the same time, each laptop has a different level of cheating from manufacturers in terms of consumption.
I'm curious what you can provide as a source for the Intel claim, because I thought PL1/PL2 was supposed to be package power. I have a N97-based mini-PC and if I hit the CPU cores + iGPU with a heavy workload, the amount of power it's drawing at the wall can go as high as 60 W, in spite of it claiming the CPU package power is only 1/4th as much. So, if you can confirm that PL1/PL2 don't apply to iGPU, I'd appreciate that.

The only other thing that can be using variable amounts of power is the dual-rank SO-DIMM of DDR5-4800, but there's no way it can make up such a difference. So, I suspect you're right. I'd just like to see some official or independent confirmation of that, if you're aware of any.
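In the meantime, a minimal sketch of how I'd compare the two numbers myself on Linux, using the standard powercap (RAPL) interface. Whether the iGPU is counted inside this package domain is exactly the open question, but at least it lets you line the reading up against a wall meter (domain numbering varies, and reading energy_uj may need root on recent kernels):

```python
# Sample the CPU package energy counter over a window and convert to watts,
# then compare against a wall-power meter; the gap is VRM/PSU losses, DRAM,
# SSD, fans, and anything RAPL's package domain doesn't cover.
import time
from pathlib import Path

zone = Path("/sys/class/powercap/intel-rapl:0")      # usually the package domain
name = (zone / "name").read_text().strip()

def energy_uj() -> int:
    return int((zone / "energy_uj").read_text())

e0, t0 = energy_uj(), time.time()
time.sleep(5)                                        # sample window
e1, t1 = energy_uj(), time.time()

watts = (e1 - e0) / 1e6 / (t1 - t0)                  # microjoules -> watts
print(f"{name} average power over {t1 - t0:.1f}s: {watts:.1f} W")
```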

a naive buyer taking another laptop with, for example, the advertised i9 14900HX does not even suspect that in 2 different laptops, its performance can actually differ (as in the case of dgpu with different TDP) up to 1.5 times or more.
A naive buyer should look at reviews of specific models of laptops and just buy one of those exact configurations they see reviewed. That would be my advice to them. You're right that there are too many variables laptop makers can fiddle with, for buyers to be able to reliably guess much about how different configurations will perform and behave, across different brands and specs. It's not a nice problem to have, but what else can you do?

Why didn't Intel/AMD oblige all laptop manufacturers to specify these factory limits (taking into account the installed cooling system and its possible limits),
Because laptop makers want the ability to differentiate their products or reuse parts between different models or from one generation to the next. Competition between Intel and AMD means that neither company has the ability to be a dictator and enforce rigid standards like you seem to want.

but NVidia obliged them all to do this?
GPUs can use a lot more power than CPUs, especially if you look back further in history. So, the need is clearly understood. Furthermore, Nvidia own the dGPU market in a way that gives them leverage to dictate things to their partners. If AMD, Intel, etc. were more competitive in this market, we might see a similar amount of flexibility and bending of rules & guidelines.

Try to find out the maximum current for usb-a/c ports and capabilities.
At least they're not bending the standard, in this case. The standard properly allows for this flexibility, so long as the port accurately reports its real capabilities to the connected devices.
 
The Samsung 990 Evo Plus is what I linked, not the regular Evo
It is no better, because it quickly sags in mixed performance and behaves poorly under heavy mixed load. In sequential writes it is significantly slower than the Pro series once outside the pSLC cache. And the price difference (at least outside the US and Europe) between this series and the 990 Pro is such that there is no point in buying the 990 Evo series - it is easier to pay a little extra for the 990 Pro.

The Samsung Evo Plus got the highest, at just 4.56 W. Unlike the P31 Gold, that was at PCIe 4.0 speeds.
That is not the right data to look at, because the figures given are averages. There is no data on the peak power (current) surges in the circuits, which are what lead to spontaneous failures (and a growing unexpected-power-loss counter in SMART) on laptops and motherboards that, as you put it, do not comply with the M.2 slot standard.

Unlike the P31 Gold, that was at PCIe 4.0 speeds.
The P31 is made for the 3.0 bus. Moreover, there is direct evidence in online reviews that this series works in the M.2 slots of Chinese mini-PCs, while series like the KC3000 refuse to work there even though the slots are version 4.0. After further inquiries to the Chinese vendors, it turned out that those slots cannot provide more than 7-8W, or 10W in other models, which only confirms the problem and the absence of any competitor on the planet to the P31, which is in massive shortage - otherwise it would be sitting freely on the shelves of every country with a 5-year warranty, like the hot drives from other manufacturers.

Then they're nonconformant. According to the spec, slots should provide up to ~15W.
Please provide the source of your data with a link to the official M.2 slot standard datasheet. I have not seen such data online. Or I could not find it quickly enough.

If there is an official standard and laptop manufacturers (or desktop motherboard manufacturers) clearly violate it, that is direct grounds for consumer lawsuits. In that case they would not even need to state the power in the laptop or motherboard datasheet, since by declaring support for an M.2 2280 slot they are obliged to comply with the standard - just as they are obliged to provide backward compatibility with previous PCIe generations, as long as the lane count matches and the power supply is adequate (and all consumer M.2 drives fit within 15W, as you claim).
If the slot can't do that, then it's nonconformant. Simple as that. This simplicity takes a lot of headaches out of trying to figure out whether a card or SSD is compatible with a given motherboard. If it has the slot and your PSU is big enough, then that's all you need to worry about (electrically). Cooling is obviously another matter, but that's not primarily in the control of the motherboard maker.
Tell that to the owners of laptops where double-sided models do not fit (the clearance above the slot should also be standardized) and where high-power series refuse to work even when they do fit and even when they are single-sided. If a buyer does not think about the lies of laptop and motherboard manufacturers, he will certainly land in a bad situation later when he simply tries to replace the disk with a newer, faster one - because, in your view, he should not have to think about this at purchase time, since everyone is obliged to comply with the standard. And where is it written that they are obliged to comply, and what sanctions follow if they deliberately and provably do not?

Let me also remind you that supporting 15W per M.2 slot already means 30W for just 2 slots, while a number of laptop models ship with a 65W PSU. For example, the Acer Aspire 5 57 series has 2 slots, and the processor alone can consume up to 60W at peak. Obviously this is completely non-compliant with the standards, and the PSU there should clearly be at least 120W, especially considering there is a TB4 port, which itself is supposed to support powering external devices at up to 100W (according to the marketing).

100 + 30 + 60 + 40W for everything else - around 230W minimum, even for a cheap office laptop. Isn't that right?
It sounded like you were saying the chance of failure at 5 years was 50%
Why do you always distort the essence of my comments? I have written that the probability of failure of one unit is always 50/50 for the owner. Do you have evidence to the contrary? This is exactly why a 5-year warranty saves, because advertised parameters like MTBF (or AFR) are useless for retail buyers (by the way, Samsung's MTBF, at 1.5M hours, is lower than other manufacturers' 2M - if you believe those figures at all). Buyers care about whether their particular unit will fail or not, and that is always a 50/50 probability: either it fails or it doesn't.

Statistics from retail chains show that the reliability of Samsung drives per batch (a few notorious failures in the past aside) is, in general, much higher than the market average. That is exactly why they sell at a premium, and exactly why Samsung holds more than 30% of the NAND and SSD market. But that does little for a buyer with 1 unit - only a warranty solves the problem of a failure.
Anyway, I take no issue with your more recent statements about SSD reliability.
Do you admit that SSDs are not as reliable as you claimed earlier, taking your own anecdotal statistics into account? And that a conveniently serviceable 5-year warranty, in the place where you actually live (which is where my angle on this topic began), is of decisive importance? And so my statement about the worldwide shortage of cool (low-power) yet high-performing SSD series for those same laptops is completely obvious, especially in the context of laptop models deliberately built so that they do not meet the M.2 slot standard of at least 15W of power - a standard you have still not linked to?
I never made any claim about the 990 Pro's suitability for laptops, in any mode. You just raised questions about its low-power mode, so I thought you might like to see some actual data on that. Not every part in my replies is meant to be a point of dispute.
I didn't accuse you of making such a statement. I just emphasized that this series, despite its single-sided design (which lets it physically fit into office laptops with low-profile slots - again, is that clearance even in or out of the M.2 slot standard?), may simply not work reliably in a number of such models because of increased consumption (current) at peak moments. And this is not specified anywhere for the average, naive consumer. I saw all the reviews long ago. There was no point of dispute here, just my statement based on empirical experience.
Dissipating even the 12W actually generated by the fastest PCIe 5.0 SSDs really shouldn't be that hard, as long as it's got a decent heatsink and your case has decent airflow. All of the high-performance PCIe 5.0 drives I've seen either include a heatsink or a warning that you must use it with one.
In the context of my story (if you recall it), there was no talk of 5.0; it was about problems with 4.0 drives, even under the stock heatsinks of desktop motherboards - according to owners' reviews, they often overheated well above the levels those owners considered acceptable for reliable long-term use. Not all of them had a 5-year warranty, and not in all countries. But the price was always high enough for people to worry about it...

Really? Can you please tell me where to find this statement?
This follows from the JEDEC standard for consumer drives, which assumes the ambient temperature does not exceed 35C and the drive is powered on for no more than 8 hours per day. It is under those conditions that the standard obliges manufacturers to ensure data retention for at least 1 year at 99% of the rated warranty endurance (I emphasize - the warranty endurance, not the physical one, which has 2 more levels beyond it).

For example, the firmware of Samsung drives does not track the warranty endurance but the second-level endurance, which accounts for WA (write amplification).
Samsung, for example, guarantees 600TBW on its 1TB drives. Per the JEDEC standard that means up to 35C, 8 hours a day, with at least 1 year of data retention at 99% wear (stable peak read speed is NOT guaranteed by the standard after that storage time - even if it drops 10x, as it does on many flash drives after 3 years at only 0.5% wear...).
In SMART, wear is reported relative to that second level of 1500 cycles, already including the WA factor, and in practice that is significantly more than 600TBW (1PBW+) - but that is NOT the guaranteed endurance; beyond the rating, data retention time becomes unpredictable.

For example, Chinese drives with YMTC chips have recently taken to declaring nonsense like 5050 cycles for 3D TLC. In reality, their flash - especially the low grades - does not survive even 10-20 cycles in the assorted garbage second-rate series piled up in bulk across online bazaars, yet much of the planet's poorer population buys into it easily. I have also noticed that, under pressure from this trend, a clearly fraudulent habit of inflating the rated write endurance has spread to disks from fairly well-known second-tier manufacturers and even first-tier ones like ADATA. Obviously this is the same brazen cheating Intel resorted to with processor power consumption - before moving to TSMC's fabs after its own process failures - once it could no longer beat AMD on merit...

I'm curious what you can provide as a source for the Intel claim, because I thought PL1/PL2 was supposed to be package power. I have a N97-based mini-PC and if I hit the CPU cores + iGPU with a heavy workload, the amount of power it's drawing at the wall can go as high as 60 W, in spite of it claiming the CPU package power is only 1/4th as much. So, if you can confirm that PL1/PL2 don't apply to iGPU, I'd appreciate that.
These power parameters - in burst (short-term) mode and in sustained mode - have long been known to everyone, as have the analogous modes for AMD, the only difference being that AMD's PL1 analog kicks in a bit later, after the intermediate SPPT stage; in effect PL1 corresponds to AMD's SPL. Any review above the primitive level necessarily lists these levels (for AMD, to simplify the picture, they are often also called PL1/PL2, though I noted the nuances above), and they can easily be determined by checking the laptop under load. But manufacturers deliberately hide these numbers from buyers, just as they used to deliberately hide the TDP of Nvidia/AMD dGPUs until order was forcibly imposed. What stops the Intel/AMD duopoly from forcing laptop manufacturers to state these parameters in the specifications, as was done for dGPUs? Nothing except a desire to cheat together with the laptop makers. After all, it would be impossible to sell a laptop that is officially 1.5 times slower than another laptop with exactly the same processor for comparable money. Yet for some reason nobody cares, even though this hits the business environment far harder than it hits silly games, where order was restored quickly!
A naive buyer should look at reviews of specific models of laptops and just buy one of those exact configurations they see reviewed. That would be my advice to them. You're right that there are too many variables laptop makers can fiddle with, for buyers to be able to reliably guess much about how different configurations will perform and behave, across different brands and specs. It's not a nice problem to have, but what else can you do?
And if there are no reviews? And why does a naive buyer owe you anything? In my view, it is the buyer who is owed something, since he is the one paying. Isn't that right? You've turned everything upside down and forgotten who is in charge here - and the buyer is unquestionably in charge. The buyer is simply being deceived, in the most fraudulent way, about real consumption levels in burst mode and in sustained multi-threaded mode, and therefore about the real performance of the same chip in different laptops. Most technically illiterate buyers go by the processor badge, completely unaware that the performance difference between laptop (or mini-PC) models can be huge. Then there is the noise factor of the cooling systems, which muddies the choice further, producing the absurd situation where one laptop model is both faster and quieter than another with formally the same processor. And if we recall your statement about the key influence of RAM latency, there are even more nuances...

Try, right now, to point me to a model suitable for real-time audio and video work. I know the realities of this market.

Because laptop makers want the ability to differentiate their products or reuse parts between different models or from one generation to the next. Competition between Intel and AMD means that neither company has the ability to be a dictator and enforce rigid standards like you seem to want.
All of these claims are nonsense, contradicted by your own statements and reasoning above, as well as by mine.

GPUs can use a lot more power than CPUs, especially if you look back further in history. So, the need is clearly understood. Furthermore, Nvidia own the dGPU market in a way that gives them leverage to dictate things to their partners. If AMD, Intel, etc. were more competitive in this market, we might see a similar amount of flexibility and bending of rules & guidelines.
Again nonsense - 14900HX in laptops can officially consume up to 150W, which is clearly comparable with older dgpu models. Even the fastest laptop processor on the planet, the 7945HX, can consume up to 120W in real models, alongside a companion 4050 with a peak consumption of 100-110W - for which every manufacturer, without exception, clearly states the TDP in the specifications. But not for the 14900HX or the 7945HX - which, even at 100-110W, is faster than the 14900HX at 160-170W, as real tests have proven...

At least they're not bending the standard, in this case. The standard properly allows for this flexibility, so long as the port accurately reports its real capabilities to the connected devices.
Naturally - if the standard is fraudulent from the start, as with HDMI ports and the like. You will not find any minimum requirements for Type-A or Type-C ports in the specifications, only recommendations: 0.45A for -A and 0.95A for -C. That's it; for the rest, the consumer has to dig through forums at length and in vain (if he has the time and sufficiently developed critical thinking - the mass absence of which is another problem), or ask pointless questions of the manufacturers' "tech support", who either ignore them or answer with canned templates devoid of specifics, because the company's lawyers have forbidden anything more, to avoid mass class-action lawsuits over the obvious discrepancies between mass-produced batches and the marketing assurances (which clearly declared specifications would expose)...

I have been through all this personally for years; I help consolidate thousands of buyers on various technical forums, including with legal advice, and I have sent hundreds of inquiries to different manufacturers myself, most often receiving no answer to the substance of the questions asked. Sometimes, after such inquiries, they quietly changed the user manuals and the data in them (retroactively swapping the characteristics in supposedly already-published documents, without changing the revision date), realizing after my explanations that they were in a difficult legal position. Without any sort of apology, of course...

That is why public standards, and enforcement that compels manufacturers to comply with them, are so important. The market abhors a vacuum: if someone refuses to produce under such conditions, others will quickly take their place - provided the standards are adequate, written with common sense, and leave no room for fraud. For decent, conscientious manufacturers, complying with such standards is clearly not a problem. It is an obstacle only for outright scammers, who can sell a given product only by keeping its real characteristics as quiet as possible...
 
It is no better, because it quickly sags in mixed performance and behaves poorly under heavy mixed load. In sequential writes it is significantly slower than the Pro series once outside the pSLC cache. And the price difference (at least outside the US and Europe) between this series and the 990 Pro is such that there is no point in buying the 990 Evo series - it is easier to pay a little extra for the 990 Pro.
The professional reviews I've seen on the Evo Plus say it's a solid all-around drive. If you need something that's lower power and more efficient than the 990 Pro, it clearly fits that description. As far as I can tell, it's just you who's saying it's bad.

P31, which is in massive shortage - otherwise it would be sitting freely on the shelves of every country with a 5-year warranty, like the hot drives from other manufacturers.
Solidigm announced they've left the consumer SSD market. Presumably, that extends to SK hynix, though I think they didn't explicitly say.

Please provide the source of your data with a link to the official M.2 slot standard datasheet. I have not seen such data online. Or I could not find it quickly enough.
I did link my source. I do not have access to the official M.2 specification, since I'm not a member of the PCIe SIG.

Let me also remind you that supporting 15W per M.2 slot already means 30W for just 2 slots, while a number of laptop models ship with a 65W PSU. For example, the Acer Aspire 5 57 series has 2 slots, and the processor alone can consume up to 60W at peak. Obviously this is completely non-compliant with the standards, and the PSU there should clearly be at least 120W, especially considering there is a TB4 port, which itself is supposed to support powering external devices at up to 100W (according to the marketing).

100 + 30 + 60 + 40W for everything else - around 230W minimum, even for a cheap office laptop. Isn't that right?
Perhaps they're expecting the laptop battery to kick in some current, when the AC power supply has been exceeded.

Why do you always distort the essence of my comments?
I try to interpret your comments as I understand them.

I have written that the probability of failure of one unit is always 50/50 for the owner.
That doesn't match my experience, observations, or what most others seem to experience. If I had a 50% failure rate of my SSDs, I would still be using hard disks!

Do you have evidence to the contrary?
Have a look at this:


This is exactly why a 5-year warranty saves,
Warranties are priced according to failure probability. If SSDs were as unreliable as I think you're saying, then drives with 5 year warranties would be way more expensive than drives with shorter warranties.
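To put some rough numbers on that, here's a sketch using the MTBF figures you quoted earlier (1.5M vs 2M hours) and assuming a constant failure rate, which is a big simplification but shows the scale:

```python
# Back-of-envelope: what the quoted MTBF figures imply for a single drive,
# assuming a constant (exponential) failure rate.
import math

HOURS_PER_YEAR = 8766

for mtbf_hours in (1.5e6, 2.0e6):
    afr = HOURS_PER_YEAR / mtbf_hours                       # annualized failure rate
    p5 = 1 - math.exp(-5 * HOURS_PER_YEAR / mtbf_hours)     # chance of failing within 5 years
    print(f"MTBF {mtbf_hours/1e6:.1f}M h -> AFR ~{afr:.2%}, ~{p5:.1%} over 5 years")
```

Either way it works out to a few percent over a 5-year warranty period, which is broadly consistent with the retail defect-rate figures you cited and nowhere near a coin flip.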

This follows from the JEDEC standard for consumer drives, which assumes the ambient temperature does not exceed 35C and the drive is powered on for no more than 8 hours per day. It is under those conditions that the standard obliges manufacturers to ensure data retention for at least 1 year at 99% of the rated warranty endurance (I emphasize - the warranty endurance, not the physical one, which has 2 more levels beyond it).
How do you know this? Where did you read it?

Again nonsense - 14900HX in laptops can officially consume up to 150W, which is clearly comparable with older dgpu models.
I said "historically". In laptops, CPUs don't have a very long history of using so much power. Anyway, that was just conjecture, on my part. Feel free to disregard it.
 
I did link my source. I do not have access to the official M.2 specification, since I'm not a member of the PCIe SIG.
Luckily I am (well, my employer is), and can confirm it's about 15W. M.2 uses 3.3V power, 0.5A per pin. The M.2 Socket 3 (M key, the type used for PCIe M.2 SSDs) uses 9 power pins. So 3.3 * 0.5 * 9 = 14.85 W.

That's continuous; peak is up to 1A/pin for <100 microseconds.

Edit: Looks like there's another version of the connector too, M.2-1A, rated for 1A/1.2A continuous/peak. Didn't know that was a thing, no idea if/where it's being used.
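Putting the same numbers in one place, a quick sketch using the figures above (the M.2-1A lines assume the same 9-pin count, which I haven't verified against the spec):

```python
# Continuous vs. peak power budget for an M.2 slot on the 3.3 V rail,
# per the pin ratings quoted in the post above.
V = 3.3
PINS = 9

for label, amps_per_pin in [("standard, continuous", 0.5),
                            ("standard, <100 us peak", 1.0),
                            ("M.2-1A, continuous (assumed 9 pins)", 1.0),
                            ("M.2-1A, peak (assumed 9 pins)", 1.2)]:
    print(f"{label:36s}: {V * amps_per_pin * PINS:.2f} W")
```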
 
As I mentioned, I have a couple SK hynix P31 Gold SSDs. A 500 GB model is currently installed in a mini-PC that I mostly use as a headless media server. After its CPU has been idling for a good while, I checked and the temps I'm getting (as reported by Linux's sensors utility) are:
  • CPU cores & package: 29 C
  • NVMe sensor 1: 34.9 C (probably one of the NAND chips)
  • NVMe sensor 2: 37.9 C (probably the controller)

Now I'm pretty sure ASPM is disabled, since this is an industrial motherboard and it would be weird for them to force it on (I didn't see a BIOS setting for it). Still, according to the review I linked (see page 2 of that article), the 500 GB P31 Gold idles at just 348 mW with ASPM disabled.

The SSD is getting some decent amount of airflow, although it's right next to the N97 CPU. However, when the CPU package is 29 C, there's no way much waste heat is hitting that SSD. The ambient room temperature is just above 20 C, near the floor where it's sitting.

So, the notion that SSDs shouldn't be hotter than 35 C, or that such a thing is even typical of the P31 Gold seems pretty far out-of-touch with reality!
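For anyone who wants to log these values over time rather than eyeballing sensors, here's a minimal sketch reading the same data from sysfs (assumes the kernel's nvme hwmon driver, available since roughly 5.5; label names vary by drive):

```python
# Poll NVMe temperatures straight from the hwmon interface - the same numbers
# that `sensors` reports, no extra tools required.
from pathlib import Path

for hwmon in Path("/sys/class/hwmon").glob("hwmon*"):
    if (hwmon / "name").read_text().strip() != "nvme":
        continue
    for temp_input in sorted(hwmon.glob("temp*_input")):
        label_file = hwmon / temp_input.name.replace("_input", "_label")
        label = label_file.read_text().strip() if label_file.exists() else temp_input.name
        temp_c = int(temp_input.read_text()) / 1000          # reported in millidegrees C
        print(f"{label}: {temp_c:.1f} C")
```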
 
If you need something that's lower power and more efficient than the 990 Pro, it clearly fits that description. As far as I can tell, it's just you who's saying it's bad.
The 990 Evo Plus is sold at retail at a fraction of the price of the 990 Evo, which has been branded on all the review sites as a bad and extremely overpriced SSD. The 990 is sold almost nowhere precisely because it is more expensive than the Evo and already extremely close to the Pro - what is the point of ordering batches of it if buyers will still choose the Pro or the Evo (by the way, the Evo is almost never sold in our local retail chains - no one is interested in it, it is too expensive - and the Plus is almost absent anywhere with a reliable warranty).

In addition, Samsung has modestly kept quiet about peak consumption on its website (the average figure is useless), unlike with the 990 Pro. So it is unknown whether this series will work stably in the current-limited M.2 slots of office laptops. And a disk without a DRAM buffer - I remind you again - is the worst choice for a system disk, and in a laptop that is often the only slot there is.
And most importantly, the 990 Evo Plus uses a 2-lane 5.0 interface, which immediately undermines its suitability for laptops with 4.0 (99% of the market) or 3.0 slots. So this example is of very little use in laptops, especially as a system drive. Apart from the P31, there is essentially no choice of DRAM-buffered SSDs for the 4.0 bus. Why did Samsung do this? Apparently its marketers did not want this model going into old laptops, pushing people to upgrade to new ones instead - but the new ones will have x4 slots at 5.0, so who would want a 2-lane 5.0 drive, with what amounts to 4.0-class performance, as a system drive?
++++++
Perhaps they're expecting the laptop battery to kick in some current, when the AC power supply has been exceeded.
So you think that when the 65W PSU is not enough, the laptop will take 140W from the battery? That sounds like fantasy, although I know plenty of models where the makers quietly pulled this mean trick on buyers - even "gaming" ones, where it rapidly kills the battery. Then again, I don't know of any laptop battery that can deliver more power than the PSU stably and for long periods...

Thus, I have logically shown that the M.2 slots in many laptop models, especially office ones, are designed from the start in deliberate violation of the M.2 standard, and nothing about this appears in their specifications - which, by the way, could genuinely lead to class-action lawsuits against the manufacturers. But so far, lawyers do not see enough of a payoff in it...

That doesn't match my experience, observations, or what most others seem to experience. If I had a 50% failure rate of my SSDs, I would still be using hard disks!
You are talking nonsense again. I never wrote that the probability of your drives failing is 50%. I wrote that for a single drive it is always 50/50: it either fails or it doesn't. Only the warranty saves you. You cannot predict whether you will be unlucky enough to land in even the 0.5% of defective units in a batch from Samsung, the best of them - and that is the best case according to retail chains' statistics (WD is slightly worse). With other manufacturers, especially second-tier ones, it is several times or even an order of magnitude worse (5-10% of a batch, or more). So buying an SSD without a 5-year warranty, especially an expensive 2TB+ model, is complete madness. 50/50 for a single unit - always remember that.

How do you know this? Where did you read it?
JEDEC's spec for consumer SSDs.

I said "historically". In laptops, CPUs don't have a very long history of using so much power. Anyway, that was just conjecture, on my part. Feel free to disregard it.
Intel started playing games with power consumption more than 5 years ago. Nvidia later forced everyone to publish the TDP of its laptop GPUs. What stopped the Intel/AMD oligopoly from doing the same, so that there would be no cheating on power consumption across different laptops? Apparently they did not care about the reputational damage the way the "monopolist" Nvidia did. Don't you find that strange?

They list 40C/8hrs

True. My mistake. But almost no one actually sees even 40C in reality - in practice it is always much higher. Which means no manufacturer of modern PCIe SSDs (2.5" models aside) can know that their drives will be operating within the specified tolerances, which in turn means failures will come much sooner.

I have seen, on sites that aggregate customer reviews from various sources, that the 980 Pro, for example, under intensive workloads actually starts to fail after roughly 50-70 TB written (which is beyond a typical household load even over 5 years, although W10/11 writes about 2-2.5 times more to the disk than W7 did). One buyer bought three 2 TB 980 Pros for non-linear video editing at home, and all three failed after only 55-100 TB written - while 600 TBW is claimed. At the same time, most consumers who ever reach 60-70 TB written do so only after the 5-year warranty has ended, with normal use: surfing, video, games...

That's continuous; it's up to 1A/pin for <100 microseconds.
Which, again, many motherboards and laptops do not actually provide - hence the negative reviews about drives spontaneously failing in such slots. If all the hardware were really built to the standard, the bundled power supplies would show it immediately: their power ratings (and therefore weight) would have to rise several times over, which is exactly what most laptop manufacturers do not want. Even Apple does not comply with the standards.
So, the notion that SSDs shouldn't be hotter than 35 C, or that such a thing is even typical of the P31 Gold seems pretty far out-of-touch with reality!
This is what I tried to show: there is almost no real compliance with the standards in any desktop, and especially not in laptops. That means the rated endurance is a fiction under such conditions - it has to be cut several times over, depending on the real background and drive temperatures. And most consumers do not care at all. They only realize there is a problem when the disk fails suddenly, taking all the data with it at once (unlike an HDD), or when it drops into read-only mode, which, as described above, creates plenty of legal headaches around the warranty if the disk was not fully and reliably encrypted (and definitely not with BitLocker).

Software encryption noticeably reduces performance, and I would not trust the manufacturers' hardware encryption, since their firmware is not audited and never will be - so in practice encryption is used extremely rarely, even when critical private data is stored there. Most people are completely careless about this: they do not think for a second that they are handing the seller (or manufacturer) a disk full of personal data, which can be extracted directly or with the manufacturer's special equipment. That is how huge amounts of personal information leak out through the defective-SSD returns market, and with the arrival of SSDs and their read-only failure mode, this has become a dangerous epidemic in the data storage industry...
 
Last edited:
This is what I tried to show: there is almost no real compliance with the standards in any desktop, and especially not in laptops.
What you're talking about is not a standard; it's one half of the endurance testing. There's a low-temperature aspect (what you're talking about) and a high-temperature aspect (44-79C/50-86C), which is how they accelerate TBW testing.
I have seen, on sites that aggregate customer reviews from various sources, that the 980 Pro, for example, under intensive workloads actually starts to fail after roughly 50-70 TB written (which is beyond a typical household load even over 5 years, although W10/11 writes about 2-2.5 times more to the disk than W7 did). One buyer bought three 2 TB 980 Pros for non-linear video editing at home, and all three failed after only 55-100 TB written - while 600 TBW is claimed. At the same time, most consumers who ever reach 60-70 TB written do so only after the 5-year warranty has ended, with normal use: surfing, video, games...
Consumer drives aren't designed for constant heavy writing, and insufficient cooling will kill them faster. Ease of cooling is one of the many reasons I hate the M.2 format. It's probably a calculated gamble by the manufacturers with their warranty that the number of people who will heavily stress consumer drives is small and they'll just absorb the cost of replacement. I would prefer them to be honest about how they're not intended for consistent long-duration heavy write operations, so people intending to use them that way at least know they're tempting fate.
 
  • Like
Reactions: bit_user
It's probably a calculated gamble by the manufacturers with their warranty that the number of people who will heavily stress consumer drives is small and they'll just absorb the cost of replacement.
Yes, apparently so - they just figured that the percentage of users who actually count on the rated endurance, and will really try to use it, is minimal. That is of course a rather risky strategy, but with their real return statistics they know the odds better than we do. All we can do is remember that the rated endurance may be a complete fiction, and that actually verifying it - especially by the JEDEC method - is impractical: at least one year of 'cold' data retention at 99% of the warranted wear is physically impossible in practice. The test itself would take at least a year: you would have to rewrite the drive the promised number of times and then leave it on a shelf for a year - who is going to do that? I have not heard of any inquisitive minds checking new or old SSD models this way...
I also don't like how poorly M.2 drives are cooled (especially badly in laptops, even "gaming" ones) and how hot they get even in desktops - this is clearly an abnormal situation in general. With 2.5" drives it was a bit easier, especially when the case was metal with thermal pads inside: thanks to its size, it acted as a natural large heatsink. With 2280 everything is much worse, and chip density keeps growing...
 
The 990 Evo Plus sells at retail for a fraction of the price of the 990 Evo, which every review site branded as a bad and extremely overpriced SSD. The 990 Evo is sold almost nowhere precisely because it costs more than the Evo Plus and is already priced very close to the Pro - what is the point of ordering batches of it if buyers will still pick the Pro or the Evo Plus
According to PcPartPicker, the prices of their 2 TB models are as follows:

Looking at their respective price histories, I will grant that the Evo Plus only seems to have settled at this price point more recently.

As for why prefer the Evo Plus? Again, because its lower power and better efficiency make it more suitable than the Pro for laptops that aren't high-performance machines. Also, even $30 is a non-trivial cost differential for OEMs.

Lastly, I'd overlooked the fact that the Evo and Evo Plus are actually PCIe 5.0 x2 drives.

Edit: @thestryker helpfully pointed out that both of these Evo drives can do x4 at PCIe 4.0.

And a drive without a DRAM buffer - I remind you again - is the worst choice for a system disk, and in a laptop that is often the only slot there is.
That's not what professional reviewers are saying about it. And they actually benchmarked it. Where's your data supporting this claim?

And I don't mean generic data about DRAM-less SSDs, but specifically about the 990 Evo Plus.

And most importantly, the 990 Evo Plus uses a two-lane PCIe 5.0 link, which immediately undermines its suitability for laptops with PCIe 4.0 (99% of the market) or 3.0 slots. So this example is essentially useless in laptops, especially as a system drive.
I doubt it matters much whether you can burst at 7.5 or 3.75 GB/s, most of the time. I've used a SATA SSD in my main Windows desktop up until recently, so you won't get a lot of sympathy from me about needing top specs. Most reads are pretty small, and writes are usually buffered in DRAM by the OS kernel, no matter what kind of SSD you have.

After @thestryker 's correction, I looked up the read speed of the 2 TB 990 Evo Plus and you can clearly see it's capable of maxing out PCIe 4.0 x4. The way we know it's operating in PCIe 4.0 mode is because the article says their test platform is Alder Lake. Alder Lake doesn't support bifurcation of its PCIe 5.0 lanes down to x4 or x2, so it must be connected via a PCIe 4.0 path.

[Chart: 2 TB Samsung 990 Evo Plus sequential read speed from the linked review]


What's weird to me about your arguments is that you're talking about the limitations of a poorly-designed budget laptop, on one hand, but then you want all specs maxed out on the other. Furthermore, using the P31 Gold as your benchmark, which is much slower and not massively lower-power than the 990 Evo Plus.

Apart from the P31, there is basically no DRAM-buffered option among PCIe 4.0 SSDs.
It's PCIe 3.0. If you plug it into a PCIe 4.0 slot, it will still run at 3.0 speeds.

Why did Samsung do this?
Well, half the lanes are going to draw less power, which is probably key for PCIe 5.0 applications. Furthermore, the controller & NAND probably can't provide much benefit above PCIe 5.0 x2, so running at x4 would be a pointless waste of power (if not also silicon).

Speaking about x2, in general: some slots in low-end laptops and mini-PCs aren't even connected at x4. My Alder Lake-N board has an M.2 slot that's only PCIe 3.0 x2 connected. I've seen Alder Lake-N boards with only PCIe 3.0 x1 connectivity in their M.2 slot! Made by none other than ASUS (and I think Gigabyte? or maybe ASRock)!

why would anyone need a two-lane 5.0 drive there,
Some motherboards have several M.2 slots, but they're not all x4 connected (or won't be, if you populate all of them). For a RAID, at least some of the drives will be running at x2 and the speed of a RAID is limited by its slowest drive.

So, a x2 drive actually makes quite a bit of sense for a RAID that someone is building for capacity or redundancy, and not extreme performance. In a NAS, the throughput is ultimately limited by the network speed. 10 Gigabit Ethernet equates to ~1.2 GB/s, which is no sweat for a x2 slot (at PCIe 3.0 or higher).
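To put rough numbers on that, here's a quick sketch (my own back-of-the-envelope figures; the per-lane estimates account for 128b/130b encoding but not protocol overhead, so real-world throughput is somewhat lower):

```python
# Compare usable PCIe link bandwidth per generation/lane count against 10 GbE.
# Approximate usable throughput per lane in GB/s after 128b/130b encoding.
GBPS_PER_LANE = {"3.0": 0.985, "4.0": 1.969, "5.0": 3.938}

def link_bandwidth(gen: str, lanes: int) -> float:
    return GBPS_PER_LANE[gen] * lanes

ten_gbe = 10e9 / 8 / 1e9  # 10 Gb/s -> 1.25 GB/s, before protocol overhead

for gen, lanes in [("3.0", 2), ("4.0", 2), ("4.0", 4), ("5.0", 2)]:
    bw = link_bandwidth(gen, lanes)
    print(f"PCIe {gen} x{lanes}: ~{bw:.1f} GB/s "
          f"({bw / ten_gbe:.1f}x a 10 GbE link)")
```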

So you think that when the 65W PSU is not enough, the laptop will take 140W from the battery?
No, I'm saying it would take the difference between what it needs and what it's getting from the A/C power. My old phone would do this. If I had it on a slow charger and overstressed it too much, the battery would start to deplete and I'd get a warning that it was using more power than the charger could provide.

The other thing it can do is negotiate lower output power of its USB ports.

That sounds like fantasy, although I know plenty of models where the makers quietly pulled this mean trick on buyers - even "gaming" ones, where it rapidly kills the battery.
You call it mean, but even when the tradeoff is having to drag around a huge power brick? For an actual gaming laptop, they will have a proper PSU. For non-gaming laptops, they will be power or thermally limited. I think gaming on battery has never worked terribly well, in any laptop.

You are talking nonsense again. I never wrote that the probability of your drives failing is 50%. I wrote that for a single drive it is always 50/50: it either fails or it doesn't.
That's not how probability works. The mere fact of having two possible outcomes doesn't make them equally likely, which is what "50/50" means. If the likelihood of failure over some time is 1%, then I guess it would be 1/99 (except nobody says it that way).

If you go through life, treating the probability of all binary events as 50/50, you'd probably either live a very short or very sheltered life. The way we learn to keep ourselves safe is by not doing things with a high probability of hurting or killing us. In some highly dangerous situations, 50/50 would be an underestimate. Treating these as equal probability to other things we need to do would result in unnecessary risk-taking. In most cases, 50/50 is a wild overestimate of risk, which would logically result in someone living locked up in a cave, somewhere. However, cave-dwelling isn't healthy and exposes one to other risk factors. So, the optimal solution is to take a realistic view of probabilities and make careful risk/benefit tradeoffs.
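To make that concrete, here's a minimal sketch of the relevant math; the 0.5% annual failure rate is an assumed figure, purely for illustration:

```python
# With a realistic per-drive failure rate, the odds of seeing any failure
# across several drives stay far below 50/50.
def p_at_least_one_failure(p_single: float, n_drives: int) -> float:
    """Probability that at least one of n independent drives fails."""
    return 1.0 - (1.0 - p_single) ** n_drives

annual_failure_rate = 0.005   # assumed 0.5% per drive per year, for illustration
for n in (1, 3, 10):
    p = p_at_least_one_failure(annual_failure_rate, n)
    print(f"{n} drive(s): {p * 100:.2f}% chance of >=1 failure in a year")
# 1 drive: 0.50%, 3 drives: ~1.49%, 10 drives: ~4.89% -- nowhere near 50/50.
```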

Only the warranty saves you. You cannot predict whether you will be unlucky enough to land in even the 0.5% of defective units in a batch from Samsung, the best of them.
Over time, and with several computers or other electronic devices, playing the odds will work in one's favor. This is why it makes sense to have a good strategy (i.e. one that's based on actual probabilities).

IMO, the main reason to buy a SSD with a longer warranty is because it implies the manufacturer has more confidence in its durability. Not because I ever plan or expect to have to submit such a warranty claim. I never buy aftermarket warranties, for instance (although you might get some useful information, if you looked at how their price differs across different models... though I doubt such warranty providers actually adjust their actuarial tables in such detail).

JEDEC's spec for consumer SSDs.
Please provide a link.

the 980 Pro, for example, under intensive workloads actually starts to fail after roughly 50-70 TB written
It had an actual firmware bug, which they fixed (somewhat belatedly). Therefore you shouldn't try to extrapolate from that example. It was news because it was an outlier (due to the bug). If you don't understand this, you're going to end up with a very distorted view of the world.

This is why it pays to pay attention to the details and not just scroll headlines and skim the articles. Some news articles do indeed report on shifting trends. More often than not, they're just reporting on an outlier that's surprising specifically because it doesn't fit the trends. And, in the drive for clicks or views, the news tends to sensationalize these outliers, which makes it all the more important to penetrate below the headlines and figure out what's really going on.

This is what I tried to show
You have yet to provide a shred of data or a single authoritative source. I don't care how many words you write. I only care about data and what professional reviewers say. If your response is just to type more words, then you're wasting your time.

If you learn to be more data-driven, you will benefit by having a more accurate view of the world. This enables you to make more optimal decisions.

Yes, it takes more time to check your assumptions and try to find good sources for your claims, but you'll learn much more that way. I've certainly caught myself in some wrong assumptions or conclusions, while trying to find a good source on something. So, if you care about actually being right, and not just sounding like an expert, then you really should find it worth your while to fact-check yourself and source your claims.
 
Last edited:
  • Like
Reactions: thestryker
at least one year of 'cold' data retention at 99% of the warranted wear is physically impossible in practice. The test itself would take at least a year: you would have to rewrite the drive the promised number of times and then leave it on a shelf for a year - who is going to do that?
NAND makers and SSD manufacturers can check the low-level electrical parameters of the NAND cells to see how fast wear is occurring. They can put this data into a mathematical model of the degradation curves and extrapolate it to determine the drive's endurance. I'm sure that's how they arrive at the advertised TBW numbers. Their warranty underwriters would definitely require some analysis of this sort.

If you're an industrial customer of NAND chips (or a 3rd party maker of SSDs), I'm sure you can also get detailed wear data under NDA. I've seen some much older data like this, that was publicly available.
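Purely as an illustration of that kind of extrapolation, here's a toy sketch; the sample points and the straight-line wear model are invented, and a real model would be fitted to actual NAND telemetry and degradation curves:

```python
# Fit a simple wear model to early-life measurements and project the write
# volume at which the drive would hit its wear limit.

# (terabytes_written, percent_of_rated_wear_used) -- assumed early-life samples
samples = [(10, 1.8), (25, 4.4), (50, 8.9), (100, 17.6)]

# Least-squares fit of wear = slope * TBW, forced through the origin.
num = sum(tbw * wear for tbw, wear in samples)
den = sum(tbw * tbw for tbw, _ in samples)
slope = num / den                      # percent wear per TB written

projected_tbw_at_100pct = 100.0 / slope
print(f"Fitted wear rate: {slope:.3f} %/TB")
print(f"Projected endurance: ~{projected_tbw_at_100pct:.0f} TBW")
```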
 
As for why prefer the Evo Plus? Again, because its lower power and better efficiency make it more suitable than the Pro for laptops that aren't high-performance machines. Also, even $30 is a non-trivial cost differential for OEMs.
I look at the 990 Evo Plus as a spiritual successor of sorts to the P31. The only design difference between them is the DRAM cache, which I don't particularly see as important on a drive like this. The P31 benefited from superior NAND that allowed a quad-channel SSD to perform like its eight-channel counterparts. The Evo Plus is the same, which you can see by comparing its performance to the Evo's (they use the same controller). Perhaps when the PCIe 6.0 SSD era hits there will be another bump for NAND, and we'll see some efficient PCIe 5.0 x4 / PCIe 6.0 x2 drives pick up the mantle.
 
  • Like
Reactions: bit_user