Intel SSD 660p 1TB Review: QLC Goes Mainstream


Giroro

Splendid
What is the real capacity of the 1TB drive?
I doubt the "raw" capacity actually matches what the user sees, which is what the table claims. So how much over-provisioning is Intel using with QLC compared to TLC?

Also, is hitting the endurance limits actually covered by the warranty? Or is Intel just planning on replacing a bunch of drives as they wear out in a year or two? Because, to me, calling 100 TBW endurance "poor" is an understatement. It's about one third of the TLC drives' rating (which is not the same thing as "33% lower," by the way), and the TLC drives themselves have somewhat mediocre endurance.

Personally, I don't think a slightly lower price is worth the hassle and cost of replacing your SSD three times as often. I could buy 1 x 760p at $140 retail, or 3 x 660p for a total of $300 retail; the quick math below spells it out, and it doesn't work out for me. Plus, the 760p of the same capacity will have superior performance, which will be clearer when Tom's reviews the 1TB 760p or the 512GB 660p, so there can be a direct comparison.
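To put rough numbers on both points, here's a quick Python sketch (the prices and TBW figures are the ones quoted in this post, not official spec-sheet values):

```python
# Quick check on "one third of" vs. "33% lower", plus the replacement-cost
# math, using the figures quoted in this post (assumptions, not specs).
qlc_tbw, tlc_tbw = 100, 300            # rated endurance, in TB written
ratio = qlc_tbw / tlc_tbw              # 0.33 -> one third OF
reduction = 1 - ratio                  # 0.67 -> a 67% reduction
print(f"QLC is {ratio:.0%} of TLC endurance, i.e. {reduction:.0%} lower")

# One 760p vs. three 660p replacements at the retail prices above.
print(f"1 x 760p = ${1 * 140}; 3 x 660p = ${3 * 100}")
```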

I think what is most annoying about QLC drives, is that cheap laptop OEMs are going to switch to them almost immediately, slap a 1 year warranty on the whole computer, then you are SOL when the drive wears out in 18 months. Intel will refuse to cover non-retail drives and just refer you back to the laptop OEM. Plus it's usually super annoying to repair/replace/upgrade the SSD in a laptop if you actually need to transfer any data.
 

PaulAlcorn

Managing Editor: News and Emerging Technology
Editor


It is important to remember how SSD endurance is measured.

The workload consists of 4K random writes to the full span of the drive while it is completely full of data. From an SSD's viewpoint, this is the absolute worst-case scenario imaginable. Because the drive is full, it loses the ability to extend endurance through common techniques like write combining and effective garbage collection, so it cannot operate, or stretch its endurance, as it normally would.

The workload spans the entire LBA space of the drive, which again just doesn't happen in normal client use. SNIA guidelines for client SSD testing recommend a 16GB span, which changes the nature of the workload and its impact on endurance. And a 16GB span is still far too large, imo.

These two factors already magnify the impact on endurance to unrealistic levels, but the use of a 4K random write workload is even more brutal. Think of it as taking a shotgun to the flash. If you measure endurance with a sequential workload under the same sub-optimal and unrealistic conditions, the flash provides anywhere from 5x to 12x more endurance.

Also, endurance is rated against data retention, which is a function of time. The criterion for data retention is that after the endurance rating has been exhausted, you must still be able to read back the data after the drive has sat unpowered for a year.

So, during normal use the drive is not full to that extreme, which increases endurance because the drive can operate correctly and mitigate the wear, and the workload does not span the full LBA range, which increases endurance further. Heavy, purely 4K random write workloads are as rare as they come in a consumer desktop environment (even in the enterprise they are extremely rare), so the actual amount of data you can write during a typical mix of sequential and random workloads isn't even in the vicinity of the official endurance rating. And even after all that, if the endurance is exhausted and you keep powering the drive on a semi-regular basis (say, every six months), the amount of time before you would have an issue with data retention is lengthened.

In short, during real-world use the SSD can handle far, far more than its rated endurance spec.
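To put that in perspective, here's a rough back-of-the-envelope model in Python (the inputs are illustrative assumptions drawn from the figures in this thread, not Intel specifications):

```python
# Rough lifetime model for a TBW rating under realistic use.
rated_tbw = 200          # 1TB 660p rating quoted in this thread, in TB
daily_writes_gb = 20     # a heavy consumer workload (assumption)
seq_multiplier = 5       # low end of the 5x-12x sequential-vs-random gap

days = rated_tbw * 1000 / daily_writes_gb
print(f"Rated endurance alone: {days / 365:.1f} years at {daily_writes_gb} GB/day")
print(f"Mostly sequential writes: up to {days * seq_multiplier / 365:.0f} years")
```

Even before accounting for the unrealistic test conditions, the rated number alone outlasts any reasonable consumer drive lifespan.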
 

DXRick

Distinguished
PaulAlcorn said "...you must be able to read back the data after the drive has sat unpowered for a year."

I built my current PC with a 1TB WD Black around 7 years ago. That means there are files that have been sitting there for 7 years that have not degraded and are still 100% readable, or I would have serious problems. I would need an SSD to do the same in my next build. One year isn't a long enough retention period for me, and given that we aren't supposed to defrag SSDs, I don't see what would cause those files to get rewritten to another part of the drive.

Is there a weakness with SSDs for the lifetime of data that just sits there and is never updated?

Thanks!
 

PaulAlcorn

Managing Editor: News and Emerging Technology
Editor


You haven't powered the computer up for a year? As in, turned it on?

The statement is about the SSD not being powered for a year. Turned off.
 

richardvday

Distinguished
Yes, I think you're missing the part about NO POWER to the drive for a year.
If you power your system up every 11 months (if you leave it off that long, it can't be important, can it?), it should still work; that's what he is saying.
An HDD, on the other hand, can sit powered off for much longer and still retain your data.
Just power it up every time DST kicks in, every 6 months :)
 

unityole

Honorable
Though I do like the added power testing under load, not just at idle, what's with the review tests? They seem all changed; we no longer see sequential read/write or random read/write performance at QD1-4.

Furthermore, with TLC, and even more so with QLC, I want to see the speed while the drive is saturated with writes, to see the real write performance of the QLC flash. Please include an HD Tune write test across the entire drive so we can see the dip in performance when the SLC cache fills and the drive falls back to native QLC speeds.
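Even something as simple as this would show the dip; a minimal Python sketch (the file path, chunk size, and total written are placeholders, and a real tool like HD Tune or fio controls OS caching far more rigorously):

```python
# Stream sequential writes and log throughput so the SLC-to-QLC drop
# becomes visible. Illustrative sketch only; it overwrites TARGET.
import os, time

CHUNK = 64 * 1024 * 1024      # 64 MiB per write
TARGET = "testfile.bin"       # hypothetical path on the drive under test
TOTAL_GIB = 100               # enough to overflow a typical SLC cache

buf = os.urandom(CHUNK)
fd = os.open(TARGET, os.O_WRONLY | os.O_CREAT | os.O_TRUNC)
written = 0
while written < TOTAL_GIB * 1024**3:
    t0 = time.perf_counter()
    os.write(fd, buf)
    os.fsync(fd)              # force the data out to the device
    dt = time.perf_counter() - t0
    written += CHUNK
    print(f"{written / 1024**3:6.1f} GiB: {CHUNK / dt / 1024**2:7.1f} MiB/s")
os.close(fd)
```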
 

How many people are actually going to write 100,000GB of data to any drive? Under most typical usage scenarios, the vast majority of people won't be writing more than 20GB of data to their system drive each day, and most common users will write far less than that, maybe averaging a few GB or so. Even at 20GB per day, which would actually be rather high, it would take more than 13 years to hit that amount of writes, so the suggestion of having to replace the drive more often is nonsensical. By that point, the drive should be long obsolete, and you'll likely be able to get much faster, higher capacity drives at a fraction of the cost. So, no, I don't see 100TBW endurance being a problem. To hit that amount of usage within the drive's 5 year warranty, you would need to write around 55GB of data to the drive each day, and that's not going to be something likely to happen outside of certain specialized professional tasks. Obviously, hitting the write endurance limit won't be covered by warranty, but it's not something likely to happen within the drive's warranty period either. And 100TBW is for the 512GB drive, not the 1TB drive reviewed here, which has double the endurance.
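Putting quick numbers to that (using the 512GB model's 100TBW rating, as noted above):

```python
# Sanity-check the lifetime math for a 100 TBW rating.
tbw_gb = 100_000                     # 100 TBW expressed in GB

years_at_20 = tbw_gb / 20 / 365      # ~13.7 years at 20 GB/day
gb_day_5yr = tbw_gb / (5 * 365)      # ~55 GB/day to exhaust in 5 years
print(f"{years_at_20:.1f} years at 20 GB/day")
print(f"{gb_day_5yr:.0f} GB/day to hit 100 TBW within the 5-year warranty")
```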

I think this drive looks great, offering much higher performance at SATA SSD prices. It's hard to justify the premium pricing of most NVMe drives when they only perform marginally better than a SATA drive in most real-world tasks. Of course, I suspect this may cause other drives to drop in price to compensate. I would like to see the 512GB drive reviewed, though, since that is still a much more popular capacity for SSDs. Judging by the official specs, it looks like performance may take a significant hit, but it will be interesting to see just how that plays out compared to the 1TB model and SATA SSDs in that price range.
 
This is way slower than my current PCIe 3.0 x4 NVMe SSD. I don't see the benefit, when I paid $129 for a 500GB drive that matches the Samsung 970 in speed.
 

kultraleader

Prominent
The Samsung 960 EVO 1TB is $300 and has twice the read and write speed, plus three times (300%+) the endurance. $199 vs. $300 is not much of a price difference, but we give up all that endurance!!! So the Intel SSD 660p 1TB should only be worth $149, not $199; $199 is overpriced.
 
1) Are we going to lose enthusiast drives entirely? Moving to two, three, and now four bits per cell has always led to lower performance and lower endurance, but cheaper drives.

2) What are data centers using as the flash market moves down the quality curve?
 

PaulAlcorn

Managing Editor: News and Emerging Technology
Editor
Feb 24, 2015
876
394
19,360


1.) I would assume that client drives will move from TLC to QLC over the coming year. There are already plans for five-bit-per-cell flash, and then the cycle will start again.

2.) Data centers are currently adopting TLC SSDs en masse, but I think QLC adoption won't be as robust. Instead, it will likely be deployed in a limited number of applications.
 