3D XPoint SSD Pictured, Performance And Endurance Revealed At FMS

When I've taught computer classes, I've always enjoyed using this admittedly crude analogy:

Visualize each CPU core as a radio frequency transmitter.

Each core can now change one 64-bit register at a rate of 4 billion cycles per second:

Thus, 1 core @ 4 GHz x 64 bits per cycle = 256 gigabits per second / 8 = 32 GB/sec

4 cores @ 32 GB/sec = 128 GB/sec issuing from one 4-core CPU

How many PCIe 3.0 lanes are needed to accommodate that data rate?

Answer: about 130. Each PCIe 3.0 lane runs at 8 GT/s with 8.125 bits per byte (128b/130b encoding), or roughly 0.985 GB/sec, and 128 GB/sec / 0.985 GB/sec ≈ 130 lanes.

Now, add hyper-threading.

Also, upgrade to an 8-core CPU.

Modern CPUs can easily overwhelm downstream chipset capacity.
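
If it helps to see that arithmetic spelled out, here's a rough back-of-the-envelope sketch in Python. The 4 GHz / 64-bit / 4-core figures are just the illustrative numbers from the analogy above, not a claim about what any real CPU can sustain:

    # Back-of-the-envelope: how many PCIe 3.0 lanes would it take to absorb
    # the raw register-write bandwidth of a hypothetical 4 GHz, 4-core CPU?
    CLOCK_HZ = 4e9          # 4 GHz core clock (illustrative)
    BITS_PER_CYCLE = 64     # one 64-bit register updated per cycle
    CORES = 4

    cpu_bytes_per_sec = CLOCK_HZ * BITS_PER_CYCLE / 8 * CORES    # 128 GB/s

    # PCIe 3.0: 8 GT/s per lane with 128b/130b encoding,
    # i.e. 8.125 raw bits on the wire per payload byte.
    PCIE3_GTS = 8e9
    BITS_PER_BYTE_128B130B = 8 * 130 / 128                       # 8.125
    lane_bytes_per_sec = PCIE3_GTS / BITS_PER_BYTE_128B130B      # ~0.985 GB/s

    lanes_needed = cpu_bytes_per_sec / lane_bytes_per_sec
    print("CPU output:   %.0f GB/s" % (cpu_bytes_per_sec / 1e9))
    print("Per lane:     %.3f GB/s" % (lane_bytes_per_sec / 1e9))
    print("Lanes needed: %.0f" % lanes_needed)                   # ~130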

 


https://www.technologyreview.com/s/530186/the-ongoing-threat-of-cold-boot-attacks/

"The idea is to cut the power to the device and then immediately reboot it to a USB flash drive so that the operating system does not immediately overwrite the contents of the random access memory. Next, search the random access memory for sensitive material, download it and be gone."

Thanks!
 
I'm with you on the RocketRAID. When shopping MBs, one of my pet peeves is the meme, "If you plug something into feature A, that disables features B & C." Give me a break, Intel. Just provide enough PCIe lanes to run everything a modern MB should have. I know I'm feeding a pet monkey here, but I can't seem to let it go and move forward. I understand that the CPUs and MBs I want will cost more. Maybe a lot more. But Intel is complaining that the desktop is dying, and I say, "Nope. You're killing it by giving no one a reason to upgrade." As long as the new, shiny MB doesn't advance my experience, why would I buy it? The PC was invented to get users out of the centralized computing loop. I'm old, so I remember the mainframe/terminal days very well, and not with any fondness.
 
I admit that, in the past, Highpoint's stuff was not exactly whizz-bang: not the latest, the hottest, the best documented, etc. etc.

Nevertheless, after I learned to work around the nasty pitfalls with their 2720SGL, I ended up buying 6 more, and they're all working GREAT.

That's why I wrote to them and suggested an NVMe RAID controller with the specs that I wanted.

Lo and behold, they have designed an NVMe RAID controller that meets those specs almost exactly, notably with the x16 PCIe 3.0 edge connector.

Maybe motherboard vendors will catch on and start offering 4 x U.2 ports, with support for all modern RAID modes, that somehow bypass Intel's narrow DMI 3.0 link (don't hold your breath).

For myself, my loyalty to Highpoint has paid off, and my personal preference is to reward them for listening. Ask yourself this one question: just how many full tower and mid-tower chassis are there on this planet, as we speak?

And, of those, how many have empty 3.5" and 5.25" drive bays?

If M.2 SSDs are already overheating, and consequently throttling, remember that cooling 3.5" drive bays is a problem that was solved more than 10 years ago, and 3.5-to-2.5" adapters are a dime a dozen.

THE SOLUTION: add-in HBAs plugged into empty x16 PCIe slots.

Why else, you ask? ANSWER: because the excellent PartitionWizard now offers a "Migrate OS" feature that works perfectly the first time.

Plug in the card / wire up the SSDs / Migrate OS / re-boot to BIOS / change BOOT device.

I just did that exact same sequence with 4 x SanDisk Extreme Pro SSDs in RAID-0, and my trusty workstation is now more fun than ever to work with all day.

(HINT: I didn't need to change the motherboard and re-install tons of software.)

 
This is funny: a taxi driver asked me one time: "How long does it take to build a PC?"
My answer: "4 hours to assemble the hardware and 1 week to install all the software."
 
http://www.cnexlabs.com/

"CNEX has teamed with NAND vendors and other industry leaders in Datacenter, Storage, and Networking, to develop a new generation of SSD controllers that offer host interface flexibility, from NVMe to LightNVM, with options for native fabric connectivity. Customers use CNEX SSD controllers to deliver differentiated SSD solutions for low-power and high-performance in M.2, U.2, HHHL, and other form-factors."
 
Agreed. Looking further out, as DRAM moves closer to the CPU with technologies like HMC/HBM2, nonvolatile storage will take its place in the DIMM slots. Then, bulk solid state storage will sit on PCIe. For some, bulk storage will still mean a few 10+ TB HDDs sitting out on SATA 3.

Disagree. SSDs are already so fast that it's getting harder and harder for end-users to notice much difference in day-to-day activities. Existing NVMe products already offer comparable sequential performance, and the random access that most people do benefits from OS-level DRAM-based caching.

As Paul said, the real win is going to come from re-architecting operating systems and software. When you can directly access nonvolatile storage without going through the kernel, that will allow apps and services to fundamentally change how they use storage.
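
To make that concrete, here's a minimal sketch of the idea in Python: a file on a persistent-memory-backed, DAX-capable filesystem mapped straight into the process, so reads and writes become plain loads and stores with no syscall on the hot path. The mount point and file name are hypothetical, and this assumes an fsdax-style mount:

    # Sketch: kernel-bypass access to persistent memory via mmap.
    # Assumes /mnt/pmem0 is a DAX-mounted filesystem; the path is made up.
    import mmap
    import os

    PMEM_FILE = "/mnt/pmem0/appdata.bin"   # hypothetical path
    SIZE = 4096

    fd = os.open(PMEM_FILE, os.O_CREAT | os.O_RDWR, 0o600)
    os.ftruncate(fd, SIZE)

    buf = mmap.mmap(fd, SIZE, mmap.MAP_SHARED,
                    mmap.PROT_READ | mmap.PROT_WRITE)
    buf[0:5] = b"hello"   # an ordinary store; on a DAX mount this bypasses the page cache
    buf.flush()           # msync, to make sure the update has reached the media
    buf.close()
    os.close(fd)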
 
Using 3D XPoint as memory also produces a number of interesting challenges. 3D XPoint is persistent, so if someone removed a DIMM from a server, they could simply plug it into another server to retrieve the data.
x86 CPUs have supported encrypted memory regions for a while now. The primary motivation was to prevent man-in-the-middle attacks that intercept copy-protected content. No reason you couldn't use it for other sensitive data, however.
 


Excellent observation!
 


Thanks. I have no idea whether/how it impacts performance, mind you. But I'm sure Intel could optimize it to perform as fast as anything that could be embedded in the DIMMs. Putting it in the CPU should be more economical than adding encryption to every nvDIMM.

The only down-side might be that if your CPU fried, the nvDIMMs would be unreadable. So, a lower tier of storage, with externally-managed keys, would still be essential. Or, they could add key management features to the CPU.

BTW, excellent article. As usual.
: )
 
(is the comments system borked yet again? I'm seeing multiple posts from people, etc.)

MRFS' comments remind me of my moaning way back about Intel restricting PCIe lanes on newer entry top-end CPUs. It's crazy that a 4820K can do things which a 5820K can't. I've posted often about the lack of I/O lanes from Intel's more recent CPUs and chipsets, and the way enthusiast boards, if anything, seem to have gone backwards, with features such as power/reset mbd buttons disappearing from various pricing levels.

And I'm not buying all this nonsense about SLI/CF not being relevant. It doesn't match the simultaneous rise of VR. Just as SGI did 20 years ago, multiple GPUs could be used to generate each eyepoint separately; this may not need a GPU-to-GPU link, but it does mean decent support for lots of PCIe lanes.

The last chipset launch I was genuinely impressed by was X58. Ever since then, it's always felt like Intel is just dragging its heels, crippling the consumer tech while the XEONs get all the goodies. The 2 locked-out cores in the 3930K were the first sign of this, and then later we had no updates for X79, etc.

With all this new storage tech flying about, while Intel dabbles in chipsets that haven't been given increased I/O, it just feels like the whole chipset/mbd space is getting kinda messy again. Buy this CPU, oh that means features X, Y and Z can't be used. No thanks.
 
Speaking of marketing ...
...I'm going to stick my neck WAAAAY OOUUUT here
and make the following suggestions to Intel,
knowing that they probably won't be reading this.

If I were Intel, with all of their mighty and
sophisticated manufacturing capabilities,
I would:

(A) ramp up production of modular Optane chips
which can be easily installed in 2.5" and 3.5" form factors
with U.2 connections, as well as in other form factors,
possibly with SATA and SAS connections too;

(B) "secretly" implement 2 key options enabled
via Option ROMs, jumpers, or other methods:

(i) pre-set transmission clocks e.g. 6G, 8G, 12G and 16G:
you KNOW that prosumers will want to try 16G ASAP!

(ii) 128b/130b jumbo frames, already recognized
by the PCIe 3.0 standard (see the throughput sketch after this list);

(C) price the 2.5" version very aggressively,
in order to enlarge the installed base rapidly;
this approach recovers R&D with a large sales volume
and relatively small profit margin;

(D) THEN, not so "secretly", exploit User Forums
and Tech Support groups to LEAK the methods
for enabling the faster clock speeds -and-
the jumbo frames -- all of this UNofficially
(of course);

(E) step (D) above should excite the overclockers
around the world;

(F) even if this approach is a "loss leader"
financially speaking, the volume of user experiences
and sheer amount of prosumer experimentation
will give Intel great "word-of-mouth" publicity;

(G) and, the feedback from those prosumers
will help guide Intel's future decisions
concerning future Optane and chipset developments;

(H) OEM a modern compatible NVMe RAID controller that
also supports jumbo frames and pre-set clock speeds
e.g. 6G, 8G, 12G and 16G; and, do the same with
both SATA and SAS RAID controllers, e.g. like
Highpoint's latest;

(I) whenever users complain about the narrow lanes
of the DMI 3.0 link, refer them to (H) above;

(J) encourage prosumer experimentation with
NVMe RAID controller installs in the first x16 slot
on all modern motherboards e.g. work with mobo
manufacturers to enhance UEFI/BIOS subsystems
to make this happen smoothly;

(K) advocate JEDEC-style "settings" for
all future 2.5" and 3.5" NVMe SSDs;

(L) ensure that engineers continue to honor
the principles of Plug-and-Play as much as possible.
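
To put rough numbers on (B)(i) and (B)(ii), here is a small Python sketch of what those pre-set clocks would deliver with legacy 8b/10b framing versus 128b/130b jumbo frames. It only models encoding overhead (no protocol or command overhead), so treat the figures as theoretical ceilings; the throughput_gb_s helper is mine, just for illustration:

    # Effective one-way throughput of a serial link: line rate divided by the
    # raw bits transmitted per payload byte under the chosen encoding.
    def throughput_gb_s(line_rate_gbps, payload_bits, frame_bits):
        bits_per_byte = 8 * frame_bits / payload_bits   # 8b/10b -> 10, 128b/130b -> 8.125
        return line_rate_gbps / bits_per_byte

    for clock in (6, 8, 12, 16):                         # the pre-set clocks from (B)(i)
        legacy = throughput_gb_s(clock, 8, 10)           # legacy 8b/10b framing
        jumbo = throughput_gb_s(clock, 128, 130)         # 128b/130b jumbo frames from (B)(ii)
        print("%2dG:  8b/10b = %.2f GB/s   128b/130b = %.2f GB/s" % (clock, legacy, jumbo))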

My 2 cents :)

MRFS
 


And then there was Zen.

 


And in the meantime, Intel should release a simple PCIe card with their own SATA3 chip, so the world can finally have a SATA3 PCIe card that actually works properly, instead of the junk we're currently stuck with from Marvell and ASMedia. Then we could upgrade older systems, including S775, AM2, etc., though of course that might discourage sales of new stuff, which is probably why they haven't done it...

Ian.

 



One roundabout way of proving what you just wrote is to ask a few key loaded questions:

(a) why has the SATA-III clock rate been STUCK at 6G for such a long time?
(compare how rapidly IT changes as a general rule -and- how many SSDs have hit that glass ceiling)

(b) why hasn't SATA been upgraded to "sync" with PCIe 3.0's 8G clock and 128b/130b jumbo frame?
(this would raise max throughput from 6G / 10 = 600 MB/second to 8G / 8.125 = almost 1 GB/second: so, charge a premium;
prosumers will pay the premium)

(c) proof of the above concepts can be easily confirmed in the USB 3.1 spec: 10G + 128b/132b jumbo frames
(but I continue to read pundits who argue, without proof, that flexible cables cannot oscillate that fast:
sadly, they don't realize they are allowing their own untested theories to "morph" into facts)

(d) SAS is staring everyone in the face with a 12G clock, but it's still stuck with the 8b/10b legacy frame
(if PCIe 3.0 can do it, and if USB 3.1 can do it, 12G SAS could shed nearly all of its 20% encoding overhead overnight)

(e) one of THE MAJOR justifications for PCI-Express in the first place was to permit expansion cards
to enhance system functionality withOUT requiring new motherboards -- admittedly a GREAT concept
(so, why do Intel and Microsoft keep making changes that require entirely new motherboards?)
AMD, are you reading this?

(f) and, last but not least, my pet peeve for a minimum of 5 YEARS now has been to ask why all the rage
goes to multi-GPU SLI and Crossfire video subsystems, all of which use x16 edge connectors, BUT
a modern NVMe RAID controller with support for modern RAID modes was officially announced
for the first time only THIS WEEK??? (let's not tell too many people that Intel's DMI 3.0 link
has the EXACT SAME upstream bandwidth as a single M.2 NVMe SSD -- see the arithmetic check below).
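
Here's a quick arithmetic check of the figures behind (a) through (f), modeling each interface simply as line rate divided by its encoding overhead. The USB and SAS numbers ignore any other protocol overhead, so they're best-case, and the gb_per_s helper is mine, just for the check:

    # Sanity-check the link-speed claims above: line rate / raw bits per payload byte.
    def gb_per_s(line_rate_gbps, payload_bits, frame_bits):
        return line_rate_gbps / (8 * frame_bits / payload_bits)

    print("SATA-III  6G, 8b/10b    : %.3f GB/s" % gb_per_s(6, 8, 10))       # 0.600
    print("PCIe 3.0  8G, 128b/130b : %.3f GB/s" % gb_per_s(8, 128, 130))    # ~0.985
    print("USB 3.1  10G, 128b/132b : %.3f GB/s" % gb_per_s(10, 128, 132))   # ~1.212
    print("SAS      12G, 8b/10b    : %.3f GB/s" % gb_per_s(12, 8, 10))      # 1.200
    print("SAS      12G, 128b/130b : %.3f GB/s" % gb_per_s(12, 128, 130))   # ~1.477

    # (f) DMI 3.0 is electrically a PCIe 3.0 x4 link -- the same width as a
    # single x4 M.2 NVMe SSD -- so its upstream ceiling is roughly:
    print("DMI 3.0 (x4 PCIe 3.0)   : %.2f GB/s" % (4 * gb_per_s(8, 128, 130)))  # ~3.94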


OK, I confess that many of the above are loaded questions.

But, sometimes it helps to bonk people over the head with their own overhead.


MRFS


 
In my mind, this and other storage class memory solutions are the most exciting stuff in tech today, because they have the potential to actually change the way computing is handled. It's going to be a long road to get there, though.

If in ten years I can build a new PC that has a single solution for memory and storage, and there is software that actually takes advantage of it, I will be very, very happy.
 
They just did that to get another QPI link. It's not a conspiracy - it's what happens when they share a CPU socket across too many product lines. Clearly, they prioritized the scalability of their E7 servers above retaining so many PCIe lanes for their single-socket userbase.

Just sit tight and wait for Purley. Or Zen.
 


It's hardly "so many" when it was clearly a sensible amount for the initial X79 series and SB-E. Fact is, it's a step down, and now we have this total mess of people having to caveat what they're considering buying with whether not all sorts of storage and other options will be functional. 28 lane CPU, 40 lane CPU, it's ridiculous, especially when the number of cores has gone up. I don't buy the extra QPI link notion, it's far more likely it's just a way of spltting product lines to make more money, because there's no competition. How many times do I have to emphasise that toms' own reporters made clear at the time that the 3930K was an 8-core with 2 cores disabled, because Intel simply didn't need to release an 8-core? AMD couldn't even compete with Intel's 4-core of the day.

Intel is dragging its heels, and I'm weary of people, especially tech sites, making excuses for them. The coverage of all this sort of thing was a lot tougher 5 years ago. Blows my mind that anyone thinks the lane restriction of the entry BW-E is remotely acceptable or sensible. Just when storage tech needs more lanes, and CPUs get more cores, we get fewer lanes from the entry top-end CPU models. It's nuts.

Fact is, there are scenarios where a 4820K would be more effective than entry BW-E setups for multi-GPU tasks because the 4820K does have 40 lanes.

I've ranted about this for ages, but my rant reserves are depleted. Glad to see MRFS is keeping the torch burning. 8) Just wish tech site reporters would do the same thing, not let Intel get away with it, and make it clear there's huge opportunity here for AMD if only they realise it.

Ian.

 
Why not? It's what they did.

This would only make sense if they had an even higher-end version which had the missing lanes. But. They. Don't.

Look, I'm no fan of their market segmentation tactics. I hated it when they relegated ECC to the Xeon product line. But that's just how it is, and your whining isn't going to change anything. If you want more PCIe lanes in a modern CPU, you can either buy a dual-CPU Broadwell-E system or wait for Skylake-E.

You don't. You could just buy the Xeon version that has them enabled. Or, maybe you could lodge a complaint with the DoJ and try to force Intel to spin out its fabs into a separate company. It's unlikely to happen, at this point, but at least that would have a greater chance of making a difference than complaining to us.

They don't bleat on about it, because nobody wants to read that. And AMD has bigger problems to worry about. If they can get a competitive core to market, then they can worry about properly addressing the workstation market.

One reason I used Phenom II in my home server is the ridiculous amount of I/O it had. And ECC-support. And a chipset with 6 ports of SATA 3, way back in 2010/2011. And all for like 1/3rd the price of what a quad-core Xeon would've cost. I hope I can replace it with a comparable Zen-based CPU.
 


I think mapesdhs is referring to how some of the LGA 2011 i7s have 28 lanes while others have 40 despite being the same physical chips just because Intel disabled them to charge more for i7s with all 40 lanes (despite them being fully functional).
 


Exactly, point being Intel started off with a 4-core with 40 lanes (4820K) and then stepped down to a later CPU with 28 lanes (5820K). Since when in CPU history does a company deliberately make something worse like this? It's just bizarre that anyone can think it's remotely sensible to increase the number of cores by 50% while reducing the I/O potential. It means a 4820K/X79 build can literally do things that a 5820K/X99 build can't, which is totally whacko. That it's carried on, and yet people continue to buy into such madness, just shows how successful the marketing has been.

Sorry bit_user, but your reply is basically saying just give in to this stuff (and notions like moaning to the DoJ are pie in the sky). I took a different route: I bought older products on the used market that continue to hold significant value and productive potential. Sure, some/many will go ahead and buy Intel's newer crippled products (and I don't just mean the I/O lanes), but by doing so it's just encouraging even further moves in the same direction. One could say it circles back to lack of competition, which is probably true. Do you think what Intel did with its lane crippling after the 4820K is perfectly ok? If not, then criticising my pointing it out is rather odd. As for "just buy the XEON version", that's a lot more expensive and doesn't have the oc potential; at 4.8 GHz, a 3930K gives similar threaded performance to a 10-core XEON from the same era, sans differences with ECC, etc. There are plenty of pro users who can't afford XEON builds, hence the appeal of oc'd top-end consumer setups, of which I've built quite a few, mostly 3930K, but also 3970X and 4960X (using P9X79/E WS and R4E).

Either way, the point is the tech has been going backwards recently, or stagnating, while prices have been sharply rising (the price of the 6950X is just nuts). We know from 30 years of multi-processing computing tech evolution that balanced performance is important. Intel's recent products are distorting this balance, adopting far more potent I/O technologies without an increase in base I/O to match, reducing entry I/O at the high-end right when emerging VR and GPU acceleration would be best poised to make greater use of them, and so on.

Well, whatever. Keep paying to see Transformers movies at the cinema and Bay will keep making them. Same thing really.

Ian.
 
I didn't think that's what he said. Is this true? Can you get LGA 2011 v3 Broadwells (either i7-E or Xeon) with 40 lanes?

BTW, "if you want more PCIe lanes in a modern CPU" was a carefully-worded phrase, meant to allow for the obvious possibility of buying older tech. Do what you want, Ian. My main point was that complaining to us won't do any good.

I'm still quite happy with my Sandybridge-E. I probably won't upgrade until both Skylake-X and the Zen-based workstation APUs are available.
 