News Dual NVMe Card With Eight E1.S SSDs Achieves 55 GB/s Performance

I was about to applaud the fact that, after nearly 2 years of LGA1700 being on the market, here's something that actually justifies having a PCIe 5.0 x16 slot... until I got to this part:

"With eight drives installed, the card can speed up to 28 GB/s via a PCIe Gen 4 x16 interface and up to an even more impressive 55 GB/s with HighPoint's Cross-Sync RAID technology."

A quick web search indicates that "HighPoint's Cross-Sync RAID technology" involves using a pair of cards as a single device. So, it's misleading to tout this thing as delivering 55 GB/s, when you actually need 2 of them. If true, that makes the title factually inaccurate. Not just misleading, but outright wrong, because it clearly says one card reaches that speed:

"Dual-Slot NVMe Card With Eight E1.S SSDs Achieves 55 GB/s Performance"

What I find a bit funny is that most people running such storage-intensive applications would be using a server chassis with a purpose-built backplane, anyhow. You'd probably never want this in a desktop PC, as it probably sounds like a jet engine.

Makes me wonder what the heck their market is, or what those people would be doing with it. If we're talking networked storage, you'd need 400 Gbps Ethernet to reach its theoretical limit.
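
To put rough numbers on that (simple 8-bits-per-byte math, ignoring Ethernet/TCP overhead, so the real requirement is somewhat higher):

```python
# Quick conversion: how fast a network link you'd need to move these speeds off-box.
def gbps_needed(gb_per_s):
    return gb_per_s * 8  # GB/s -> Gbps, ignoring protocol overhead

print(gbps_needed(28))  # ~224 Gbps for the single-card PCIe 4.0 x16 figure
print(gbps_needed(55))  # ~440 Gbps for the Cross-Sync figure
```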

Due to its server roots, the SSD7749E is anything but cheap, costing $1,499.
Compared to the cost of the drives, that's a mere down payment.

E1.S ... It is reportedly the successor of U.2
I think U.3 is more of a direct successor to U.2, but much of the market will probably move on to E1.S and E1.L. Then again, judging by what's available today, I see that Micron has completely shifted to U.3, while Solidigm only offers U.2. Both offer E1.S and E1.L, of course.
 
I think PCIe 5.0 is way more expensive to implement than PCIe 4.0, and PCIe 4.0 still offers more than enough bandwidth for just about any application. 55 GB/s is nice, but what real workload do you use that for?
 
Read the headline and thought "that sounds like something HighPoint would do", then read the article and confirmed. Stuff like this is for server / workstation scenarios where you need insane disk capability. CAD/CAM, video rendering or modeling systems come to mind.
 
Using this form factor could enable some very interesting designs for NASes like the Synology ones
A NAS with 400 Gbps Ethernet? For what? Serving all the video ads to the entire US West Coast?
:D

The bandwidth numbers are just impossibly high, for it to make sense as networked storage. Whatever people do with these, it's not that.

Stuff like this is for server / workstation scenarios where you need insane disk capability. CAD/CAM, video rendering or modeling systems come to mind.
I know that's the standard answer for what people do with workstations, but I'm still having trouble wrapping my mind around what exactly any of those involves that requires quite so much raw bandwidth.

The only thing I can come up with is that you're using this for analyzing some scientific or GIS-type data set that won't fit in RAM.
 
I was about to applaud the fact that, after nearly 2 years of LGA1700 being on the market, here's something that actually justifies having a PCIe 5.0 x16 slot... until I got to this part:
"With eight drives installed, the card can speed up to 28 GB/s via a PCIe Gen 4 x16 interface and up to an even more impressive 55 GB/s with HighPoint's Cross-Sync RAID technology."​

A quick web search indicates that "HighPoint's Cross-Sync RAID technology" involves using a pair of cards as a single device. So, it's misleading to tout this thing as delivering 55 GB/s, when you actually need 2 of them. If true, that makes the title factually inaccurate. Not just misleading, but outright wrong, because it clearly says one card reaches that speed:
"Dual-Slot NVMe Card With Eight E1.S SSDs Achieves 55 GB/s Performance"

What I find a bit funny is that most people running such storage-intensive applications would be using a server chassis with a purpose-built backplane, anyhow. You'd probably never want this in a desktop PC, as it probably sounds like a jet engine.

Makes me wonder what the heck their market is, or what those people would be doing with it. If we're talking networked storage, you'd need 400 Gbps Ethernet to reach its theoretical limit.


Compared to the cost of the drives, that's a mere down payment.


I think U.3 is more of a direct successor to U.2, but much of the market will probably move on to E1.S and E1.L. Then again, judging by what's available today, I see that Micron has completely shifted to U.3, while Solidigm only offers U.2. Both offer E1.S and E1.L, of course.
I don't have this information myself, but they may be touting combined throughput: the link is full-duplex capable, so if they're quoting the sum of the read and write maximums, they could truthfully claim 55 GB/s of combined read/write performance.

To continue, it may also be that the person writing the article (can't confirm myself, of course) didn't fully convey that HighPoint's proprietary Cross-Sync RAID allows two x16 slots to be used for even more throughput.

Just my guess...
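
For what it's worth, the arithmetic behind that guess would look something like this - purely hypothetical, since nothing in the article confirms they're summing read and write:

```python
# Hypothetical only: what the numbers look like if 55 GB/s were a summed read+write figure.
pcie4_x16_one_way = 28            # GB/s usable in one direction, per the article
duplex_total = pcie4_x16_one_way * 2
print(duplex_total)               # 56 GB/s -- suspiciously close to the quoted 55 GB/s
```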
 
I don't have this information myself, but they may be touting combined throughput: the link is full-duplex capable, so if they're quoting the sum of the read and write maximums, they could truthfully claim 55 GB/s of combined read/write performance.
Instead of speculating, just look at the actual claim. The claim was that 55 GB/s required the use of HighPoint's Cross-Sync RAID technology, as I quoted. If you want to understand their claim, then look into that technology, what it requires, and how it works.

It doesn't help anyone to make up hypothetical claims, based on hypothetical data, if that's not what they actually claimed. You don't know the architecture of the RAID controller, nor whether or how effectively it can operate in full-duplex mode. Any facts you conjure in your imagination might turn out to be totally false, and therefore misleading.

We value real-world data, which is why testing features so prominently in the actual reviews on this site. Even when we don't have the ability to test the product in question, a responsibility exists to understand and communicate, in as much detail as possible, what the manufacturer is actually claiming.
 
A NAS with 400 Gbps Ethernet?
Never said that, my friend.

I clearly stated form factor, due to the hardware size/configuration.

That said, I would also like to know how an NVMe drive could replace a spinning drive based on storage, not speed. Meaning, do whatever is necessary to make the drives cheaper than spinning rust while matching or exceeding its capacity.
 
Never said that, my friend.
Yes, but to max the 28 GB/s transfer rate, that's what you'd need. The point being: this is massive overkill for a NAS application.

I clearly stated form factor, due to the hardware size/configuration.
A NAS would be better off using a purpose-built backplane. Better cooling, better accessibility, and you don't have the size/spacing constraints of this weird PCIe card.

That said, I would also like to know how an NVMe drive could replace a spinning drive based on storage, not speed.
Easier to do at lower capacities than high ones.

Meaning, do whatever is necessary to make the drives cheaper than spinning rust while matching or exceeding its capacity.
I'm sure nobody ever thought of trying to make SSDs cheaper.
:P

We already know what makes a cheap SSD: go QLC and DRAM-less. Use an old, slower controller and cheap NAND. If you want to use those bargain basement SSDs in your NAS, don't let me stop you!
 
I am sad to see the overall negativity in these forums. I am used to technical sites being more helpful and friendly than the general Internet. I am not sure why there are so many people here who would rather say something isn't useful than figure out how it could be, and who would rather tell you not to bother trying than help people figure out a way to make something work.

This product, for example, seems like it has very practical real-world uses. If nothing else, it has a market in the small business space. If, for example, you are trying to do massive 3D modeling, generative CAD design, create a full-length computer-animated film, fine-tune an open-source large language model, use the emerging video AI to create a sequel to a movie when one or more of the original actors or actresses have died, create a new AI application, or several other possibilities - well, this would be incredibly useful.

Do people really think that there is no current need? Put this in a modern motherboard with a good processor and set up RAID 6. Use it as the fast storage buffer, and build a custom tower with a few dozen Western Digital Black 20 TB old-school SATA hard drives, with vibration dampening and abundant fans pushing airflow. Add a modern Quadro or even a 4090. Stick in a couple of 200 Gbps network cards.

Now I have a GPU-accelerated SQL server adequate for my needs. Faster and less expensive than anything commercial. I just have to shell out for a modern motherboard and processor so that I can get the 128 lanes of PCIe 4.0 needed for an adequate backplane.

I tried to ask people in a different place on this website for advice about how to do this and got "don't even try" as an answer.

For me, since the SSDs are going to be in really heavy use, I expect to lose some over time. So even if 55 GB/s is just an on-card speed, if it helps with RAID array rebuild speed, this is still very useful.
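
For illustration, a crude rebuild-time estimate (the capacities and rates below are my own assumptions, and it ignores that rebuilds compete with live I/O):

```python
# Very rough rebuild-time estimate: time ~ capacity / sustained rebuild rate.
def rebuild_hours(capacity_tb, rate_gb_per_s):
    return capacity_tb * 1000 / rate_gb_per_s / 3600

print(round(rebuild_hours(8, 3.0), 1))     # ~0.7 h for an 8 TB E1.S SSD rebuilt at 3 GB/s
print(round(rebuild_hours(20, 0.25), 1))   # ~22.2 h for a 20 TB HDD rebuilt at 250 MB/s
```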
 
To clarify, by modern motherboard and processor I mean a current production Xeon or Threadripper system. I am basically planning a server room, with custom designed and built equipment. I would love for what I need to be cheap enough to just get pre built rack mount, but have you looked at the prices? Besides, I don't really want a separate NAS with a high speed connection - unless someone is willing to take the time to explain some way that this won't introduce additional latency compared to having the storage right on the database server.

There really are uses for a personal cloud. It can be a lot faster and cheaper than trying to connect over the Internet to one of the big providers. And yes, I understand that this card is not going to be maxed out unless I have extremely high speed connections. But part of my purpose is to preprocess a bit before data even leaves the server.
But I am sure that I am not the only person who needs this. 200 or even 400 Gbps can get saturated easily when you have 25 to 100 workstations connected at 25 Gbps. Even at a less intense 10 Gbps, an office full of people could max out a file server at these speeds - and 10 Gbps is probably a minimum these days for a company doing media creation, mechanical engineering, etc.

If you are paying people with multiple engineering degrees a couple hundred thousand a year, you don't want to be paying them to wait on the network.
 
I am sad to see the overall negativity in these forums. I am used to technical sites being more helpful and friendly than the general Internet. I am not sure why there are so many people here who would rather say something isn't useful than figure out how it could be, and who would rather tell you not to bother trying than help people figure out a way to make something work.

This product, for example, seems like it has very practical real-world uses. If nothing else, it has a market in the small business space. If, for example, you are trying to do massive 3D modeling, generative CAD design, create a full-length computer-animated film, fine-tune an open-source large language model, use the emerging video AI to create a sequel to a movie when one or more of the original actors or actresses have died, create a new AI application, or several other possibilities - well, this would be incredibly useful.

Do people really think that there is no current need? Put this in a modern motherboard with a good processor and set up RAID 6. Use it as the fast storage buffer, and build a custom tower with a few dozen Western Digital Black 20 TB old-school SATA hard drives, with vibration dampening and abundant fans pushing airflow. Add a modern Quadro or even a 4090. Stick in a couple of 200 Gbps network cards.

Now I have a GPU-accelerated SQL server adequate for my needs. Faster and less expensive than anything commercial. I just have to shell out for a modern motherboard and processor so that I can get the 128 lanes of PCIe 4.0 needed for an adequate backplane.

I tried to ask people in a different place on this website for advice about how to do this and got "don't even try" as an answer.

For me, since the SSDs are going to be in really heavy use, I expect to lose some over time. So even if 55 GB/s is just an on-card speed, if it helps with RAID array rebuild speed, this is still very useful.
For the vast majority of people who frequent this forum, Tom's Hardware... that uber speed is very much not needed or noticeable.

It does not benefit gameplay, nor watching videos, nor general computer use.

Moving from spinning HDD to solid state was a HUGE deal.
The various flavors of SSD? Not so much.

In a movie production house, when moving big data between systems and/or the backend?
Absolutely.

My nightly backups of my house systems, to my NAS?
Not even a little bit.


There are absolutely use cases for faster and faster.
But if you're speccing out a network for 50 workstations, a big SQL DB, and multigig network...then you are far out on the fringe for this forum.
 
This product, for example, seems like it has very practical real-world uses. If nothing else, it has a market in the small business space. If, for example, you are trying to do massive 3D modeling, generative CAD design, create a full-length computer-animated film, fine-tune an open-source large language model, use the emerging video AI to create a sequel to a movie when one or more of the original actors or actresses have died, create a new AI application, or several other possibilities - well, this would be incredibly useful.
Got specifics? Tell me how big these datasets are and what kind of access rate/pattern they have that couldn't be fulfilled by a single NVMe drive. So far, all we've gotten are the typical platitudes about workstation applications.

Do people really think that there is no current need?
Nobody said that. I think we just have questions. I'm receptive to answers reinforced by more than marketing speak.

Put this in a modern motherboard with a good processor and set up RAID 6. Use it as the fast storage buffer, and build a custom tower with a few dozen Western Digital Black 20 TB old-school SATA hard drives, with vibration dampening and abundant fans pushing airflow. Add a modern Quadro or even a 4090. Stick in a couple of 200 Gbps network cards.

Now I have a GPU-accelerated SQL server adequate for my needs.
Tell me more. What kind of database is that big and has such high query-processing demands? And who, with such a database and hardware budget, wouldn't use a server with a standard hotswap backplane?

Furthermore, do storage buffers make a lot of sense for such databases? And, if the SSDs are merely a storage buffer, then why not just go RAID-5?
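
For anyone weighing that tradeoff, the usable-capacity math is simple (8 drives of an assumed size, purely for illustration):

```python
# Usable capacity for an 8-drive array; drive size is a placeholder.
drives, drive_tb = 8, 8
raid5_usable = (drives - 1) * drive_tb   # one drive's worth of parity  -> 56 TB, survives 1 failure
raid6_usable = (drives - 2) * drive_tb   # two drives' worth of parity  -> 48 TB, survives 2 failures
print(raid5_usable, raid6_usable)
```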

For me, since the SSDs are going to be in really heavy use, I expect to lose some over time. So even if 55 GB/s is just an on-card speed, if it helps with RAID array rebuild speed, this is still very useful.
Is 55 GB/s an on-card speed? That was never spelled out. Feel free to enlighten us.
 
If anyone has suggestions for a motherboard that actually offers a full 16 lanes of PCIe 3.0 or higher, usable on more than 4 expansion slots at once, I would be grateful. Modern high-end Threadrippers support 128 lanes of PCIe 4.0, and there are Xeons that offer even more. But I am having some difficulty finding reasonably priced motherboards that actually make it available.

At the moment I am just working on a smaller-scale proof of concept using one of Asus's old X299 boards. Their Sage line makes up to 4 PCIe 3.0 x16 slots available. But it is only PCIe 3.0, and those boards use Skylake and Cascade Lake i9 processors with 44 or 48 lanes (up to 18 cores and 36 threads) and a modified chipset with 2 PCIe controllers instead of one to make this work, if I understand correctly. So it's a good, less expensive option for a proof of concept, and I may be able to squeeze out the performance I need for a production system.

But it would be a lot better to be able to migrate to a system that will let me have everything: 50+ terabytes of RAID 6 SSDs, over a petabyte of hard disk storage, at least one high-end GPU, at least 200 Gbps of network connectivity, and around 50 threads at about 3 GHz or more. Ideally without paying more for just the motherboard than I did for my used car :)
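
To make the lane math concrete, here's the sort of budget I'm talking about (the device list and lane counts are just my assumptions for illustration):

```python
# Hypothetical PCIe 4.0 lane budget for the build described above.
devices = {
    "HighPoint SSD7749E (NVMe card)": 16,
    "GPU (Quadro or 4090)":           16,
    "200 Gbps NIC #1":                16,
    "200 Gbps NIC #2":                16,
    "HBA for the SATA HDD tower":      8,
}
print(sum(devices.values()))  # 72 lanes: fine on a 128-lane Threadripper Pro / Xeon platform,
                              # not possible on a 44/48-lane X299 CPU without switches or cut-down slots
```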
 
For the vast majority of people who frequent this forum, Tom's Hardware... that uber speed is very much not needed or noticeable.

It does not benefit gameplay, nor watching videos, nor general computer use.

Moving from spinning HDD to solid state was a HUGE deal.
The various flavors of SSD? Not so much.

In a movie production house, when moving big data between systems and/or the backend?
Absolutely.

My nightly backups of my house systems, to my NAS?
Not even a little bit.


There are absolutely use cases for faster and faster.
But if you're speccing out a network for 50 workstations, a big SQL DB, and multigig network...then you are far out on the fringe for this forum.
Ok, fair. Is there a different place that you would recommend looking? ServeTheHome has some useful info, but I am having a hard time finding information about the kind of use cases I have described.

I am getting back into more technical work after a side trip of a couple of decades in the area I went to college for (biology). So I remember a time before cloud-everything, and I have read the articles explaining that the cloud is usually a more expensive and sometimes lower-performing option. But everyone seems to think that you just buy off-the-shelf rack-mount equipment, and I don't see where the numbers justify the expense. I have the experience to build a properly cooled and vibration-isolated file server, database server, etc. But I know that I don't know even close to everything.

So if you have a suggestion for a better forum, I would be grateful.
 
But I am sure that I am not the only person who needs this. 200 or even 400 Gbps can get saturated easily when you have 25 to 100 workstations connected at 25 Gbps.
Who is doing that and why? The only examples that come to mind might be video editors, in a major news or sports network, locally editing clips of 4k content from a centralized SAN. The storage demands of such a setup are going to far outstrip bandwidth, as a driving factor. I think caching/buffering on the server won't help much, either.

Even at a less intense 10 Gbps, an office full of people could max out a file server at these speeds - and 10 Gbps is probably a minimum these days for a company doing media creation, mechanical engineering, etc.
Most network traffic tends to be extremely bursty. Thus, having 10 people connected @ 10 Gbps doesn't mean you need 100 Gbps on your server. That would probably take at least 100 users or more, depending on what they're doing. You can serve 10 Gbps without any kind of exotic storage - even a moderately big RAID of HDDs could sustain that speed, if the accesses were mostly serial.
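
A toy model of that burstiness (the 10% duty cycle is an assumed number, purely to illustrate the point):

```python
# Toy model: average load on the file server = clients * link speed * duty cycle.
clients, client_gbps, duty_cycle = 10, 10, 0.10   # duty cycle is an assumption
avg_gbps = clients * client_gbps * duty_cycle
print(avg_gbps)   # 10 Gbps average; you provision for overlapping bursts, not the 100 Gbps sum
```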

If you are paying people with multiple engineering degrees a couple hundred thousand a year, you don't want to be paying them to wait on the network.
If down time is that expensive, then I'd probably pay the $ for a server from a big OEM + on site support and use the supported OS and software they recommend.

At my job, they don't care too much, because the IT department is decoupled from engineering. They give us Dell Precision client machines (laptops or fixed workstations) and we use Github, Microsoft OneDrive, and Atlassian (Jira, Confluence, etc.) for our main storage needs. I'm sure there are some groups with more bespoke additional infrastructure, if they have more exotic needs which justify it. If a client machine fails or the IT department messes up VPN access to our testing lab or Git LFS server, then the engineering department just eats the cost of the downtime.
 
Ok, fair. Is there a different place that you would recommend looking? ServeTheHome has some useful info, but I am having a hard time finding information about the kind of use cases I have described.

I am getting back into more technical work after a side trip of a couple of decades in the area I went to college for (biology). So I remember a time before cloud-everything, and I have read the articles explaining that the cloud is usually a more expensive and sometimes lower-performing option. But everyone seems to think that you just buy off-the-shelf rack-mount equipment, and I don't see where the numbers justify the expense. I have the experience to build a properly cooled and vibration-isolated file server, database server, etc. But I know that I don't know even close to everything.

So if you have a suggestion for a better forum, I would be grateful.
For off the rack stuff, the "expense" is often justified by the support.
That is what you are paying for.

And I say this as a former admin, currently sr dev, for a user base of 150K+.
 
If anyone has suggestions for a motherboard that actually offers a full 16 lanes of PCIe 3.0 or higher, usable on more than 4 expansion slots at once, I would be grateful. Modern high-end Threadrippers support 128 lanes of PCIe 4.0, and there are Xeons that offer even more. But I am having some difficulty finding reasonably priced motherboards that actually make it available.
If you're looking for specific recommendations, then I'd suggest opening a new thread in the appropriate forum, which I think would be one of:

When you post, do specify a budget and any other constraints you're working under. I think it's okay if you drop a link here, so we can follow you to the new thread.
 
Got specifics? Tell me how big these datasets are and what kind of access rate/pattern they have that couldn't be fulfilled by a single NVMe drive. So far, all we've gotten are the typical platitudes about workstation applications.


Nobody said that. I think we just have questions. I'm receptive to answers reinforced by more than marketing speak.


Tell me more. What kind of database is that big and has such high query-processing demands? And who, with such a database and hardware budget, wouldn't use a server with a standard hotswap backplane?

Furthermore, do storage buffers make a lot of sense for such databases? And, if the SSDs are merely a storage buffer, then why not just go RAID-5?


Is 55 GB/s an on-card speed? That was never spelled out. Feel free to enlighten us.
So, I am personally interested. I am working on several different projects, for myself and for clients. I can't go into excessive detail, but I think I can explain enough that it should make sense.

One of the reasons that a number of the AI models out there are so resource-intensive is that some of them are (more or less completely) written in Python. Since it's a scripting language, it's easier to make changes without waiting for huge files to recompile, but it is also slow to run. Once you finally have something working you can compile it or migrate to a different language (usually not straightforward), but then the base code is frozen and not easy to tweak. The whole situation is an awkward compromise. I am part of a project trying to develop a different approach where as much as possible is handled by a database and precompiled functions, connected with just bits of easily changed script.
To some degree things already work this way, with AI programs using some sort of database and Python libraries, but it looks like there is room for optimization. An easy but misleading example comes to mind: Microsoft was attempting to provide this sort of system by letting you link and automate Office functions with VBScript, but unfortunately Office is not exactly efficient itself, which makes it hard to see the potential in doing something like this well.
To do just this project effectively, you need to be able to quickly load different models and data sets and work on them. We are talking about a lot of data. The better implementations of LLaMA - the permissive-use (not exactly open source) large language model from Meta - and other models based on it use around 65 - 70 billion parameters, which tends to work out to somewhere around 70 to 80 gigabytes. And that's for each different training data set.
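
For scale, the back-of-the-envelope model-size math (generic precision assumptions, nothing specific to our project):

```python
# Approximate checkpoint size: parameter count * bytes per parameter.
params = 70e9
for label, bytes_per_param in [("fp16/bf16", 2), ("int8", 1)]:
    print(label, round(params * bytes_per_param / 1e9), "GB")
# fp16/bf16 -> ~140 GB, int8 -> ~70 GB; training also needs optimizer state, several times larger.
```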
Since I'm trying to redesign things so that training can be performed on an office network with centrally stored data, speed is important - especially since, if I want to finish in a few years, I need to be able to test multiple designs, each on multiple data sets, at the same time.

A client is trying to use iterative design to produce a new aircraft design for DARPA. Obviously I am not going into detail, but it is a vast amount of modeling, and they need to run it in-house for security reasons. They are a small company; it will be a breakthrough for them if they can make something compelling enough to get the contract.

Another wants to simulate chemical plant designs for a complicated multi-step process and would prefer to do this in-house if they can.

This is not even all of what I have, personally.

Some of it gets even more unusual, but it is mainly people who would like to bring industrial simulation out of the cloud because of cost, security (mainly industrial espionage), or latency.

By the way - doesn't the article specifically say that the 55 GB/s (admittedly advertising speak) is between drives on the card, using a PCIe switch? The only way the math works is as a cumulative number, since I don't think a single 4-lane PCIe drive supports anything close to that. In any case, I desperately need fast reads, with writes being less important, and I fully expect to kill an SSD in the middle of a long simulation running over the weekend, or while I am out of town. So yes, being able to expect that things keep working, even if not optimally, without babysitting is good. Hence the excessive RAID.
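
The cumulative math would look like this, if that's what they mean (the per-drive figure is a generic PCIe 4.0 x4 ceiling I'm assuming, not a spec for these particular E1.S drives):

```python
# If 55 GB/s is the aggregate on the drive side of the card's PCIe switch:
per_drive_gb_s = 7.0     # assumed realistic PCIe 4.0 x4 NVMe sequential-read speed
drives = 8
print(per_drive_gb_s * drives)   # 56 GB/s summed across the drives
print(28)                        # vs. ~28 GB/s through the single x16 host link
```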
 
Who is doing that and why? The only examples that come to mind might be video editors, in a major news or sports network, locally editing clips of 4k content from a centralized SAN. The storage demands of such a setup are going to far outstrip bandwidth, as a driving factor. I think caching/buffering on the server won't help much, either.


Most network traffic tends to be extremely bursty. Thus, having 10 people connected @ 10 Gbps doesn't mean you need 100 Gbps on your server. That would probably take at least 100 users or more, depending on what they're doing. You can serve 10 Gbps without any kind of exotic storage - even a moderately big RAID of HDDs could sustain that speed, if the accesses were mostly serial.


If down time is that expensive, then I'd probably pay the $ for a server from a big OEM + on site support and use the supported OS and software they recommend.

At my job, they don't care too much, because the IT department is decoupled from engineering. They give us Dell Precision client machines (laptops or fixed workstations) and we use Github, Microsoft OneDrive, and Atlassian (Jira, Confluence, etc.) for our main storage needs. I'm sure there are some groups with more bespoke additional infrastructure, if they have more exotic needs which justify it. If a client machine fails or the IT department messes up VPN access to our testing lab or Git LFS server, then the engineering department just eats the cost of the downtime.
I don't know how to say this without sounding like a jerk, but would you pay the extra money, or would it be paid by a company you work for, whose management doesn't even know there are cheaper options? I am not a big company, and I don't have any clients with money to spare (yet - it would be nice). I have a small consulting company. But just because something isn't easy does not mean it can't be done. Taking advantage of niche opportunities and avoiding seemingly minor inefficiencies is one of the ways you can start a new business and compete against larger, more established companies - because you know every part of it, squeeze pennies, and think very far outside the box. It's not how you keep running things once you have tens of millions in assets, but it is one way to get started.
 
For off the rack stuff, the "expense" is often justified by the support.
That is what you are paying for.

And I say this as a former admin, currently sr dev, for a user base of 150K+.
Very true. But most companies are not going to want to support these situations anyway. One client is developing not just a new PCIe card, but ASICs to go on it. If you know a server maker where that does not void the warranty, I would love to know. Eventually, of course, it's going off to UL before it is marked. But design (and possibly magic smoke escaping, much as no one wants it) comes first.
 