HighPoint has created the world's first dual-slot NVMe add-in card.
Dual-Slot NVMe Card With Eight E1.S SSDs Achieves 55 GB/s Performance : Read more
Compared to the cost of the drives, that's a mere down payment.
Due to its server roots, the SSD7749E is anything but cheap, costing $1,499.
I think U.3 is more of a direct successor to U.2, but much of the market will probably move on to E.1S and E.1L. Then again, judging by what's available today, I see that Micron has completely shifted to U.3, while Solidigm only offers U.2. Both offer E.1S and E.1L, of course.
E.1S ... It is reportedly the successor of U.2
If you can’t imagine a use case, it doesn’t mean that one doesn’t exist…
And PCI-E 4 still offers more than enough bandwidth for all sorts of applications. 55 GB a second is nice, but what real workload do you use that for?
A NAS with 400 Gbps Ethernet? For what? Serving all the video ads to the entire US West Coast?
Using this form factor would bring some very interesting designs for NASes like the Synology ones
I know that's the standard answer for what people do with workstations, but I'm still having trouble wrapping my mind around what exactly any of those involves that requires quite so much raw bandwidth.
Stuff like this is for server / workstation scenarios where you need insane disk capability. CAD/CAM, video rendering or modeling systems come to mind.
I don't have this information myself, but they may be touting the throughput in one direction, since the communication is full-duplex capable, and may only be stating the throughput in the form of a "read" or "write" operation; if that is the case, they can truthfully claim 55 GB/s as a read/write max capability.
I was about to applaud the fact that, after nearly 2 years of LGA1700 being on the market, here's something that actually justifies having a PCIe 5.0 x16 slot... until I got to this part:
"With eight drives installed, the card can speed up to 28 GB/s via a PCIe Gen 4 x16 interface and up to an even more impressive 55 GB/s with HighPoint's Cross-Sync RAID technology."
A quick web search indicates that "HighPoint's Cross-Sync RAID technology" involves using a pair of cards as a single device. So, it's misleading to tout this thing as delivering 55 GB/s, when you actually need 2 of them. If true, that makes the title factually inaccurate. Not just misleading, but outright wrong, because it clearly says one card reaches that speed:
"Dual-Slot NVMe Card With Eight E1.S SSDs Achieves 55 GB/s Performance"
What I find a bit funny is that most people running such storage-intensive applications would be using a server chassis with a purpose-built backplane, anyhow. You'd probably never want this in a desktop PC, as it probably sounds like a jet engine.
Makes me wonder what the heck their market is, or what those people would be doing with it. If we're talking networked storage, you'd need 400 Gbps Ethernet to reach its theoretical limit.
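To put some rough numbers behind that, here's a quick sketch; the figures are theoretical link rates only and ignore protocol and controller overhead, so treat them as approximations rather than anything HighPoint publishes:

```python
# Back-of-the-envelope PCIe and Ethernet math (theoretical rates, ignoring overhead).

GB_PER_S_PER_LANE = {3: 0.985, 4: 1.969, 5: 3.938}  # GB/s per lane after 128b/130b encoding

def pcie_gbs(gen: int, lanes: int) -> float:
    """Approximate one-direction PCIe bandwidth in GB/s."""
    return GB_PER_S_PER_LANE[gen] * lanes

one_card = pcie_gbs(4, 16)   # ~31.5 GB/s ceiling for a single Gen 4 x16 slot
two_cards = 2 * one_card     # ~63 GB/s, which is what a two-card Cross-Sync pair could expose
print(f"PCIe 4.0 x16: {one_card:.1f} GB/s; two x16 cards: {two_cards:.1f} GB/s")

# Network side: 28 GB/s of storage throughput expressed as an Ethernet line rate.
print(f"28 GB/s is roughly {28 * 8} Gbps, i.e. more than a 200 GbE link carries")
```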
Instead of speculating, just look at the actual claim. The claim was that 55 GB/s required the use of HighPoint's Cross-Sync RAID technology, as I quoted. If you want to understand their claim, then look into that technology, what it requires, and how it works.
I don't have this information myself, but they may be touting the throughput in one direction, since the communication is full-duplex capable, and may only be stating the throughput in the form of a "read" or "write" operation; if that is the case, they can truthfully claim 55 GB/s as a read/write max capability.
Never said that, my friend.
A NAS with 400 Gbps Ethernet?
Yes, but to max the 28 GB/s transfer rate, that's what you'd need. The point being: this is massive overkill for a NAS application.
Never said that, my friend.
A NAS would be better off using a purpose-built backplane. Better cooling, better accessibility, and you don't have the size/spacing constraints of this weird PCIe card.
I clearly stated form factor due to the hardware size/configuration.
Easier to do at lower capacities than high ones.
That said, I would also like to know how an NVMe drive could replace a spinning drive based on storage, not speed.
I'm sure nobody ever thought of trying to make SSDs cheaper.
Meaning, do whatever necessary to make the drives cheaper than rust drives but matching or exceeding the capacity.
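The gap being talked about here is really just cost per terabyte; a trivial comparison, with placeholder prices that are illustrative assumptions rather than current market data:

```python
# Cost-per-terabyte comparison with placeholder prices (illustrative assumptions only).
drives = {
    "20 TB SATA HDD (assumed $300)": (300, 20),
    "8 TB NVMe SSD (assumed $600)":  (600, 8),
}
for name, (price_usd, capacity_tb) in drives.items():
    print(f"{name}: ${price_usd / capacity_tb:.0f}/TB")
```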
For the vast majority of people who frequent this forum, Toms Hardware....that uber speed is very much not needed or noticeable.
I am sad to see the overall negativity in these forums. I am used to technical sites being more helpful and friendly than the general Internet. I am not sure why there are so many people here who would rather say something isn't useful than figure out how it could be, and who would rather tell you not to bother trying than help people figure out a way to make something work.
This product, for example, seems like it has very practical real-world uses. If nothing else, it has a market in the small business space. If, for example, you are trying to do massive 3-D modeling, generative CAD design, create a full-length film of computer animation, fine-tune an open-source large language model, or use the emerging video AI to create a sequel to a movie when one or more of the original actors or actresses have died, create a new AI application, or several other possibilities - well, this would be incredibly useful.
Do people really think that there is no current need? Put this in a modern motherboard with a good processor and set up RAID 6. Use it as the fast storage buffer and build a custom tower with a few dozen Western Digital Black 20 TB old-school SATA hard drives that have vibration dampening and abundant fans pushing airflow. Add a modern Quadro or even a 4090. Stick in a couple of 200 Gbps network cards.
Now I have a GPU accelerated SQL Server adequate for my needs. Faster and less expensive than anything commercial. I just have to shell out for a modern motherboard and processor so that I can get the 128 lanes of PCIe 4 so that the backplane is adequate.
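As a sanity check on the lane budget for a build like that, here is a rough sketch; the device list and lane counts below are assumptions based on the description above, not a validated configuration:

```python
# Rough PCIe lane budget for the build sketched above (device list is an assumption).
devices = {
    "NVMe RAID add-in card (x16)": 16,
    "GPU (x16)":                   16,
    "200 Gbps NIC #1 (x16)":       16,
    "200 Gbps NIC #2 (x16)":       16,
    "SAS/SATA HBA for HDDs (x8)":   8,
}
platform_lanes = 128  # e.g. a Threadripper Pro / EPYC-class platform

used = sum(devices.values())
print(f"Lanes used: {used} of {platform_lanes}; {platform_lanes - used} left over for M.2, USB, etc.")
```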
I tried to ask people in a different place on this website for advice about how to do this and got "don't even try" as an answer.
For me, since the SSDs are going to be in really heavy use, I expect to lose them over time. So even if 55 GB/s is just an on-card speed, if it helps with RAID array rebuild speed, this is still very useful.
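On the rebuild point, a rough estimate of how rebuild time scales with sustained throughput; the drive capacity and rates below are assumed examples, not figures from the card's spec sheet:

```python
# Rough RAID rebuild-time estimate (drive capacity and throughput are assumed examples).
def rebuild_hours(capacity_tb: float, sustained_gb_per_s: float) -> float:
    """Hours to rewrite one replacement drive at a given sustained rate."""
    return (capacity_tb * 1e12) / (sustained_gb_per_s * 1e9) / 3600

for rate in (1.0, 3.0, 5.0):  # GB/s actually sustained during the rebuild
    print(f"7.68 TB drive at {rate:.0f} GB/s: {rebuild_hours(7.68, rate):.1f} h")
```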
Got specifics? Tell me how big are these datasets and what kind of access rate/pattern they have which couldn't be fulfilled by a single NVMe drive. So far, all we've gotten are the typical platitudes about workstation applications.
This product, for example, seems like it has very practical real-world uses. If nothing else, it has a market in the small business space. If, for example, you are trying to do massive 3-D modeling, generative CAD design, create a full-length film of computer animation, fine-tune an open-source large language model, or use the emerging video AI to create a sequel to a movie when one or more of the original actors or actresses have died, create a new AI application, or several other possibilities - well, this would be incredibly useful.
Nobody said that. I think we just have questions. I'm receptive to answers reinforced by more than marketing speak.
Do people really think that there is no current need?
Tell me more. What kind of database is that big and has such high query-processing demands? And who, with such a database and hardware budget, wouldn't use a server with a standard hotswap backplane?
Put this in a modern motherboard with a good processor and set up RAID 6. Use it as the fast storage buffer and build a custom tower with a few dozen Western Digital Black 20 TB old-school SATA hard drives that have vibration dampening and abundant fans pushing airflow. Add a modern Quadro or even a 4090. Stick in a couple of 200 Gbps network cards.
Now I have a GPU accelerated SQL Server adequate for my needs.
Is 55 GB/s an on-card speed? That was never spelled out. Feel free to enlighten us.
For me, since the SSDs are going to be in really heavy use, I expect to lose them over time. So even if 55 GB/s is just an on-card speed, if it helps with RAID array rebuild speed, this is still very useful.
Ok, fair. Is there a different place that you would recommend looking? Serve the Home has some useful info. But I am having a hard time finding information about the kind of use cases I have described. I am getting back into more technical work after a side trip of a couple decades in the area I went to college for (biology). So I remember a time before cloud everything, and I have read the articles explaining that the cloud is usually a more expensive and sometimes less performing option. But everyone seems to think that you just buy off the shelf rack mount equipment. I don't see where the numbers justify the expense. I have the experience to build a properly cooled and vibration isolated file server, data base server, etc. But I know that I don't know even close to everything.
For the vast majority of people who frequent this forum, Toms Hardware....that uber speed is very much not needed or noticeable.
It does not benefit gameplay, nor watching videos, nor general computer use.
Moving from spinning HDD to solid state was a HUGE deal.
The various flavors of SSD? Not so much.
In a movie production house, when moving big data between systems and/or the backend?
Absolutely.
My nightly backups of my house systems, to my NAS?
Not even a little bit.
There are absolutely use cases for faster and faster.
But if you're speccing out a network for 50 workstations, a big SQL DB, and multigig network...then you are far out on the fringe for this forum.
Who is doing that and why? The only examples that come to mind might be video editors, in a major news or sports network, locally editing clips of 4k content from a centralized SAN. The storage demands of such a setup are going to far outstrip bandwidth, as a driving factor. I think caching/buffering on the server won't help much, either.
But I am sure that I am not the only person who needs this. 200 or even 400 Gbps can get saturated easily when you have 25 to 100 workstations connected at 25 Gbps.
Most network traffic tends to be extremely bursty. Thus, having 10 people connected @ 10 Gbps doesn't mean you need 100 Gbps on your server. That would probably take at least 100 users or more, depending on what they're doing. You can serve 10 Gbps without any kind of exotic storage - even a moderately big RAID of HDDs could sustain that speed, if the accesses were mostly serial.
Even for less intense 10 Gbps an office full of people could max out a file server at these speeds - and 10 Gbps is probably a minimum these days for a company doing media creation, mechanical engineering, etc.
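The burstiness point above can be made concrete with a toy estimate; the duty-cycle figure is an assumption for illustration, not a measurement:

```python
# Toy estimate of aggregate file-server demand from bursty clients (all figures are assumptions).
clients = 25        # workstations
link_gbps = 25      # per-client link speed
duty_cycle = 0.05   # assumed fraction of time each client is actually transferring

expected_gbps = clients * link_gbps * duty_cycle
worst_case_gbps = clients * link_gbps
print(f"Expected load: ~{expected_gbps:.0f} Gbps; worst case if everyone bursts at once: {worst_case_gbps} Gbps")
```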
If down time is that expensive, then I'd probably pay the $ for a server from a big OEM + on site support and use the supported OS and software they recommend.
If you are paying people with multiple engineering degrees a couple hundred thousand a year, you don't want to be paying them to wait on the network.
For off the rack stuff, the "expense" is often justified by the support.
Ok, fair. Is there a different place that you would recommend looking? Serve the Home has some useful info. But I am having a hard time finding information about the kind of use cases I have described. I am getting back into more technical work after a side trip of a couple decades in the area I went to college for (biology). So I remember a time before cloud everything, and I have read the articles explaining that the cloud is usually a more expensive and sometimes less performing option. But everyone seems to think that you just buy off the shelf rack mount equipment. I don't see where the numbers justify the expense. I have the experience to build a properly cooled and vibration isolated file server, data base server, etc. But I know that I don't know even close to everything.
So if you have a suggestion for a better forum, I would be grateful.
If you're looking for specific recommendations, then I'd suggest opening a new thread in the appropriate forum, which I think would be one of:
If anyone has suggestions for a motherboard that actually offers a full 16 lanes of PCIe 3 or higher usable on more than 4 expansion slots at once, I would be grateful. Modern high-end Threadrippers support 128 lanes of PCIe 4 and there are Xeons that offer even more. But I am having some difficulty finding reasonably priced motherboards that actually make it available.
So I am personally interested. I am working on several different projects. For myself and clients. I can't go into excessive detail, but I think that I can explain enough that it should make sense.
Got specifics? Tell me how big are these datasets and what kind of access rate/pattern they have which couldn't be fulfilled by a single NVMe drive. So far, all we've gotten are the typical platitudes about workstation applications.
Nobody said that. I think we just have questions. I'm receptive to answers reinforced by more than marketing speak.
Tell me more. What kind of database is that big and has such high query-processing demands? And who, with such a database and hardware budget, wouldn't use a server with a standard hotswap backplane?
Furthermore, do storage buffers make a lot of sense for such databases? And, if the SSDs are merely a storage buffer, then why not just go RAID-5?
Is 55 GB/s an on-card speed? That was never spelled out. Feel free to enlighten us.
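For reference on the RAID-5 vs RAID-6 question above, here is the capacity/fault-tolerance trade-off for an eight-drive set; the drive size is an assumed example:

```python
# RAID 5 vs RAID 6 usable capacity for an eight-drive set (drive size is an assumed example).
drive_count, drive_tb = 8, 7.68
for level, parity in (("RAID 5", 1), ("RAID 6", 2)):
    usable_tb = (drive_count - parity) * drive_tb
    print(f"{level}: {usable_tb:.2f} TB usable, tolerates {parity} drive failure(s)")
```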
I don't know how to say this without sounding like a jerk, but would you pay the extra money, or would a company that you are working for and whose management doesn't even know that there are cheaper options? I am not a big company, and I don't have any clients with money to spare (yet, it would be nice). I have a small consulting company. But just because something isn't easy does not mean that it can't be done. Taking advantage of niche opportunities and avoiding seemingly minor inefficiencies is one of the ways that you can start a new business and compete against larger, more established companies. Because you know every part of it, squeeze pennies, and think very far outside of the box. It's not how you keep running things once you have tens of millions in assets, but it is one way to get started.
Who is doing that and why? The only examples that come to mind might be video editors, in a major news or sports network, locally editing clips of 4k content from a centralized SAN. The storage demands of such a setup are going to far outstrip bandwidth, as a driving factor. I think caching/buffering on the server won't help much, either.
Most network traffic tends to be extremely bursty. Thus, having 10 people connected @ 10 Gbps doesn't mean you need 100 Gbps on your server. That would probably take at least 100 users or more, depending on what they're doing. You can serve 10 Gbps without any kind of exotic storage - even a moderately big RAID of HDDs could sustain that speed, if the accesses were mostly serial.
If down time is that expensive, then I'd probably pay the $ for a server from a big OEM + on site support and use the supported OS and software they recommend.
At my job, they don't care too much, because the IT department is decoupled from engineering. They give us Dell Precision client machines (laptops or fixed workstations) and we use Github, Microsoft OneDrive, and Atlassian (Jira, Confluence, etc.) for our main storage needs. I'm sure there are some groups with more bespoke additional infrastructure, if they have more exotic needs which justify it. If a client machine fails or the IT department messes up VPN access to our testing lab or Git LFS server, then the engineering department just eats the cost of the downtime.
Very true. But most companies are not going to want to support these situations anyway. One client is developing not just a new PCIe card, but ASICs to go on it. If you know a server maker where that does not void the warranty, I would love to know. Eventually, of course, it's going off to UL before it is marked. But design (and possibly magic smoke escaping, much as no one wants it) comes first.
For off the rack stuff, the "expense" is often justified by the support.
That is what you are paying for.
And I say this as a former admin, currently sr dev, for a user base of 150K+.