Question: Drive for MySQL server with continuous data stream (2.5" or M.2)


AtotehZ

Distinguished
Nov 23, 2008
403
13
18,815
Hello everyone.

I have a MySQL server that's running 24/7, always working.
Currently a 'Samsung 980 PRO SSD PCIe 4.0 NVMe M.2 - 1TB' is installed, but it was a very bad choice. It can't keep up.

It is installed in an Intel NUC (i5-10210U).

I need something that isn't limited by a small cache or a lackluster controller, at least 1TB, and preferably close to that capacity.
Whether it's M.2 or 2.5" is all the same to me. Maybe 2.5" NVMe?

I'm currently researching it myself as well, but your help would be greatly appreciated.
 
If a 980 PRO can't "keep up" then I think you have other problems. Based on my history with database software, my guess would be insufficient RAM in the server.
 
Much of it is run on the drive for security reasons. If it stops for any reason, the machine it controls must be able to continue where it left off when restarted.

I can suggest RAM issues to them, but personally I think it's unlikely. It's more likely that the drive's write cache is exhausted and it becomes too slow after that. It is constantly writing to the drive.

In fact, there are people who suggest a hard drive would be better for that kind of continuous write stream, again because the SSD's cache runs out and then the write speed takes a nosedive.
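One way to check this theory, assuming the server uses InnoDB: MySQL exposes its own write counters, so the actual write volume and fsync rate can be measured rather than guessed. A minimal sketch:

```sql
-- Sample InnoDB's write counters; run twice, e.g. 60 seconds apart,
-- and subtract to get bytes written and fsyncs per minute.
SHOW GLOBAL STATUS WHERE Variable_name IN
    ('Innodb_data_written',    -- bytes written to data files
     'Innodb_os_log_written',  -- bytes written to the redo log
     'Innodb_data_fsyncs',     -- fsync calls on data files
     'Innodb_os_log_fsyncs');  -- fsync calls on the redo log
```

If the fsync counters climb by thousands per second while the byte counters stay modest, the bottleneck is sync-write latency rather than sequential throughput, which would fit a drive that benchmarks fast but still "can't keep up".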
 
Database software should use RAM as the primary cache; cache on a disk should not be counted on. The NUC has a gigabit network interface, so at most roughly 125MB/s of input data could be coming into that device, and a 980 PRO can write that much without effort. So, IMO, there are other database problems: RAM allocation, bad indexing, badly written queries, insufficient CPU. I don't believe an NVMe SSD is the problem.
Do you know how to debug MySQL performance problems? Have you googled for MySQL performance tuning?
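For anyone starting that kind of tuning pass, two read-only checks give a quick picture of whether RAM is the issue; a hedged sketch, assuming InnoDB tables:

```sql
-- How much RAM is InnoDB allowed to use? The default is only 128MB,
-- which is far too small for a busy database.
SHOW VARIABLES LIKE 'innodb_buffer_pool_size';

-- Cache effectiveness: Innodb_buffer_pool_reads counts misses that had
-- to go to disk; if it grows quickly relative to
-- Innodb_buffer_pool_read_requests, the pool is too small for the
-- working set.
SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_read%';
```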
 
I think I've caused a misunderstanding. MySQL is running as a service on the NUC itself, talking directly to the machines it controls. I'd still consider it a server, but the distinction obviously matters if you assumed the data comes in over the network.

I do not know an awful lot about MySQL myself, but I do know about hardware. That's the reason I'm asking for help.

A SATA drive has been tested, and it half-solved the problem: it didn't have the dips in speed, but it also couldn't transfer nearly as fast.

There are also 8 identical units, all controlling physical machines in a warehouse; that's why a hardware defect is unlikely, since it would show up in only one of the machines. It's also why the system must remember exactly where the machines are at all times: if anything happens, the MySQL tables must be fully up to date with where things are when they're restarted. Otherwise wrong things happen.

My guess would be that either a very fast SATA disk, the right NAND type, or a much larger cache is what's needed.
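Given the hard requirement that the tables survive a restart intact, it's also worth confirming which durability settings the software vendor actually runs with, since full durability is exactly what turns a workload into a stream of tiny sync writes that stresses drive latency rather than throughput. A sketch, assuming InnoDB:

```sql
-- innodb_flush_log_at_trx_commit = 1 means an fsync on every commit;
-- sync_binlog = 1 means the binary log is also fsynced per commit.
-- Those two settings, not raw bandwidth, usually decide whether a
-- drive can "keep up" with commits arriving microseconds apart.
SHOW VARIABLES WHERE Variable_name IN
    ('innodb_flush_log_at_trx_commit', 'sync_binlog', 'innodb_doublewrite');
```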
 
Look at this review of the 980 PRO -- https://www.anandtech.com/show/16087/the-samsung-980-pro-pcie-4-ssd-review/3 The "Whole drive sequential write" test shows that the last 16GB written in a full-disk write still averaged almost 2000MB/s. You are looking at the wrong hardware, IMO.
 
I was looking for an illustration showing what I was talking about, but found this instead:
[attached image: sustained write speed chart]

I'm not sure what to say right now. I need to know how much data they actually need moved... and why it slows down. Sustaining over 50GB/min even past the 10th minute seems like more than enough.

I remember finding another image here on the site that showed almost every drive crawling to a halt after a couple of minutes. Here it just slows down slightly.
 
An SSD that is too FULL can crawl. That could be your problem. Is it filled beyond 80%?
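Checking how much space the databases themselves take is easy from SQL (free space on the filesystem is a separate, OS-level check):

```sql
-- Approximate on-disk size per database, in GB.
SELECT table_schema,
       ROUND(SUM(data_length + index_length) / 1024 / 1024 / 1024, 2) AS size_gb
FROM information_schema.tables
GROUP BY table_schema
ORDER BY size_gb DESC;
```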
 
Concur with the above. You have a RAM problem, as in not enough. DB applications need copious quantities of RAM for maximum performance. From your description it sounds very much like the 64GB max RAM supported in the NUC isn't sufficient. This is an application where 128-256GB at a minimum is required, and even more if your queries are poorly optimized. Your problem is most definitely not storage, unless the drive has reached ~80% full as indicated above. For the record, NUCs generally do not make for great DB hosts or servers.
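For what it's worth, the buffer pool can be grown without a restart on MySQL 5.7 and later; a hedged sketch, where the 48GB figure is only an illustration for a 64GB machine, not a measured recommendation:

```sql
-- Rule of thumb for a dedicated DB host: give InnoDB most of the RAM,
-- leaving headroom for the OS and per-connection buffers.
-- (Dynamic since MySQL 5.7; older versions need a my.cnf edit and a
-- restart. The value is rounded up to the buffer pool chunk size.)
SET GLOBAL innodb_buffer_pool_size = 48 * 1024 * 1024 * 1024;  -- 48GB
```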
 
What I've been told is that the updates to the indexes happen microseconds apart, and for some reason the 980 isn't suited to that. The maximum speed doesn't even seem to be the issue. I will look into it being the RAM, but it's not likely they'll acknowledge that as an issue.

Before I even started this thread I asked something to the effect of "Can't all that be run through the RAM?" and the answer was "It can't, due to reliability."
 
Which is precisely why more RAM is needed. Updates coming in that fast need some place to buffer, and the drive's cache is absolutely NOT the place for it. Whoever told you RAM isn't the problem obviously doesn't know what they're talking about from a DB standpoint.
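The InnoDB knob that governs this buffering-versus-durability trade-off is worth naming explicitly; a sketch of the standard options, hedged because the right choice depends on how much loss on power failure is tolerable:

```sql
-- innodb_flush_log_at_trx_commit controls redo-log flushing:
--   1 = write and fsync on every commit (full durability, hardest on
--       the drive; this is the default)
--   2 = write on commit, fsync roughly once per second (loses up to
--       ~1s of commits only on an OS crash or power loss)
--   0 = write and fsync roughly once per second (least safe, least I/O)
-- With a UPS in place, 2 is a common compromise; the change is dynamic:
SET GLOBAL innodb_flush_log_at_trx_commit = 2;
```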
 
I don't mean to be flippant, but the transactions ALL go through RAM no matter what. If there is concern about bit errors in RAM, then they have the wrong hardware, because they don't have ECC. If they are worried about losing data to a power issue, then they are missing a UPS.
I would like to ask you to start over with different questions: "What is the symptom?" "Can you reproduce the symptom with a simulator, or can you capture the inputs that cause the problem?" You need a test case. Then you can START to debug it, tune for it, or recommend a solution. Without a repeatable test case, you can't legitimately work a solution.
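As a starting point for that test case, MySQL can record exactly which statements are slow while the symptom is happening; a minimal sketch, with thresholds that are illustrative rather than recommendations:

```sql
-- Turn on the slow query log at runtime and capture anything slower
-- than 50ms, plus queries that scan without using an index.
SET GLOBAL slow_query_log = 'ON';
SET GLOBAL long_query_time = 0.05;            -- seconds; applies to new sessions
SET GLOBAL log_queries_not_using_indexes = 'ON';
SHOW VARIABLES LIKE 'slow_query_log_file';    -- where the log is written
```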
 