Question 10 GbE NAS Build (Synology): Needs to auto-transfer from SSD array to connected HDD array for select data not accessed for a certain time

Eggz

Distinguished
Hi Fellow Storage Nerdz,

I am looking to get a Synology DS1621xs+ and hope someone here can tell me if my intended use case is doable. And if so, how?

I'm trying to make a fast, responsive NAS with lots of storage that can intelligently auto-allocate data between a primary SSD array and a separate (but connected) HDD array (i.e. max speed, max capacity, little to no intervention needed).

Clients will only see a network drive that can be used just like a local drive when on the network. Behind that will be an all-SATA-SSD NAS unit with NVMe caching (read and write), and an array of HDDs connected to the main NAS unit.

To pull this off, I am looking into getting a Synology DS1621xs+ with 32GB of RAM (Crucial 32GB kit), using both of the two (2) NVMe cache slots (Samsung 980 Pro) and six (6) 2.5" SSDs (Samsung 870 EVO 8TB), as well as connecting a DX517 with five (5) 3.5" HDDs (Seagate Exos X18 18TB). My hope is that the DS1621xs+ has a way to define a set of data that will automatically transfer from the SSDs on the DS1621xs+ to the HDDs on the connected DX517. This would apply to files in the data set that remain unaccessed for longer than X time. There would also need to be exceptions as needed.

The goal is to have everything clients load land on the SSDs (DS1621xs+), which then keeps only certain programs and frequently-accessed data benefitting from SSD performance; everything else will eventually drop back to the HDDs (DX517).

Is there a way to do this with the Synology units I'm looking at? If not, what other recommendations? Please help.

Thanks

-Eggz
 
What you are looking for is hierarchical storage management (HSM): policy-based software that migrates data between tiers of storage. I don't believe Synology has anything like that built in.
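That said, you could approximate the policy with a scheduled script, since DSM's Task Scheduler can run user-defined scripts on a timer. Here's a rough sketch in Python of the "move anything not accessed in X days, with exceptions" rule. The volume paths, the 90-day cutoff, and the `Programs` exclude folder are all made-up placeholders, not real DSM defaults; adjust for your own shares:

```python
#!/usr/bin/env python3
"""Poor-man's tiering sketch: move files not accessed in N days from an
SSD volume to the same relative path on an HDD volume.
All paths and the exclude set are assumptions -- adjust for your setup."""
import shutil
import time
from pathlib import Path

def tier_down(hot_root: Path, cold_root: Path,
              max_age_days: int, exclude: set, now: float = None) -> None:
    """Move files under hot_root whose last access time is older than
    max_age_days to cold_root, skipping top-level folders in exclude."""
    cutoff = (now or time.time()) - max_age_days * 86400
    # snapshot the tree first so moves don't disturb the walk
    for path in list(hot_root.rglob("*")):
        if not path.is_file():
            continue
        rel = path.relative_to(hot_root)
        if rel.parts[0] in exclude:        # pinned to SSD, never migrated
            continue
        if path.stat().st_atime < cutoff:  # unaccessed past the cutoff
            dest = cold_root / rel
            dest.parent.mkdir(parents=True, exist_ok=True)
            shutil.move(str(path), str(dest))

if __name__ == "__main__":
    # hypothetical DSM volume paths; run nightly from Task Scheduler
    tier_down(Path("/volume1/shared"), Path("/volume2/archive"),
              max_age_days=90, exclude={"Programs"})
```

Two caveats: access times are only trustworthy if the volume isn't mounted with `noatime`, and clients would need to look on the archive share (or follow a symlink you leave behind) to find migrated files, so it's not transparent the way true HSM is.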