Unfortunately, we do not have any information on this at the moment. Should the product become available to us through our suppliers, we can check a possible inclusion.
Rack rails must be purchased separately. The corresponding Synology rack rails are very good, but they are almost overkill for this particular case: they are about 30 cm longer than the RS1219 itself. That is not a problem as such, but your rack must be deep enough for these long rails to fit.
Your best bet is to read through the following link. There are a few things to keep in mind, but they should be manageable:
https://www.synology.com/de-de/knowledgebase/DSM/tutorial/General/How_to_migrate_between_Synology_NAS_DSM_6_0_and_later#t2.2
Is there anyone out there who can answer my question:
RAID 1 in bays 1 and 2. In the last bay, a hot-spare disk that automatically joins the RAID 1 as soon as one of the disks fails.
Question: after such a failure, the RAID 1 spans bay 1 and the last bay, which is not ideal. What happens when I replace the failed disk in bay 2? Does the disk in the last bay go back to being the hot spare, or does the new disk in bay 2 become the hot spare? It's a tricky question that I couldn't find an answer to anywhere on the internet. Otherwise I'll have to run a practical test.
Thanks =)
I can't answer that for the Synology RS1219+ with absolute certainty, as I've only had it for a few weeks and no drive has failed in it so far.
In all likelihood, the Synology RS1219+ will behave no differently than the QNAP models. Once a disk configured as a hot spare has been used, it stays active in the RAID. The replacement for the failed disk does not automatically become the new hot spare; it remains unassigned for the time being. I had to manually assign the newly inserted disk to the RAID as the new hot spare. As a result, over the years the hot-spare role in one of my 8-bay NAS units has wandered through the bays.
This way, the RAID is only rebuilt once, using the hot spare, which makes sense because a rebuild takes a while (several hours to days, depending on drive capacity and RAID type).
If the replacement disk were instead reintegrated into the RAID and the now-active hot spare demoted back to a hot spare, either several TB would have to be copied or the RAID would have to be rebuilt again - a very long process run a second time, which makes little sense.
Of course, you can force this manually: configure the replacement disk as a hot spare, pull the former spare disk from the last bay, and after reinserting it make it the hot spare again. However, this also makes little sense, because it triggers the very long rebuild process twice. I played that game exactly once, about two decades ago. Since then I've made my peace with not knowing offhand which disk is currently the hot spare for my RAID 6 with 7 active disks; you can see it quickly anyway when you log in as administrator.
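To make the role migration described above concrete, here is a small Python sketch. This is purely illustrative (my own model, not Synology or DSM code): when a member disk fails, the spare joins the array, and the replacement disk stays unassigned until an admin explicitly marks it as the new spare.

```python
# Illustrative model of hot-spare role migration in a RAID1 + spare setup.
# NOT Synology/DSM code; it only mirrors the behavior described above.

class Array:
    def __init__(self, active, spare):
        self.active = set(active)      # bays holding active RAID members
        self.spare = spare             # bay holding the hot spare (or None)
        self.unassigned = set()        # inserted disks not yet assigned

    def fail_disk(self, bay):
        """A member fails: the hot spare is pulled into the array (rebuild)."""
        self.active.discard(bay)
        if self.spare is not None:
            self.active.add(self.spare)   # spare becomes an active member
            self.spare = None             # no spare until manually reassigned

    def insert_disk(self, bay):
        """A replacement disk does NOT automatically become the spare."""
        self.unassigned.add(bay)

    def assign_spare(self, bay):
        """Admin action: designate an unassigned disk as the new hot spare."""
        self.unassigned.discard(bay)
        self.spare = bay

# RAID1 in bays 1 and 2, hot spare in the last bay (8) of an 8-bay unit.
a = Array(active=[1, 2], spare=8)
a.fail_disk(2)      # bay 2 fails -> bay 8 joins the RAID and stays there
a.insert_disk(2)    # new disk in bay 2 remains unassigned
print(sorted(a.active), a.spare)   # [1, 8] None
a.assign_spare(2)   # manual step: bay 2 becomes the new hot spare
print(sorted(a.active), a.spare)   # [1, 8] 2
```

Run repeatedly and you can see why the hot-spare role wanders through the bays over the years.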
I have been using the RS1219 for about six months and am satisfied. It's much quieter than the big Synology DiskStations in 19" rack format and also considerably shorter, so it fits smaller/shallower 19" racks. The processor is a bit underpowered, though, so in an office environment with many users it could get tight depending on the workload, especially when the periodic RAID/data scrubbing is also running in the background.
Regarding your assumption about "SSD longevity" in a RAID array, I wouldn't be so sure, and I would research this carefully if I were you. The reason: Synology offers two special rack stations with a dedicated RAID level, "RAID F1", for use with SSDs, namely the FlashStation models FS3017 and FS2017. I recommend studying the Synology white paper on this specific topic (available on the respective product page).
https://www.synology.com/en-global/products/FS2017
I am only quoting from the Synology FS2017 website:
Due to the peculiarities of SSD and RAID technology, a common challenge lies in how to prevent all of the drives from failing at the same time because of the evenly distributed workload. RAID F1 alleviates this problem with a specially designed algorithm to unevenly distribute workload to drives, enhancing the resilience of the storage pool and ensuring your data remains safe.
Here is an excerpt from the Executive Summary of the Synology White Paper on this topic:
However, SSDs have a finite number of program-erase (P/E) cycles. If traditional RAID is used for random write workloads, multiple SSDs will probably be worn out and fail at the same time, resulting in a crashed RAID and data loss. Synology RAID F1 algorithm tackles the problem by writing more parity bits into a specific SSD to avoid all SSDs from being worn out at the same time, and making one system-assigned SSD to be worn out in the first place. With this approach, RAID F1 will be very unlikely to crash as data are unevenly written to SSDs. Thus, Synology RAID F1 enhances the endurance of RAID compared to other RAID algorithms, an important concern for enterprise flash storage products.
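The uneven parity placement can be illustrated with a short Python sketch. This is my own simplification of the idea quoted above, not Synology's actual algorithm: classic RAID 5 rotates parity evenly across all drives, while an F1-style scheme gives one designated drive an extra share of the parity writes, so it accumulates wear faster than the rest and fails first, alone.

```python
# Simplified comparison of parity placement: classic RAID5 rotation vs an
# "F1-style" skewed scheme. Illustrative only, not Synology's algorithm.

from collections import Counter

def raid5_parity(stripes, drives):
    """Classic RAID5: parity rotates evenly across all drives."""
    return Counter(stripe % drives for stripe in range(stripes))

def f1_style_parity(stripes, drives, hot=0):
    """Skewed scheme: every second stripe puts parity on the 'hot' drive;
    the remaining stripes rotate parity among the other drives."""
    counts = Counter()
    others = [d for d in range(drives) if d != hot]
    for stripe in range(stripes):
        if stripe % 2 == 0:
            counts[hot] += 1                          # extra parity -> extra wear
        else:
            counts[others[(stripe // 2) % len(others)]] += 1
    return counts

even = raid5_parity(600, drives=4)
skewed = f1_style_parity(600, drives=4, hot=0)
print(dict(even))    # each of the 4 drives holds 150 parity blocks
print(dict(skewed))  # drive 0 holds 300 parity blocks, the others 100 each
```

With even placement all drives see the same write load and can hit their P/E limit together; the skewed placement makes drive 0 wear out well before the others, which is exactly the failure mode the quoted text wants.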
Personally, I have no experience with Synology and SSDs for large data volumes (50 GB+). So I can't give you a recommendation; I just wanted to make you aware of the topic and the potential risks.
Have fun