Maybe then stop the node and move the data? You have about 4 hours of “free” downtime per month. You could either move the data in 4-hour chunks each month, or use something like mergerfs to keep the node online while you migrate, or a combination of the two… (A rough sketch of the chunked approach is below.)
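For the chunked route, the idea is just to move files until the downtime budget is spent, restart the node, and resume next month. A minimal sketch, assuming the node is stopped first; the paths and the 4-hour budget are placeholders for your own setup:

```python
import shutil
import time
from pathlib import Path

# Assumed example paths; adjust to your layout. Stop the node before running.
SRC = Path("/mnt/old-array/storagenode")
DST = Path("/mnt/new-disk/storagenode")
BUDGET_SECONDS = 4 * 3600  # roughly one month's worth of allowed downtime

deadline = time.monotonic() + BUDGET_SECONDS

# Walk the old location and move files until the time budget runs out.
for src_file in SRC.rglob("*"):
    if time.monotonic() >= deadline:
        break  # budget spent; restart the node and continue next month
    if src_file.is_file():
        dst_file = DST / src_file.relative_to(SRC)
        dst_file.parent.mkdir(parents=True, exist_ok=True)
        shutil.move(str(src_file), str(dst_file))
```

This is deliberately simplified (no retry handling, no cleanup of emptied directories); it only illustrates time-boxing the migration to the downtime allowance.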
At this point you might as well just add a disk dedicated to storj; there is no benefit in having redundancy for that data.
They “fixed” it by releasing their own branded NVMe sticks, with artificially capped throughput and heavily under-provisioned to avoid the issues described. Hardly a solution…
Depending on the chipset in your NAS and its revision, it may support more memory than specified, as evidenced by other vendors releasing products on the same chipset that advertise higher memory capacity, and by user reports. The memory controller is part of the CPU, so supported capacity tracks the chipset rather than the vendor’s spec sheet.
Why would it be? And even if it were, it’s irrelevant: the filesystem cache is managed by the host, and it will take up all unused RAM. This is the reason for having more RAM on a storage system in the first place: to get much better responsiveness by offloading the bulk of repetitive IO from the disk system, not to use it all up with applications. (Synology went the other way recently by soldering half of the RAM in entry-level devices, but that is a separate discussion.)
SSDs are “spent” when they are rewritten: a specific combination of SSD size and workload can result in a “stable” configuration where the bulk of the data has been cached and is updated infrequently, essentially serving as a static read-only cache. For example, if your workload results in random access to 40GB of data, a 41GB cache will end up written once and read many times, not wearing out the SSD at all. On the other hand, a 20GB cache would be rewritten constantly and wear out very quickly.
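To make that concrete, here is a back-of-the-envelope sketch; the endurance rating, working-set size, and churn rate are made-up illustrative numbers, not measurements:

```python
# Rough illustration of why an undersized cache wears out an SSD quickly.
# All numbers are assumptions for the sake of the example.

TBW_RATING_TB = 300     # assumed endurance rating of a consumer SSD, in TB written
WORKING_SET_GB = 40     # hot data the workload touches repeatedly
DAILY_TURNOVER = 5      # assumed: times per day an undersized cache churns its contents

def lifetime_days(cache_gb: float) -> float:
    """Estimate days until the endurance rating is consumed."""
    if cache_gb >= WORKING_SET_GB:
        # Cache fits the working set: filled roughly once, then mostly read.
        return float("inf")
    # Cache smaller than the working set: constant eviction and refill.
    daily_writes_gb = cache_gb * DAILY_TURNOVER
    return TBW_RATING_TB * 1000 / daily_writes_gb

for size_gb in (20, 41):
    print(f"{size_gb} GB cache -> ~{lifetime_days(size_gb):.0f} days of write endurance")
```

Under these assumed numbers the 20GB cache burns through the endurance budget in a few years of continuous churn, while the 41GB cache settles into a read-mostly state and effectively never wears.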
What will happen with storj is unpredictable and customer-driven, and hence caching it is likely pointless (unless your cache is equal to the array size, which is a bit extreme). Instead, it may be more productive to size the cache for all other tasks, with the goal of offloading all other IO from the array so it is free to serve storj requests, along with some in-RAM cache for the filesystem metadata.
This is also why a separate single disk for storj will likely yield a better overall outcome than trying to mitigate the load it puts on the existing array with caching…