Hi, new SNO here - I've had my node online for about 20 days now and have been lurking on the forums. My storage runs on Windows Server 2019 Hyper-V Core, using Storage Spaces with a spare-capacity SMR drive for the SNO storage.
In the initial setup, the storage node data sat directly on ReFS on the SMR drive. Read performance was poor (sub 100) and write performance sketchy (40-60). In the second week, write performance on the SMR drive periodically died, dropping to 1-5 and pushing write latency above 200ms!
I've now added an SSD caching tier, so it's SSD cache in front of the 7TB SMR drive; the layout is non-fault-tolerant. I've allocated 95GB of SSD for the read/write cache tier and 10GB of SSD for the write-back cache (normally this would be 1-2GB). It's been in this config for 3 days so far, and while it's still not great, it looks like it has flattened the write performance, removing the spikes and averaging under 30ms per write.
I'm hoping that once my node is vetted, writes will increase and I might have to revisit the write-back cache size. Annoyingly, the only way to change the write-back cache is to blow the virtual disk away and start again.
Forgot to answer the poster's questions…
I think you'll need PowerShell to set up tiering - New-StorageTier and New-VirtualDisk once you've created a pool. Not sure whether those are available on Windows 10.
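Rough sketch of what that looks like - pool and tier names, sizes, and the subsystem wildcard are just illustrative, and you'd want to check your actual disk media types first:

```powershell
# Sketch only - assumes one pool-eligible SSD and one SMR HDD are present.
$disks = Get-PhysicalDisk -CanPool $true
New-StoragePool -FriendlyName "NodePool" `
    -StorageSubSystemFriendlyName "Windows Storage*" -PhysicalDisks $disks

# Define the two tiers within the pool.
$ssdTier = New-StorageTier -StoragePoolFriendlyName "NodePool" `
    -FriendlyName "SSDTier" -MediaType SSD
$hddTier = New-StorageTier -StoragePoolFriendlyName "NodePool" `
    -FriendlyName "HDDTier" -MediaType HDD

# -WriteCacheSize sets the write-back cache at creation time;
# as noted above, it can't be changed afterwards without recreating the disk.
New-VirtualDisk -StoragePoolFriendlyName "NodePool" -FriendlyName "NodeDisk" `
    -StorageTiers $ssdTier, $hddTier -StorageTierSizes 95GB, 6TB `
    -ResiliencySettingName Simple -WriteCacheSize 10GB
```

After that you'd initialise, partition, and format the new disk as usual (Initialize-Disk / New-Partition / Format-Volume).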
Adding an SSD increases complexity, and adds points where corruption might happen.
Pros - gives a nice persistent buffer for reads/writes plus write-back, and keeps hot reads away from the SMR drive so it can get on with its writing. Cons - complex to set up on Windows, reliant on the OS and drivers playing nice, and changing tier parameters means starting from scratch.