So I decided that I would spend far more than I should on this project, and I am in the process of replacing a 16TB Exos HDD (my oldest node) with a 7.68TB Samsung PM883 SSD.
Yes, I know it’s probably a waste of money, but I am really curious to see whether the lower latency and higher IOPS will make a difference (I am on a 1Gbps leased line, so I suspect the network side is already as good as it can be).
Copying 4.5TB of blobs across is taking quite a while, as expected, so I’ll keep you guys posted if I notice any difference in performance.
if your success rates are 97-99%, then the best you can gain is 1 to 3% more,
and really the internet would be the major part of the latency anyway.
it is possible that you may win some races that could matter… but i really doubt it… the network seems to distribute data very evenly between nodes… and latency seems a bit secondary.
ssd’s are dropping in price, soon it might not make sense to even buy hdd’s
i have thought about doing this experiment myself… never got around to it tho and i’m running with a 1tb l2arc on my node… so almost the same thing… just not quite
and i’ve been thinking about changing it because i’m not getting much from it.
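for anyone wanting to try the l2arc route first (it’s much cheaper to undo than a full migration), attaching and detaching a cache device is a one-liner each way. a sketch, assuming a zfs pool — the pool name “tank” and the device path are placeholders, not my actual setup:

```shell
# attach an SSD as L2ARC (read cache) to an existing pool
# "tank" and the device path below are placeholders
zpool add tank cache /dev/disk/by-id/ata-SAMSUNG-SSD-EXAMPLE

# check whether the cache is actually earning its keep
zpool iostat -v tank 5

# remove it again if the hit rate stays low
zpool remove tank /dev/disk/by-id/ata-SAMSUNG-SSD-EXAMPLE
```

the nice part is that l2arc is disposable: removing it never risks pool data, so it’s a low-stakes way to test whether ssd latency matters for your node before buying a big drive.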
I’d love to see the results. My opinion right now is that an SSD read/write cache wouldn’t be worth the extra cost. That said, if data onboarding drastically increases, it may help you fill up much faster than slower nodes, and you’d beat other nodes to downloads most of the time. But then you need to buy new SSDs faster to keep up.
Oh, at the current cost of the SSD it is completely economically unviable.
But if the increase in performance turns out to be negligible or non-existent, at least that should make all of us running HDDs a little happier to keep it that way.
Quick update: still syncing data from the HDD to the SSD. Lots of small files don’t make for a quick sync.
Doing the last sync with --delete, so hopefully the SSD will be online in a few hours.
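For anyone wanting to do the same migration, the two-pass pattern above looks roughly like this. A sketch only — the temp directories stand in for the real HDD and SSD mount points, and I’m assuming rsync; your paths and flags may differ:

```shell
# demo of the two-pass sync using temp dirs as stand-ins
# for the HDD (source) and SSD (destination) mount points
SRC=$(mktemp -d); DST=$(mktemp -d)
echo "blob1" > "$SRC/a"; echo "blob2" > "$SRC/b"

# pass 1: bulk copy while the node is still running on the HDD
# -a preserves permissions/times, -H preserves hard links
rsync -aH "$SRC/" "$DST/"

# meanwhile the node deletes a piece on the source...
rm "$SRC/b"

# pass 2 (node stopped): --delete makes the destination an exact
# mirror, removing files that vanished since the first pass
rsync -aH --delete "$SRC/" "$DST/"

ls "$DST"
```

The trailing slashes on the source paths matter: without them rsync copies the directory itself into the destination rather than its contents. The first pass does the slow bulk work with the node live; the final `--delete` pass only has to touch what changed, keeping the node’s downtime short.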