What's going on with saltlake?

Saltlake.tardigrade.io was the most profitable satellite for my node over the last 18 months, generating $3-$7 monthly. Since mid-November the egress has dropped to 30 MB/day, plus 1 GB/day of repair traffic.
Is this node shutting down?
Should I:
1. Wait for it to resume normal operation?
2. Wait for it to shut down?
3. Start a graceful exit from this satellite to free up the HDD space mostly used by this zombie satellite?


Saltlake is a satellite used by Storj for testing. So any data flow pattern is possible. I don’t believe they will shut it down, but they might reduce the data flow, because they have to pay the node operators for it.
You can see the overall ingress and egress for the last 30 days from this satellite here: Grafana


I would recommend against that. Aside from testing, this satellite is also usually used for incentive traffic, which can be quite profitable from time to time. Exiting would cut you out of that potential income. Either way, you’re still paid for the data you store for this satellite.

I also noticed that not all data increases the payout the same way because of differences in download traffic. On the other hand, I don’t want these small pieces on my disk that slow down every startup and GC run. The data from Saltlake is actually very healthy for my storage node, so I will keep it even if it generates less download traffic.

Yeah, I agree. Small pieces are a lot more bothersome than large, infrequently downloaded ones. But of course that depends on what your constraints are. I have plenty of HDD space, so I don’t mind relatively static data. But IO peaks impact my system performance, so many small files are a pain for IO load during GC and startup. I’d rather save on IO and spend on HDD space. The thing is that small files are also a pretty bad experience for customers, since they don’t help performance on their end either. I still hope customers wise up to this a little more and start to cluster things better.

I finally moved the node on my Drobo unit to more performant storage though… That is definitely where the IO pain hurt the most (frequent GC runs of 20+ hours, etc.). It was a nice proof of concept, but let’s just say that running a node on a USB 2-connected Drobo unit, using SATA 2 internally and NTFS, on a Linux system is kind of the perfect storm of horribleness.

Same here, except that I found out what the root cause of my heavy IO was. It turns out GC in combination with atime is deadly. Disable atime (mount with noatime) and GC finishes surprisingly fast.

Synology uses relatime, and I don’t think that can be changed easily. It hasn’t been a problem for me, but I doubt relatime helps much, since it still allows an atime update once every 24 hours per file. Unless GC accesses a file multiple times, maybe?
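
If you want to check whether reads on a given mount still touch atime at all (noatime vs. relatime/strictatime), a quick test along these lines works. This is just a rough sketch; the path is only an example, so point it at a file on the node’s disk:

```python
import os
import time

# Example path only; use any writable location on the mount you want to test.
path = "/volume1/storagenode/atime_test.tmp"

# Create the file, then modify it a bit later so mtime ends up newer than atime.
with open(path, "w") as f:
    f.write("first")
time.sleep(2)
with open(path, "a") as f:
    f.write("second")          # bumps mtime, leaves atime untouched

before = os.stat(path).st_atime

with open(path) as f:
    f.read()                   # read access: relatime/strictatime update atime here, noatime never does

after = os.stat(path).st_atime
print("atime updated on read:", after > before)

os.remove(path)
```

On a relatime or strictatime mount this should print True (the file was modified after its last access, so the read triggers an atime write), while a noatime mount should print False.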

But aside from a small 2 TB node, everything else is now running on arrays with SSD cache. So that probably sorts out that IO issue anyway. That said, the 2 TB node doesn’t take that long either without SSD cache. And considering the Drobo node was only 1.7 TB, I know how bad it can be. So I’m not complaining.