Original Operator - 3 Years and I'm Done

I get emails about this thread, that's all. Just following up.

You can PM me about one unused 16 TB drive (below current market price, of course) - I accidentally bought an extra one right before the prices went crazy :slight_smile:

Amen. All the early folks saying they’ll HODL for $1000/coin will be in for a rude awakening if buyers are only willing to pay $10.

if even that

And I presume that whatever the initial market price is, it will drop quickly…

Would be too slow for chia.

Start with a full array of Chia plots, 500 GB of empty space, and a new Storj node.
As the node takes on more data, delete some plots to free up space.
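The swap strategy above can be sketched as a small helper: given the current free space, figure out how many Chia plots to delete to restore the headroom the Storj node needs. The plot size and 500 GiB buffer are assumptions taken from the post (a k=32 plot is roughly 101.4 GiB).

```python
# Sketch of the plot-eviction idea above; thresholds are assumptions.
PLOT_SIZE_GIB = 101.4   # approximate size of one k=32 Chia plot
BUFFER_GIB = 500        # free-space headroom to keep for the Storj node

def plots_to_delete(free_gib: float) -> int:
    """How many whole plots to remove so free space returns to the buffer."""
    if free_gib >= BUFFER_GIB:
        return 0
    deficit = BUFFER_GIB - free_gib
    # Round up, since we can only delete whole plots.
    return int(-(-deficit // PLOT_SIZE_GIB))

print(plots_to_delete(600))   # 0 - still enough headroom
print(plots_to_delete(450))   # 1
```

A cron job could call something like this periodically and remove the oldest plots first, so the node never runs out of space mid-upload.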


I have also been running nodes for about three years, and I’m ready to give up on them. The time invested in optimizing the nodes isn’t worth the small amount of traffic, and the poor node utilization plus insanely high gas fees just mean you’re donating HDDs, electricity, and your time.

I’ll keep the most profitable nodes going, but I’m cutting my losses and moving on to something that is actually worth the effort and time.


Can you help me understand how you invested time into optimizing your nodes?
Personally, I put a lot of work into monitoring my nodes but not into optimizing them (apart from tuning ZFS settings, but that was more for fun than anything else and has no real impact on the nodes’ performance).
Would be interesting to hear how you optimized your nodes.


Not the person you asked, but I personally spent quite some time trying to figure out why my nodes generate a lot of I/O; it turned out btrfs is not a good choice for a node.

Strange, on my Synology all three nodes are running on btrfs (SHR) without any problems…

Yeah, I wouldn’t suspect it either. Yet just moving node data to an ext4 partition reduced I/O a lot. More here: -t 300 not enough - #20 by Toyoo

Another related thread: Information to SNO's regarding ZFS/btrfs vs. ext4 on spinners

While I was building, I found one of my biggest I/O issues on ext4 was directory indexing. It’s not necessary for this type of data.
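For anyone curious what "indexing" means here: the ext4 feature the post is most likely referring to is dir_index (hashed directory lookups). That reading is my assumption, not the author's wording. A minimal sketch of the commands one would inspect and run, generated as strings rather than executed, since they operate on a raw block device:

```python
# Hedged sketch: dir_index is my guess at the ext4 feature being discussed.
# Commands are built as strings only; /dev/sdX is a placeholder, not a real device.
DEVICE = "/dev/sdX"

commands = [
    f"tune2fs -l {DEVICE} | grep features",  # check whether dir_index is enabled
    f"tune2fs -O ^dir_index {DEVICE}",       # disable the dir_index feature
    f"e2fsck -fD {DEVICE}",                  # rebuild and optimize directories
]
for cmd in commands:
    print(cmd)
```

Needless to say, run tune2fs on an unmounted filesystem and benchmark before and after; whether this helps depends heavily on how many files per directory the node stores.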

The other major performance issue is that the type of HDD technology you’re using greatly affects peak performance. Write caching mixed with a few concurrent network requests can really overwhelm certain drive types. For that, you want something with good random read/write I/O performance.

If you are still tweaking your ZFS pool, there are a lot of features you can change to improve performance. Check out this page:
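To give a concrete flavor of the kind of tunables meant here: these are dataset properties people commonly adjust for storage-node workloads. The specific values are my assumptions, not recommendations from the linked page, and the sketch only generates the `zfs set` commands rather than running them.

```python
# Hypothetical ZFS tuning sketch; property values are assumptions.
ZFS_TUNING = {
    "recordsize": "1M",  # stored pieces are mostly written and read whole
    "atime": "off",      # skip access-time metadata updates on every read
    "xattr": "sa",       # store extended attributes inside the inode
}

def zfs_set_commands(dataset: str) -> list[str]:
    """Build the `zfs set` command for each property on the given dataset."""
    return [f"zfs set {prop}={val} {dataset}" for prop, val in ZFS_TUNING.items()]

for cmd in zfs_set_commands("tank/storj"):  # "tank/storj" is a placeholder name
    print(cmd)
```

As always with ZFS, changed properties only affect newly written blocks, so benchmark on fresh data.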

Thanks, that’s very interesting.
But ultimately this didn’t improve the performance of your node, did it?
The I/O issues are mostly due to the filewalker and garbage collection; once those are done, my HDDs hardly do anything at all.
So, even though I enjoy tweaking my zfs setup, I can’t really say I invest time to optimize my node for more income. It’s more like optimizing my setup so it uses less resources because I’m too enthusiastic about my setup :smiley: