Original Operator - 3 Years and I'm Done

That would be nice, if Storj supplied reasonable amounts of data to store. I’m filling up my drives with Chia plots now. As my Storj nodes fill up I’ll delete plots as needed to make room, but I’m 3 months in on Storj and still under 200 GB of space utilized. Less than a dollar a month.

Good luck with that if you’re solo mining :slight_smile: Please report back in 3 months on how much XCH you’ve farmed.

Storj is slow to get going. I’m only 5 months in myself and the data stored is low, but all new nodes are the same - we missed the good times of the SB test servers and TBs a day… But the tech is good and the dev effort is good, so it’s worth holding on for the time being, I think.


It does seem to affect me. I will likely need some more storage space for backups in the next few weeks and had planned to buy a new drive, but I’m seeing shortages now and might need to cannibalize some of my Storj nodes instead.

Why don’t you store your backups in Storj DCS? It’s really cheap now… in fact, why don’t you store Chia plots there too ;)


I’d have to re-engineer my current backup approach. borgbackup doesn’t have explicit support for cloud storage, and the last time I tried using a networked file system as a back-end for borgbackup, locking was a problem.

Besides, this backup is the on-site copy, which is supposed to be quickly available.

Feeling better and better about my exit by the day. Made a great play.

don’t forget to check back in around 2024


Yet you can’t seem to say goodbye to the forums. What’s the point? I’m genuinely wondering.

As for Chia, I’m reluctantly giving it a try, but at the same time hoping that when trading starts in a few days we’ll all realize it’s worthless and HDD pricing can get back to normal. I’m not in a rush yet, but I would like to get some new HDDs at some point and I’m not really looking to massively overpay.

I get emails about this thread, that’s all. Just following up.

You can PM me for 1 unused 16TB drive (below current market price ofc) - accidentally bought an extra one right before the prices went crazy :slight_smile:

Amen. All the early folks saying they’ll HODL for $1000/coin will be in for a rude awakening if buyers are willing to pay $10.

if even that

And I presume, whatever the initial market price will be, it will drop quickly…

Would be too slow for Chia.

Start with a full array of Chia plots, 500 GB of empty space and a new Storj node.
As the node gets more data, delete some plots to free up space.
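
If you wanted to automate that swap, here is a minimal sketch in Python - the plot directory, the 500 GB headroom target and the oldest-first deletion policy are my own placeholder assumptions, not anything Storj or Chia ships:

```python
#!/usr/bin/env python3
"""Rough sketch: delete Chia plots to keep headroom for a growing Storj node.

Assumptions (mine): plots and the node share one filesystem, plots live in
PLOT_DIR, and FREE_TARGET_GB is the free space you want to preserve.
"""
import os
import shutil

PLOT_DIR = "/mnt/storage/plots"   # hypothetical plot directory
FREE_TARGET_GB = 500              # headroom to keep free for the node


def free_gb(path: str) -> float:
    """Free space on the filesystem containing `path`, in GB."""
    return shutil.disk_usage(path).free / 1e9


def oldest_plot(plot_dir: str):
    """Path of the oldest .plot file, or None if there are none left."""
    plots = [os.path.join(plot_dir, f) for f in os.listdir(plot_dir)
             if f.endswith(".plot")]
    return min(plots, key=os.path.getmtime) if plots else None


if __name__ == "__main__":
    # Delete plots, oldest first, until the headroom target is met again.
    while free_gb(PLOT_DIR) < FREE_TARGET_GB:
        plot = oldest_plot(PLOT_DIR)
        if plot is None:
            break
        print(f"Removing {plot} to free space for the node")
        os.remove(plot)
```

Run something like that from cron every so often and the plots shrink as the node grows.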


I have also been running nodes for about 3 years and I’m ready to give up on them. The time invested in optimizing the nodes is not worth it: the small amount of traffic, the poor node utilization and the insanely high gas fees just mean that you’re donating HDDs, electricity and your time.

I’ll keep the most profitable nodes going, but I’m cutting my losses and moving on to something that is actually worth the effort and time.


Can you help me understand how you invested time into optimizing your nodes?
Because personally I did a lot of work on monitoring the nodes, but not on optimizing them (apart from tuning zfs settings, but that was more for fun than anything else and has no real impact on the node’s performance).
Would be interesting to hear how you optimized your nodes.


Not the person you asked, but I personally spent quite some time trying to figure out why my nodes generate a lot of I/O - it turned out that btrfs is not a good choice for a node.

Strange, on my Synology all three nodes are running on btrfs (SHR) without any problems…

Yeah, I wouldn’t have suspected it either. Yet just moving node data to an ext4 partition reduced I/O a lot. More here: -t 300 not enough - #20 by Toyoo

Another related thread: Information to SNO's regarding ZFS/btrfs vs. ext4 on spinners

While I was building, I found that one of my biggest I/O issues on ext4 was indexing. It’s not necessary for this type of data.

The other major performance issue is that the type of HDD technology you’re using greatly affects peak performance. Write caching mixed with a few concurrent network requests can really overwhelm certain drive types. For that you want something with good random read/write I/O performance.
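
To make “good random read/write I/O performance” concrete, here is a rough micro-benchmark sketch in Python - the test file path, file size, block size and operation count are my own placeholder values, and it ignores the page cache and O_DIRECT, so treat the numbers as a rough comparison only (a real tool like fio is the better choice):

```python
#!/usr/bin/env python3
"""Rough random 4 KiB read/write micro-benchmark for comparing drives.

Assumptions (mine): a 1 GiB sparse scratch file on the drive under test.
Reads of unwritten blocks return zeros, which is fine for a rough comparison.
"""
import os
import random
import time

TEST_FILE = "/mnt/storage/iotest.bin"   # hypothetical path on the drive to test
FILE_SIZE = 1 << 30                     # 1 GiB scratch file
BLOCK_SIZE = 4096                       # 4 KiB blocks
OPS = 2000                              # number of random operations


def prepare() -> None:
    """Create the sparse scratch file once."""
    with open(TEST_FILE, "wb") as f:
        f.truncate(FILE_SIZE)


def random_rw() -> float:
    """Do OPS random 4 KiB reads/writes at random offsets; return ops per second."""
    buf = os.urandom(BLOCK_SIZE)
    fd = os.open(TEST_FILE, os.O_RDWR)
    try:
        start = time.perf_counter()
        for _ in range(OPS):
            offset = random.randrange(0, FILE_SIZE - BLOCK_SIZE)
            os.lseek(fd, offset, os.SEEK_SET)
            if random.random() < 0.5:
                os.read(fd, BLOCK_SIZE)
            else:
                os.write(fd, buf)
        os.fsync(fd)
        return OPS / (time.perf_counter() - start)
    finally:
        os.close(fd)


if __name__ == "__main__":
    prepare()
    print(f"{random_rw():.0f} random 4 KiB ops/s on {TEST_FILE}")
```

Running it against different drives gives a rough feel for how each one handles the kind of mixed random I/O a node produces.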

If you are still tweaking your zfs pool, there are a lot of features you can change to improve performance. Check out this page:

Thanks, that’s very interesting.
But ultimately this didn’t improve the performance of your node, did it?
The I/O issues are mostly due to the filewalker and garbage collection; once those are done, my HDDs hardly do anything at all.
So, even though I enjoy tweaking my zfs setup, I can’t really say I invest time in optimizing my node for more income. It’s more like optimizing my setup so it uses fewer resources, because I’m too enthusiastic about my setup :smiley:
