Questions about multiple nodes on Pi4+

Better to spread it across more Raspberry Pis, because once the first node is full and the second starts filling up, the CPU will struggle to handle all of this. You could also make a third node with the first 5TB disk.

It should be publicip:28968, right?

It’s probably better, but I wouldn’t be surprised if a single RPi4B could handle 16TB. Mine isn’t there yet, but it’s already handling around 8TB rather well. But again, I’m not there yet, so… ^^’

Looks like you’re targeting folders used by your first node, instead of targeting the ones dedicated to your second one? Careful with that: if your 2 nodes use the same identity by mistake, this could kill both nodes pretty fast…

I think you’re right. I’ve checked the instructions about port-forwarding, and thought this was the container pointing to 28967. But I now realize that’s just a default setting I only have to override with the -p settings.

I checked thoroughly, of course, before starting the new node. My mnt structure is unique, so I’ve rewritten that everywhere. But somehow, it creates a node with default settings in that first “SETUP=true” part (before the node run command).
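For reference, the one-time setup step being discussed looks roughly like this; the paths, mount points, and container name below are illustrative examples, not the poster’s actual layout:

```shell
# One-time SETUP step for the SECOND node. The key point: both mounts
# must point at directories dedicated to this node -- reusing the first
# node's identity folder by mistake would run two nodes on one identity.
docker run --rm -e SETUP="true" \
    --mount type=bind,source=/mnt/node2/identity/storagenode,destination=/app/identity \
    --mount type=bind,source=/mnt/node2/storagenode,destination=/app/config \
    --name storagenode2 storjlabs/storagenode:latest
```

The setup step writes a default config into whatever directory is mounted at `/app/config`, which is why pointing it at the wrong folder silently creates a node with default settings there.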

Fixed it! Got my second node running.

Made a few mistakes; I’m not entirely sure yet. But there was a folder permission issue with the new storagenode folder. Maybe because my first setup command wasn’t right and I had to redo that step a few times. I’ve now moved the folder to a .bak version, and everything went as expected.
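The recovery described above can be sketched like this; paths are hypothetical examples and the ownership check depends on which user your containers run as:

```shell
# Move the half-initialized folder aside, then recreate it cleanly
# before re-running the SETUP step (example paths only).
mv /mnt/node2/storagenode /mnt/node2/storagenode.bak
mkdir /mnt/node2/storagenode

# Verify who owns the directory and fix it if needed.
ls -ld /mnt/node2/storagenode
sudo chown -R "$(whoami)" /mnt/node2/storagenode
```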

I also drew the wrong conclusion about the public port while figuring out the port routing. Thanks to @Bivvo I corrected this.

Additionally, I hadn’t routed my dashboard port like this: 14003:14002.
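Putting the port corrections together, a second node’s run command ends up with something like the sketch below. Ports, paths, and the container name are examples; the wallet, email, and storage `-e` settings are omitted for brevity. Only the host (left) side of each `-p` pair changes per node, while the container (right) side keeps the defaults 28967 (node) and 14002 (dashboard):

```shell
# Sketch only -- adjust ports and paths to your own layout.
docker run -d --restart unless-stopped --stop-timeout 300 \
    -p 28968:28967/tcp \
    -p 28968:28967/udp \
    -p 127.0.0.1:14003:14002 \
    -e ADDRESS="publicip:28968" \
    --mount type=bind,source=/mnt/node2/identity/storagenode,destination=/app/identity \
    --mount type=bind,source=/mnt/node2/storagenode,destination=/app/config \
    --name storagenode2 storjlabs/storagenode:latest
```

Note that `ADDRESS` must advertise the external port (28968 here), which is the earlier point about publicip:28968.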

YAY :partying_face: great to hear

Now let’s see if the RPi4B can handle two 8TB nodes hahaha. I can always migrate one to another machine if this becomes an issue. But I think it’s all right for this stage, as it will be a long time before either of the two is fully used. I will set up some more extensive monitoring for this.

Thanks for the heads-up @Vadim. I never considered this, as I’ve read about a lot of operators who have 2 or 3 nodes on a single Pi. Keep on learning!

Quietly sits here and runs 2x full 1.7TB nodes and 1x 7TB node at 1.3TB used on a single RPi3… :wink:

@Craig Hahaha. That’s what I mean!

As long as you’re not running SMR drives, it shouldn’t be a problem at all. I have an 8GB Pi4 and it runs 3 nodes with no problems at all. But if you add all SMR drives, you will not have a good time.

@deathlessdd you mean you partitioned the 8TB HDD into multiple nodes?

I never considered this. I now have two 8TB HDDs. One is to replace the 5TB SMR drive (3.1TB full); the rsync job has been running for days now.
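A multi-day rsync migration like this typically follows the pattern below; the mount paths and container name are examples, not taken from the thread:

```shell
# Copy while the node keeps running; repeat until each pass is quick.
rsync -aP /mnt/smr5tb/storagenode/ /mnt/new8tb/storagenode/

# Then stop the node and do one final pass with --delete so the
# destination exactly matches the source before switching over.
docker stop -t 300 storagenode
rsync -aP --delete /mnt/smr5tb/storagenode/ /mnt/new8tb/storagenode/
```

The early passes take the longest (days on a nearly full SMR drive); the final pass with the node stopped only has to transfer what changed since the previous pass, keeping downtime short.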

The other 8TB is to start a new node. I could indeed consider starting more nodes on this new one. Is there any benefit to doing that?

Mine are connected through USB in a RAID enclosure.

No, no, that’s not what I mean. Unless you’re running a RAID, I wouldn’t run more than one node on a single drive. I’m just saying that running all SMR drives on the Pi4 itself would be a very bad time for it.

SMR drives are fine when they’re full, but while they’re filling up they max out I/O, and if you’re running them all in the same enclosure that might cause issues down the line. I’ve experienced this once or twice: I had to separate my SMR drives into their own enclosures, and their own system altogether, because it just causes issues for the system overall.

@deathlessdd ah, I read your comment too quickly. I understand. As stated, I’m phasing out the SMR drive as advised earlier in this thread. So I should be all right with 2x 8TB shortly.

@andrew2.hart do you commit any Pi resources to certain nodes? Like dedicating a processor core or something? Or any other specific settings to manage 5 nodes? Or do you just let it fend for itself :smile: ?

That’s actually an old picture. I am always messing about with my nodes.
Since then I moved the nodes to small 2.5" disks and then back to one of the big disks in that picture.
The SSD wore out, but it was a really good node until then.
I let Linux and Docker sort out resource use. I did try some memory/buffer settings to make it use more memory, but gave up.

That can already happen 6 months from the beginning. I had that on my RPi4B.

One SMR drive could be a problem, but each extra SMR drive makes the problem smaller, I think.

Yes, probably. That’s because the more nodes you have, the more spread the load is amongst them.
So 1 node on an SMR drive might get overloaded, whereas 3 separate nodes living on their own SMR drives would only receive 1/3rd of the load each, which may be bearable for these SMR drives.

(That does not apply if nodes are on different /24 subnets; also, this assumes that all nodes have free space.)
