Just had 7 hours of downtime. 4 hours yesterday

Yeah, I get that. And the report of your experience is useful even with this new system. But I don’t think we should worry about being disqualified given this change. It seemed many people participating in the discussion here were still assuming that, at some point, 5 hours offline means your node is done.

Do you know if you lost any data due to the repair system moving data off your node to online ones?

Maybe some, I don’t really know.

You won’t see the effects of that immediately. Those will be cleaned up with the next garbage collection run.

Well, my node on Windows finally updated to 1.12.3 in the night. Sadly it didn’t restart itself, so I was offline between 10pm and 5am. I don’t know how long… Is there a way to see if I am already disqualified? If yes, I’ll shut it down now, since it doesn’t help that way…

Currently disqualification for downtime is disabled. If you were disqualified you would see a warning message on the web dashboard.

Thank you for that info, baker! But the Windows GUI updater still seems to be buggy… I asked in the changelog thread how to do it manually so I can reboot the box at the same time. Now it crashed, was offline, and I did not do the other updates… Uptime of the box is over a year…

I sympathise with this situation. I had two nodes on an rPi and it went down because the USB PSU went to electronics heaven. It took 4 days to diagnose the problem, complete an order online, and get the delivery of a replacement.

I suspect most of us do not want to have a UPS, server racks, failover components, and other expensive equipment to make this run.

This would mean a cluster arrangement would be the most appropriate.

A sample setup for me would be 5 rPi 4s, each with two HDDs attached, run as 10 nodes. It would be a case of spinning dinner plates on sticks, but without ever having to go back and respin a plate. It’ll just work until it doesn’t.



If I decided to use 5 servers for Storj, I would just set up Ceph or similar, though I think Ceph does not run too well on an rPi.

Why do you need more than 1 pi4?

5 rPi 4s with 2 OSDs each… maybe, maybe it would work if you had the 8GB rPi 4s and trimmed back the RAM usage of the OSD process, but I think you’d also run into a bottleneck at the network, since you don’t have front/back network separation to deal with the chatter that happens on the back net between OSDs. 1 Gbps isn’t a lot when you start pushing more than a few IOPS; e.g. 10x 10TB drives can easily saturate 2 Gbps.
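A quick back-of-the-envelope check of that last claim. The 150 MB/s per-drive figure below is an assumption (typical sustained sequential throughput for a large HDD), not from the thread:

```python
# Sketch: can 10 spinning drives saturate a 2 Gbps link?
# 150 MB/s per drive is an assumed sequential-throughput figure.
DRIVES = 10
MB_PER_S_PER_DRIVE = 150

aggregate_mbps = DRIVES * MB_PER_S_PER_DRIVE * 8  # megabits per second
link_mbps = 2 * 1000                              # 2 Gbps link

print(f"aggregate disk throughput: {aggregate_mbps} Mbps")
print(f"link capacity:             {link_mbps} Mbps")
print(f"saturates link: {aggregate_mbps > link_mbps}")
```

Even at a fraction of sequential speed, the drives comfortably outrun a single 1–2 Gbps interface, which is why Ceph deployments usually separate the public and cluster networks.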

I probably would run it on normal servers and not pi.


Help me out here: how are people managing this downtime limit? Most, if not all, will fail at that limit; it’s nearly impossible to stay within it. I am in SA, where load-shedding power cuts run 5 hours at a time, 4 times a month, which easily adds up to 20 hours. Aside from that, the node runs at full operation with all resources. I’m really stuck.

The downtime limit has been extended since I created this thread.


Up to 288 hours of downtime in the previous 30 days
