New node. Worked a few hours and now "no such file or directory"

Please show permissions in the data location

ls -l /mnt/MySATA_01/storj/

epyc@epyc:~$ ls -l /mnt/MySATA_01/storj
total 52
-rw------- 1 root root 11645 Jun 9 18:45 config.yaml
drwx------ 4 root root 4096 Jun 9 18:46 orders
drwxr-xr-x 2 root root 4096 Jun 9 18:46 retain
-rw------- 1 root root 32768 Jun 9 19:16 revocations.db
drwx------ 5 root root 4096 Jun 9 23:56 storage
-rw------- 1 root root 933 Jun 9 19:16 trust-cache.json
epyc@epyc:~$

I even tried removing the user from my start script and using --privileged instead.
Didn’t help.
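From what I’ve read, the usual fix when the data is owned by root but the container is started with a non-root --user is to match the directory ownership to that UID:GID. A rough sketch (1000:1000 is just an assumed example, not necessarily the right value here):

id epyc                                          # check which UID/GID the host user maps to
sudo chown -R 1000:1000 /mnt/MySATA_01/storj     # replace 1000:1000 with the UID:GID passed to --user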

I ran sudo dmesg -Tw.

I see this around the time it crashes:
Is this related?

[Sun Jun 9 23:56:37 2024] veth2967f0d: renamed from eth0
[Sun Jun 9 23:56:37 2024] docker0: port 1(veth3f2094b) entered disabled state
[Sun Jun 9 23:56:37 2024] docker0: port 1(veth3f2094b) entered disabled state
[Sun Jun 9 23:56:37 2024] veth3f2094b (unregistering): left allmulticast mode
[Sun Jun 9 23:56:37 2024] veth3f2094b (unregistering): left promiscuous mode
[Sun Jun 9 23:56:37 2024] docker0: port 1(veth3f2094b) entered disabled state

Decided to try reformatting my data dir to see if that magically helps. Never like purging 14 TB, but whatever.

Currently my node is at 240 GB and hasn’t crashed yet, so hopefully it’s good now.

That said, I see us1 and eu1 showing 0% online,
even though audit and suspension are still 100%.

Saltlake and ap1 seem fine though, at 100% online.
Nearly 100% of my node has been filled by Saltlake.

How long until us1 and eu1 start going online again?
Should I just generate a new identity at this point?

It should update the reputation on each check-in, so every hour. But if the node is new, the total number of audits could be low, so downtime events hit the online score hard. Once the node starts responding to audits again, the online score will slowly recover, but since the check window is 30 days, it should fully recover after 30 days online. Each new downtime event requires another 30 days to recover.
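If you want to watch the recovery, you can also poll the node’s own dashboard API instead of the web UI. A rough sketch, assuming the default dashboard port 14002 is published on the host (the exact paths and field names may differ between versions):

curl -s http://localhost:14002/api/sno/satellites   # per-satellite stats, including the scores
curl -s http://localhost:14002/api/sno              # general node info

Piping the output through jq makes it easier to read, if you have it installed.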

Yeah, it seems to just sit at 0%.
I left that one running as-is.

Figured I’d start a second node just to make sure all is well. New node is 100% on everything and looks good.

So I guess I’ll leave both running. Eventually they’ll fill.
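For anyone following along: a second node basically just needs its own identity, its own data directory, and different host ports. Something along these lines (every value below is a placeholder, not my actual command):

docker run -d --restart unless-stopped --stop-timeout 300 \
  -p 28968:28967/tcp -p 28968:28967/udp -p 14003:14002 \
  -e WALLET="0x..." -e EMAIL="you@example.com" \
  -e ADDRESS="your.external.address:28968" -e STORAGE="10TB" \
  --mount type=bind,source=/path/to/identity2,destination=/app/identity \
  --mount type=bind,source=/mnt/node2,destination=/app/config \
  --name storagenode2 storjlabs/storagenode:latest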

Just glad my crashing issues are now resolved.
Even though the drives worked perfectly for storage and Chia… Storj didn’t like multiple of them.

SOLUTION:
So if anyone out there is hitting reboot loops while getting started, reformat the drive even if you’re 100% sure the drives are fine. Somehow it fixes it.

How is this a solution? You killed that node by deleting all the data. It will be disqualified very soon.

If you’re having issues with it reboot looping over files not found on a brand-new node (a couple of hours old), reformatting isn’t losing very much.

Is the newly-formatted node holding now?

Yes, seems good. Thanks for checking in.
Also made nodes 2 and 3 to make sure I knew what I was doing. All three are good.

1.1 TB full
0.6 TB full
0.6 TB full

I just wish I could speed up the trust process.
I’m more than happy to throw 1.3 PB at Storj…
but it’s going to take a very long time. lol

Glad to hear it!
By creating nodes 2 and 3 you have extended the time it takes to audit node 1. Remember that all traffic is shared behind the same /24 subnet.
You could kill the two smaller nodes and wait until the first one is fully vetted (that should take a couple of months, though there is talk about tweaking that time), then add another node, wait until that one is vetted, and so on.
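If you go that route, stopping and removing the extra containers cleanly is just something like this (the container names here are assumptions, use whatever you called yours):

docker stop -t 300 storagenode2 storagenode3
docker rm storagenode2 storagenode3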

Also, I believe the current test traffic is coming from Saltlake and it’s ignoring vetting status, so you’ll start joining in on the fun from the get-go, but only with test data. :slight_smile:

Does test data pay differently? Or does it possibly get deleted?
What’s my concern with it being test data?

And you’re right. Mostly from Saltlake.
Should I be taking advantage of this test data ignoring vetting and start up even more nodes? Haha.

Thanks!

It pays just like ordinary data, but it’s unlikely to last as long.
More nodes, when you already have three empty ones, are unlikely to bring you much more ingress (unless you’re very limited by the IOPS on each node’s HDD) because of the /24 subnet limitation I alluded to earlier. So it’s very much a “diminishing returns” kind of scenario.
It’s up to you to decide where you want to deploy your resources, of course :wink:

I have about 5 nodes running per IP address and until they start getting full of paying customer data I won’t bother deploying more.
