Delta between used space on disk and used space on dashboard

Alright, then it matches the figures again, since the output needs to be multiplied by 30. Not really a file system for many small files…
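A rough back-of-the-envelope sketch of why many small files blow up on a large-cluster filesystem (the numbers below are illustrative assumptions, not measurements from this node): every file occupies at least one full allocation cluster, and exFAT defaults to much larger clusters than ext4 on big volumes.

```shell
# Assumed numbers: 100k small pieces of ~4 KiB each on an exFAT volume
# formatted with 128 KiB clusters. Each file still consumes a whole cluster.
files=100000
avg_size=4096        # logical bytes per file (assumed)
cluster=131072       # 128 KiB cluster size (assumed)

logical=$((files * avg_size))
# round each file up to a whole number of clusters
allocated=$(( files * ( (avg_size + cluster - 1) / cluster ) * cluster ))
echo "logical:   $((logical / 1048576)) MiB"
echo "allocated: $((allocated / 1048576)) MiB"
```

With these assumed numbers the on-disk figure is roughly 32x the logical size, the same order of magnitude as the 30x factor mentioned above.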

Might be worth a pointer in the installation manual.

So, I set up a new node and shrank my configuration to 3.7 TB

sudo docker run -d --restart unless-stopped --stop-timeout 300 -p 28967:28967/tcp -p 28967:28967/udp -p 14003:14002 -e WALLET="0xxxxxxcccccccxxxxx" -e EMAIL="" -e ADDRESS="" -e STORAGE="3.7TB" --log-opt max-size=50m --log-opt max-file=10 --mount type=bind,source="/media/pi/LaCie/identity/storagenode",destination=/app/identity --mount type=bind,source="/media/pi/LaCie",destination=/app/config --name storagenode storjlabs/storagenode:latest

What is wrong here? This is my error:

ERROR contact:service ping satellite failed {"Process": "storagenode", "Satellite ID": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6", "attempts": 8, "error": "ping satellite: check-in ratelimit: node rate limited by id", "errorVerbose": "ping satellite: check-in ratelimit: node rate limited by id\n\t(*Service).pingSatelliteOnce:143\n\t(*Service).pingSatellite:102\n\t(*Chore).updateCycles.func1:87\n\t(*Cycle).Run:99\n\t(*Cycle).Start.func1:77\n\t(*Group).Go.func1:75"}

It can’t contact that satellite. If the node doesn’t exit after it, and doesn’t show these errors for the other satellites as well, then it’s probably not much to worry about.

Otherwise it might be a DNS issue or some other loss of internet connectivity (although you wouldn’t be able to log in either, if that were the case); in that situation a reboot often helps.
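If you suspect DNS, a quick sanity check before rebooting might look like this (a sketch; `check_dns` is a hypothetical helper, and in practice you would test one of the satellite addresses your node actually uses rather than localhost):

```shell
# Returns success if the given hostname resolves via the system resolver.
check_dns() {
  getent hosts "$1" > /dev/null
}

if check_dns localhost; then
  echo "resolver works"
else
  echo "DNS broken - check /etc/resolv.conf or reboot"
fi
```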

Edit: better not to do this!! It can kill all your data and your node!

I forgot to run the initial setup command (docker run…); now it’s working again…

docker run --rm -e SETUP="true" --user $(id -u):$(id -g) --mount type=bind,source="/media/pi/LaCie/identity/storagenode",destination=/app/identity --mount type=bind,source="/media/pi/LaCie",destination=/app/config --name storagenode storjlabs/storagenode:latest

Uhm… Do you still have all your data?
Usually, you only have to execute the setup once, and I can’t relate this solution to the given problem.
So I doubt whether all mount points were bound as they should be.
But if it’s working, then it’s working…

Yes, still everything there. Only my availability is down to 90% due to long offline maintenance.
This story is closed for now.
Thanks everyone!


The only solution, as I understand it, is to reformat your disk from exFAT to ext4, as Andrey described a few posts above!

I still suffer from the same problem, so I restart my node more often now and hope this makes it better.

You should never run the setup command for a working node; this command is only for a new identity.
This command creates the needed folder structure, a verification file with the NodeID of the provided identity, and the default config.yaml in the data location.
So, if the paths are correct, a second run of this command must fail; this protects your node from disqualification due to user error.
If you remove config.yaml and run the setup command, it will override all the precautions that protect your node from disqualification, and if you are wrong (i.e. you provided a wrong path to the identity and/or data location), your node will be disqualified.

TL;DR: You must not run the setup command more than once for the entire node’s life (specifically, for the identity); otherwise it could be disqualified.

I have new data: I made a new node, and here I see a 5x difference.

And only 7 rows with a “connection broken” error, so that can’t account for a 5x difference.

It is a node on a 1 TB SSD, so it is very fast. I started it as an experiment.

Yeah, might be. But the disk usage is only updated once every few hours, and the bandwidth in particular is still inflated due to a bug that counts cancelled uploads, aside from the overhead of the Storj protocol. So, this needs some additional time and correction before you see the real figures.
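One way to see the gap between logical data and space consumed on disk yourself, assuming GNU coreutils du on the node (the path is the example one from this thread): compare the apparent size with the allocated size of the storage directory.

```shell
# Apparent (logical) size vs. space actually allocated on disk; the
# difference is filesystem overhead such as cluster slack.
du -sb /media/pi/LaCie/storage                 # apparent size, in bytes
du -s --block-size=1 /media/pi/LaCie/storage   # allocated size, in bytes
```

On a large-cluster filesystem like exFAT, the second number can be many times the first when the directory holds lots of small pieces.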