I can see that the node flicks between restarting and connected

Same problem here.
Doing a

docker ps -a

I can see the node flicking between restarting and connected; in fact, the dashboard takes some time to load.
Everything happened after the last update.

The node worked fine until yesterday.
Looking in the config.yaml, all the fields are empty, i.e. not populated (e.g. contact.external-address: “” is completely empty). I think that’s because of the failed docker start: the container keeps restarting, so it seems it can’t write to the config.yaml.

CONTAINER ID        IMAGE                        COMMAND                  CREATED              STATUS                         PORTS               NAMES
XXXXXXXXXXXX        storjlabs/storagenode:beta   "/entrypoint"            About a minute ago   Restarting (1) 2 seconds ago                       storagenode
XXXXXXXXXXXX        storjlabs/watchtower         "/watchtower storage…"   6 days ago           Up 2 minutes                                       watchtower

It just keeps restarting.
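A quick way to confirm the restart loop and surface the exit code (a sketch; the container name matches the docker ps output above):

docker inspect -f 'restarts={{.RestartCount}} exit={{.State.ExitCode}}' storagenode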

Docker is not the recommended option in the Storj manual. Many people have had issues with Docker.

You should go for GUI Installation:
https://documentation.storj.io/setup/gui-windows
https://documentation.storj.io/setup/gui-windows/storage-node

Even if you get it to work, Docker seems to fail continuously and needs constant monitoring to avoid disqualification.

Sorry for the misunderstanding… I’m on a Raspberry Pi.

Show the output of this command:

docker logs --tail 20 storagenode

2020-03-05T11:26:49.219Z        INFO    piecestore:monitor      Remaining Bandwidth     {"bytes": 30000000000000}
2020-03-05T11:26:49.220Z        WARN    piecestore:monitor      Disk space is less than requested. Allocating space     {"bytes": 28238868480}
2020-03-05T11:26:49.220Z        ERROR   piecestore:monitor      Total disk space less than required minimum     {"bytes": 500000000000}
2020-03-05T11:26:49.221Z        ERROR   piecestore:cache        error persisting cache totals to the database:  {"error": "piece space used error: context canceled", "errorVerbose": "piece space used error: context canceled\n\tstorj.io/storj/storagenode/storagenodedb.(*pieceSpaceUsedDB).UpdatePieceTotals:174\n\tstorj.io/storj/storagenode/pieces.(*CacheService).PersistCacheTotals:100\n\tstorj.io/storj/storagenode/pieces.(*CacheService).Run.func1:85\n\tstorj.io/common/sync2.(*Cycle).Run:87\n\tstorj.io/storj/storagenode/pieces.(*CacheService).Run:80\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func1:56\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:57"}
2020-03-05T11:26:49.222Z        ERROR   version Failed to do periodic version check: version control client error: Get https://version.storj.io: context canceled
2020-03-05T11:26:49.222Z        ERROR   gracefulexit:chore      error retrieving satellites.    {"error": "satellitesdb error: context canceled", "errorVerbose": "satellitesdb error: context canceled\n\tstorj.io/storj/storagenode/storagenodedb.(*satellitesDB).ListGracefulExits.func1:103\n\tstorj.io/storj/storagenode/storagenodedb.(*satellitesDB).ListGracefulExits:115\n\tstorj.io/storj/storagenode/gracefulexit.(*Chore).Run.func1:57\n\tstorj.io/common/sync2.(*Cycle).Run:87\n\tstorj.io/storj/storagenode/gracefulexit.(*Chore).Run:54\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func1:56\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:57"}
Error: piecestore monitor: disk space requirement not met

Show your docker run command and remove any personal info from it.

sudo docker run -d --restart always -p 28967:28967 -p 14002:14002 -e WALLET="wallet" -e EMAIL="MAIL" -e ADDRESS="my.personal.ip:28967" -e BANDWIDTH="30TB" -e STORAGE="3TB" --mount type=bind,source=/my/destinationof/identity,destination=/app/identity --mount type=bind,source=/mnt/drive,destination=/app/config --name storagenode storjlabs/storagenode:beta

You can also show your paths, to make sure your identity files are accessed properly.
Show the output of this command:

df -h

Thank you everybody for your suggestions.
I found out where the problem was. Some background:
Storage node on a Raspberry Pi; the HDD is a WD My Cloud EX2 Ultra (4 TB).
As the HDD is network-attached, I access it as an NFS share on the local network. This means that the df -h output wasn’t showing my partition, while somehow the Raspberry was still showing the /mnt/storagenode folder with files inside, even though the share wasn’t mounted anymore.
I did some research and discovered that my /etc/fstab entry wasn’t working at boot; basically, I had to mount the NFS HDD manually after every boot.
The fstab mount was failing because the HDD is reached over WiFi, and the Raspberry couldn’t connect to WiFi fast enough during boot.
The fix: run raspi-config and turn on Wait for Network at Boot in the Boot Options section.
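For the record, there is also an fstab-only way to avoid the boot race: let systemd mount the share on first access instead of at boot. A sketch, with a placeholder NAS address and export path:

# /etc/fstab — placeholder server address and export path
192.168.1.50:/volume1/storj  /mnt/storagenode  nfs  defaults,_netdev,x-systemd.automount,x-systemd.mount-timeout=30  0  0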
Everything works fine now!

Thanks everybody for the help!
Stay safe, Italy is a desert lately…


Please stop your node immediately, unmount the drive and mount it to a different folder, then move the blobs from /mnt/storagenode to the disk mounted in the different folder.
For example, if your blobs are in /mnt/storagenode/storage/blobs while the disk is unmounted, and the new mount point has a similar path after you mount your disk there, for example /media/storagenode/storage/blobs, you should copy them like this:

cp -r /mnt/storagenode/storage/blobs /media/storagenode/storage/

Now you can unmount your disk from the temporary folder (/media/storagenode in our example) and mount it normally with sudo mount -a.
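Put together, the whole sequence looks roughly like this (a sketch only; the NFS server address and export path are placeholders, and the stop timeout just gives the node time to shut down cleanly):

docker stop -t 300 storagenode
sudo umount /mnt/storagenode                  # only if the share is currently mounted
sudo mkdir -p /media/storagenode
sudo mount -t nfs 192.168.1.50:/volume1/storj /media/storagenode
cp -r /mnt/storagenode/storage/blobs /media/storagenode/storage/
sudo umount /media/storagenode
sudo mount -a                                 # remounts the share at /mnt/storagenode per fstab
docker start storagenode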

However, I would suggest you check your web dashboard; it could be too late, and your node may already be disqualified because of storing data in the mount point folder instead of on the disk.
The problem is that data written to the mount point folder is hidden when the disk is mounted, and vice versa…

It’s not recommended to run a storage node with storage connected via network, especially NFS: https://forum.storj.io/tag/nfs
And even less so via WiFi.
FYI, the Raspberry Pi 3 is limited to 100 Mbit for the network, which you will use both for node communication and for storage transfers.
Your node will lose almost every piece in the race because of latency.


OK, thanks for the suggestion… I did everything, and the storage node seems fine and online right now…
Is there any technical doc on this very problem?
I’ll research NFS a bit… I wanted to take advantage of this unused 4 TB WiFi enclosure, but if it turns out not to be efficient, I’ll drop it and go with a wired setup…

It’s not just that it’s inefficient. The SQLite databases used by the node can get corrupted when used over NFS, which could either break your node or lead to disqualification as well. The only network protocol that can work without issues is iSCSI. But because of latency, it is very much recommended that you directly attach the storage to the node.
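For reference, a minimal sketch of attaching a NAS volume over iSCSI with open-iscsi; this assumes the NAS can expose an iSCSI target, and the portal address and IQN below are placeholders:

sudo apt install open-iscsi
sudo iscsiadm -m discovery -t sendtargets -p 192.168.1.50       # list targets offered by the NAS
sudo iscsiadm -m node --targetname iqn.2020-03.local.nas:storj --portal 192.168.1.50:3260 --login
sudo mkfs.ext4 /dev/sdX                                         # the LUN appears as a local block device; format it once
sudo mount /dev/sdX /mnt/storagenode

After that, the node sees an ordinary local filesystem (which is why SQLite is safe on it); the latency caveat above still applies.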