Node went offline - Emptying trash failed

My node went offline recently and I couldn't find a way to bring it back online. Rebooting or restarting Docker didn't help.
Here's the Docker log.
I'm using a 500GB HDD, but after formatting it only has 450GB. Everything ran smoothly until now, when the log started saying the total disk space is less than the required minimum.

2020-02-29T03:16:06.977Z	INFO	Node 12LWkyBdmn4mYQAwc2DvJKusKFv123Hif9HDhFkRHGNFAR7qe3E started
2020-02-29T03:16:06.987Z	INFO	Public server started on [::]:28967
2020-02-29T03:16:06.988Z	INFO	Private server started on 127.0.0.1:7778
2020-02-29T03:16:07.000Z	INFO	piecestore:monitor	Remaining Bandwidth	{"bytes": 1977997311744}
2020-02-29T03:16:07.000Z	ERROR	piecestore:monitor	Total disk space less than required minimum	{"bytes": 500000000000}
2020-02-29T03:16:07.001Z	ERROR	pieces:trash	emptying trash failed	{"error": "pieces error: filestore error: context canceled", "errorVerbose": "pieces error: filestore error: context canceled\n\tstorj.io/storj/storage/filestore.(*blobStore).EmptyTrash:127\n\tstorj.io/storj/storagenode/pieces.(*BlobsUsageCache).EmptyTrash:309\n\tstorj.io/storj/storagenode/pieces.(*Store).EmptyTrash:329\n\tstorj.io/storj/storagenode/pieces.(*TrashChore).Run.func1:51\n\tstorj.io/common/sync2.(*Cycle).Run:87\n\tstorj.io/common/sync2.(*Cycle).Start.func1:68\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:57"}
2020-02-29T03:16:07.003Z	ERROR	pieces:trash	emptying trash failed	{"error": "pieces error: filestore error: context canceled", "errorVerbose": "pieces error: filestore error: context canceled\n\tstorj.io/storj/storage/filestore.(*blobStore).EmptyTrash:127\n\tstorj.io/storj/storagenode/pieces.(*BlobsUsageCache).EmptyTrash:309\n\tstorj.io/storj/storagenode/pieces.(*Store).EmptyTrash:329\n\tstorj.io/storj/storagenode/pieces.(*TrashChore).Run.func1:51\n\tstorj.io/common/sync2.(*Cycle).Run:87\n\tstorj.io/common/sync2.(*Cycle).Start.func1:68\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:57"}
2020-02-29T03:16:07.003Z	ERROR	pieces:trash	emptying trash failed	{"error": "pieces error: filestore error: context canceled", "errorVerbose": "pieces error: filestore error: context canceled\n\tstorj.io/storj/storage/filestore.(*blobStore).EmptyTrash:127\n\tstorj.io/storj/storagenode/pieces.(*BlobsUsageCache).EmptyTrash:309\n\tstorj.io/storj/storagenode/pieces.(*Store).EmptyTrash:329\n\tstorj.io/storj/storagenode/pieces.(*TrashChore).Run.func1:51\n\tstorj.io/common/sync2.(*Cycle).Run:87\n\tstorj.io/common/sync2.(*Cycle).Start.func1:68\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:57"}
2020-02-29T03:16:07.004Z	ERROR	version	Failed to do periodic version check: version control client error: Get https://version.storj.io: context canceled
2020-02-29T03:16:07.004Z	ERROR	pieces:trash	emptying trash failed	{"error": "pieces error: filestore error: context canceled", "errorVerbose": "pieces error: filestore error: context canceled\n\tstorj.io/storj/storage/filestore.(*blobStore).EmptyTrash:127\n\tstorj.io/storj/storagenode/pieces.(*BlobsUsageCache).EmptyTrash:309\n\tstorj.io/storj/storagenode/pieces.(*Store).EmptyTrash:329\n\tstorj.io/storj/storagenode/pieces.(*TrashChore).Run.func1:51\n\tstorj.io/common/sync2.(*Cycle).Run:87\n\tstorj.io/common/sync2.(*Cycle).Start.func1:68\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:57"}
2020-02-29T03:16:07.005Z	ERROR	pieces:trash	emptying trash failed	{"error": "pieces error: filestore error: context canceled", "errorVerbose": "pieces error: filestore error: context canceled\n\tstorj.io/storj/storage/filestore.(*blobStore).EmptyTrash:127\n\tstorj.io/storj/storagenode/pieces.(*BlobsUsageCache).EmptyTrash:309\n\tstorj.io/storj/storagenode/pieces.(*Store).EmptyTrash:329\n\tstorj.io/storj/storagenode/pieces.(*TrashChore).Run.func1:51\n\tstorj.io/common/sync2.(*Cycle).Run:87\n\tstorj.io/common/sync2.(*Cycle).Start.func1:68\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:57"}
2020-02-29T03:16:07.029Z	ERROR	piecestore:cache	error getting current space used calculation: 	{"error": "context canceled; context canceled; context canceled; context canceled; context canceled", "errorVerbose": "group:\n--- context canceled\n--- context canceled\n--- context canceled\n--- context canceled\n--- context canceled"}
Error: piecestore monitor: disk space requirement not met

Yes, as it says:

https://documentation.storj.io/before-you-begin/prerequisites

So the only fix now is to replace the HDD with a bigger one? What happens to the data on my old HDD?

You would need to migrate your node from this HDD to a bigger one with at least 500GB of actual free space, plus 10% more as overhead. 500GB HDDs don't actually offer 500GB of usable space after formatting.

https://documentation.storj.io/resources/faq/migrate-my-node

It says to copy all the files in /home/pi/.local/share/storj/identity/storagenode and update the --mount parameters. Do I need to move all the contents of my 475GB HDD, or can I just plug in a new bigger drive and that's it?

You need to copy your identity folder and storage folder to the new drive, then update the docker run command with the correct paths.
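A minimal sketch of that migration, assuming the paths from this thread and a new drive mounted at /mnt/newdrive (hypothetical mount point; adjust to your setup):

```shell
# Stop and remove the container first so the data doesn't change mid-copy
docker stop -t 300 storagenode
docker rm storagenode

# Copy identity and data; re-run rsync until it reports no more changes
rsync -aP /home/pi/.local/share/storj/identity/storagenode /mnt/newdrive/identity/
rsync -aP /mnt/storagenode/ /mnt/newdrive/storagenode/

# Then start the node again with the --mount sources pointing at the new paths:
#   --mount type=bind,source="/mnt/newdrive/identity/storagenode",destination=/app/identity
#   --mount type=bind,source="/mnt/newdrive/storagenode",destination=/app/config
```

Run the node only after a final rsync pass with the old node stopped, so the copy is consistent.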

1 Like

You have to migrate your data to the bigger disk, unless your OS has a feature to extend the space, such as SHR on Synology.

1 Like

Please do not extend the disk that way. That would be a RAID0: with one disk failure, the whole node would be lost.

1 Like

SHR is actually RAID1/5 based, depending on the number of HDDs; SHR-2 is RAID6 based. Both of those would be OK, but in most cases it would be better to run multiple nodes. I have no idea whether the OP has a Synology though, so it's quite a leap to advise a Synology-specific solution.

Most likely you'll be best off just copying the data to a new larger HDD and running the node again from there.

3 Likes

Okay, so here's what I've done so far.

  • Rsynced the storage node folder on my Pi to my computer.
  • Removed the storage node folder on my Pi and deleted the storage node docker container.
  • Rsynced the storage node folder back to the Pi and started a new storage node.
  • When I start the container and check the logs, it still says total disk space is less than the minimum.
    Here's the log:
    https://hastebin.com/ihogavonuz.coffeescript
    When I run the audits_satellites script, the error it gives is: Error: piecestore monitor: disk space requirement not met.
    I've also seen that all my successful audits are gone, replaced with 6 recoverable failed audits.
    Is there something I can do to fix this situation? Thanks a lot.

Show output of this command:

df -h

1 Like

Here it is.

>         Filesystem      Size  Used Avail Use% Mounted on
>         /dev/root        15G  6.4G  7.4G  47% /
>         devtmpfs        484M     0  484M   0% /dev
>         tmpfs           488M  656K  488M   1% /dev/shm
>         tmpfs           488M   56M  433M  12% /run
>         tmpfs           5.0M  4.0K  5.0M   1% /run/lock
>         tmpfs           488M     0  488M   0% /sys/fs/cgroup
>         /dev/mmcblk0p1  253M   54M  199M  22% /boot
>         tmpfs            98M     0   98M   0% /run/user/999
>         /dev/sda1       916G  228M  870G   1% /mnt/storagenode
>         tmpfs            98M     0   98M   0% /run/user/1000

How much space did you allocate to the node?
Please post your docker run command.

1 Like

I assigned 920GB to it.

docker run -d --restart unless-stopped -p 28967:28967 \
    -p 14002:14002 \
    -e WALLET="" \
    -e EMAIL="" \
    -e ADDRESS="[redacted]:28967" \
    -e BANDWIDTH="3TB" \
    -e STORAGE="920GB" \
    --mount type=bind,source="/home/pi/.local/share/storj/identity/storagenode",destination=/app/identity \
    --mount type=bind,source="/mnt/storagenode",destination=/app/config \
    --name storagenode storjlabs/storagenode:beta

That STORAGE value should be 800GB to be safe.

2 Likes

You can remove your email address and wallet [from the post].

You are aware that you assigned more HDD space than your actual HDD offers?

2 Likes

I was completely unaware of that. After reformatting it says I have 935GB, so I assigned 920GB to the node. I guess this is the culprit.

You should assign 10% less than the maximum available space.
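The 10% overhead rule can be worked out with a bit of shell arithmetic (a sketch; 935 is the capacity mentioned in this thread, substitute your own number):

```shell
#!/bin/sh
# Compute a safe STORAGE allocation: 90% of the formatted capacity.
capacity_gb=935                         # GB reported after formatting (from this thread)
alloc_gb=$(( capacity_gb * 90 / 100 ))  # leave ~10% as overhead
echo "STORAGE=${alloc_gb}GB"            # prints STORAGE=841GB; 800GB is a comfortable round-down
```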

3 Likes

Thanks for your answer. Now that that's solved, can you take a look at this? I see some errors relating to blobscache and failed uploads. Should I be worried about them?
2020-03-02T07:23:50.336Z ERROR blobscache trashTotal < 0 {"trashTotal": -23488512}

2020-03-02T07:23:50.356Z INFO version running on version v0.33.4

2020-03-02T07:23:50.628Z ERROR blobscache trashTotal < 0 {"trashTotal": -156887808}

2020-03-02T07:23:51.371Z ERROR blobscache trashTotal < 0 {"trashTotal": -236288512}

2020-03-02T07:24:36.054Z INFO piecestore upload started {"Piece ID": "TDCOMOPM3IHCXD5FBG4554ADY2SEHEEIO3Y75ZDFL2XTX3ZJAQQQ", "Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Action": "PUT", "Available Bandwidth": 2999041998848, "Available Space": 795864057600}

2020-03-02T07:24:38.702Z INFO piecestore upload failed {"Piece ID": "TDCOMOPM3IHCXD5FBG4554ADY2SEHEEIO3Y75ZDFL2XTX3ZJAQQQ", "Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Action": "PUT", "error": "context canceled", "errorVerbose": "context canceled\n\tstorj.io/common/rpc/rpcstatus.Wrap:79\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).doUpload:394\n\tstorj.io/storj/storagenode/piecestore.(*drpcEndpoint).Upload:257\n\tstorj.io/common/pb.DRPCPiecestoreDescription.Method.func1:1066\n\tstorj.io/drpc/drpcserver.(*Server).doHandle:199\n\tstorj.io/drpc/drpcserver.(*Server).HandleRPC:173\n\tstorj.io/drpc/drpcserver.(*Server).ServeOne:124\n\tstorj.io/drpc/drpcserver.(*Server).Serve.func2:161\n\tstorj.io/drpc/drpcctx.(*Tracker).track:51"}

2020-03-02T07:25:22.445Z INFO piecestore upload started {"Piece ID": "K7KAGBXV74VSUHRTW4DLEBH6NZTWFR2PKKGKTYI7VQGCBE6XXNTQ", "Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Action": "PUT", "Available Bandwidth": 2999041212416, "Available Space": 795864057600}

2020-03-02T07:25:26.371Z INFO piecestore upload failed {"Piece ID": "K7KAGBXV74VSUHRTW4DLEBH6NZTWFR2PKKGKTYI7VQGCBE6XXNTQ", "Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Action": "PUT", "error": "context canceled", "errorVerbose": "context canceled\n\tstorj.io/common/rpc/rpcstatus.Wrap:79\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).doUpload:394\n\tstorj.io/storj/storagenode/piecestore.(*drpcEndpoint).Upload:257\n\tstorj.io/common/pb.DRPCPiecestoreDescription.Method.func1:1066\n\tstorj.io/drpc/drpcserver.(*Server).doHandle:199\n\tstorj.io/drpc/drpcserver.(*Server).HandleRPC:173\n\tstorj.io/drpc/drpcserver.(*Server).ServeOne:124\n\tstorj.io/drpc/drpcserver.(*Server).Serve.func2:161\n\tstorj.io/drpc/drpcctx.(*Tracker).track:51"}

2020-03-02T07:26:03.680Z INFO piecestore upload started {"Piece ID": "QYYF4KGSAGXGX4NWGSJSNCRQFOI4BCXZYUZLUJUQ7VWFRZJG6PUQ", "Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Action": "PUT", "Available Bandwidth": 2999040688128, "Available Space": 795864057600}

2020-03-02T07:26:07.783Z INFO piecestore upload failed {"Piece ID": "QYYF4KGSAGXGX4NWGSJSNCRQFOI4BCXZYUZLUJUQ7VWFRZJG6PUQ", "Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Action": "PUT", "error": "context canceled", "errorVerbose": "context canceled\n\tstorj.io/common/rpc/rpcstatus.Wrap:79\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).doUpload:394\n\tstorj.io/storj/storagenode/piecestore.(*drpcEndpoint).Upload:257\n\tstorj.io/common/pb.DRPCPiecestoreDescription.Method.func1:1066\n\tstorj.io/drpc/drpcserver.(*Server).doHandle:199\n\tstorj.io/drpc/drpcserver.(*Server).HandleRPC:173\n\tstorj.io/drpc/drpcserver.(*Server).ServeOne:124\n\tstorj.io/drpc/drpcserver.(*Server).Serve.func2:161\n\tstorj.io/drpc/drpcctx.(*Tracker).track:51"}

Not sure about these.
Uploads failing with "context canceled" are normal, as long as some uploads are successful.
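One quick way to sanity-check this is to compare counts of successful and canceled uploads in the log (a sketch; assumes the container is named storagenode, as in the docker run command above):

```shell
# Successful uploads log as "uploaded"; canceled ones as "upload failed"
docker logs storagenode 2>&1 | grep -c 'piecestore.*uploaded'
docker logs storagenode 2>&1 | grep -c 'upload failed.*context canceled'
```

As long as the first number keeps growing, the occasional canceled upload is nothing to worry about.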

1 Like