My Raspberry Pi 3 node keeps on restarting for some reason

My Raspberry Pi node keeps restarting for some reason. I have looked at other forum posts that discuss similar issues; however, none of the fixes seem to work for me.

Welcome @vogonp42! I’ll bring this to the team for some further opinions. If you have any additional details that might help them discover the issue, please do post them here.

Hey @vogonp42 !
Could you post the logs of the failing containers?

You can get them like this:
docker ps -a
docker logs XXXX (where XXXX is the ID of one of the containers, most likely the one that exited with status code 1)
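
If there are a lot of containers, you can also narrow the list down to the stopped ones first (an optional extra step, using docker's standard status filter):

docker ps -a --filter "status=exited"   # only containers that have exited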

Hello @vogonp42 ,
Welcome to the forum!

Please post 20 lines from the log: How do I check my logs? - Node Operator
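
For a docker setup that is usually just (assuming the container uses the default name storagenode):

docker logs --tail 20 storagenode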


Actually, it started working again for no apparent reason. I don’t really know how. I may investigate further to see if I can figure out what I did.

Orange Pi, x64, latest version. When I watch the node dashboard, the application crashes after a couple of hours (maybe one). I looked at the log and nothing showed up.

Please search for OOM events in the system logs, with something like:

journalctl | grep -i oom

Search for errors in the logs of the storagenode container:

docker logs storagenode 2>&1 | grep ERROR | tail

Bad news: I already cleaned all of the logs, but I will try to reproduce the bug now. It appeared when version 1.34.3 was pushed.

I have been investigating my node to figure out why it started working again. I think it was a mounting issue that was fixed by the reboot and replug I did when moving the node closer to the router, so I no longer needed my long Ethernet cable.
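
In case anyone else runs into the same thing: it may be worth confirming the data drive is really mounted before starting the container. A minimal sketch (the mount point and UUID below are made up, adjust them to your own setup):

# see which devices are mounted where, with filesystem type and UUID
lsblk -f
df -h /mnt/storagenode
# to mount it reliably at boot, add a line like this to /etc/fstab:
# UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx  /mnt/storagenode  ext4  defaults  0  2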


docker logs storagenode 2>&1 | grep ERROR | tail
2021-10-01T12:25:22.299Z  ERROR  collector  unable to delete piece  {"Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Piece ID": "YFJN7BTEHXRSB6BNH2WWRH6HIQOILHGIMGTDSV7RJ4IXVBOHGXAQ", "error": "pieces error: filestore error: file does not exist", "errorVerbose": "pieces error: filestore error: file does not exist\n\tstorj.io/storj/storage/filestore.(*blobStore).Stat:103\n\tstorj.io/storj/storagenode/pieces.(*BlobsUsageCache).pieceSizes:239\n\tstorj.io/storj/storagenode/pieces.(*BlobsUsageCache).Delete:220\n\tstorj.io/storj/storagenode/pieces.(*Store).Delete:299\n\tstorj.io/storj/storagenode/collector.(*Service).Collect:97\n\tstorj.io/storj/storagenode/collector.(*Service).Run.func1:57\n\tstorj.io/common/sync2.(*Cycle).Run:92\n\tstorj.io/storj/storagenode/collector.(*Service).Run:53\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2.1:87\n\truntime/pprof.Do:40\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2:86\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:57"}
2021-10-01T12:25:22.514Z  ERROR  collector  unable to delete piece  {"Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Piece ID": "XA6LWUJCUSF6TSE4NDN7EET3UCTXVISADGNIDWQI6IOQBDOSWJNA", "error": "pieces error: filestore error: file does not exist", "errorVerbose": "pieces error: filestore error: file does not exist\n\tstorj.io/storj/storage/filestore.(*blobStore).Stat:103\n\tstorj.io/storj/storagenode/pieces.(*BlobsUsageCache).pieceSizes:239\n\tstorj.io/storj/storagenode/pieces.(*BlobsUsageCache).Delete:220\n\tstorj.io/storj/storagenode/pieces.(*Store).Delete:299\n\tstorj.io/storj/storagenode/collector.(*Service).Collect:97\n\tstorj.io/storj/storagenode/collector.(*Service).Run.func1:57\n\tstorj.io/common/sync2.(*Cycle).Run:92\n\tstorj.io/storj/storagenode/collector.(*Service).Run:53\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2.1:87\n\truntime/pprof.Do:40\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2:86\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:57"}
2021-10-01T12:25:22.577Z  ERROR  collector  unable to delete piece  {"Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Piece ID": "JJRICZT4U5IV7PVYHXTUNCT2RBE4GH5RYFMICK2S3TA7IQ2Q4KNQ", "error": "pieces error: filestore error: file does not exist", "errorVerbose": "pieces error: filestore error: file does not exist\n\tstorj.io/storj/storage/filestore.(*blobStore).Stat:103\n\tstorj.io/storj/storagenode/pieces.(*BlobsUsageCache).pieceSizes:239\n\tstorj.io/storj/storagenode/pieces.(*BlobsUsageCache).Delete:220\n\tstorj.io/storj/storagenode/pieces.(*Store).Delete:299\n\tstorj.io/storj/storagenode/collector.(*Service).Collect:97\n\tstorj.io/storj/storagenode/collector.(*Service).Run.func1:57\n\tstorj.io/common/sync2.(*Cycle).Run:92\n\tstorj.io/storj/storagenode/collector.(*Service).Run:53\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2.1:87\n\truntime/pprof.Do:40\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2:86\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:57"}
2021-10-01T12:25:22.604Z  ERROR  collector  unable to delete piece  {"Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Piece ID": "RQPDKIJIO5M3N357W3BO7KSJBQ56HFDLB67VPQONRAAOJ5G2F6DQ", "error": "pieces error: filestore error: file does not exist", "errorVerbose": "pieces error: filestore error: file does not exist\n\tstorj.io/storj/storage/filestore.(*blobStore).Stat:103\n\tstorj.io/storj/storagenode/pieces.(*BlobsUsageCache).pieceSizes:239\n\tstorj.io/storj/storagenode/pieces.(*BlobsUsageCache).Delete:220\n\tstorj.io/storj/storagenode/pieces.(*Store).Delete:299\n\tstorj.io/storj/storagenode/collector.(*Service).Collect:97\n\tstorj.io/storj/storagenode/collector.(*Service).Run.func1:57\n\tstorj.io/common/sync2.(*Cycle).Run:92\n\tstorj.io/storj/storagenode/collector.(*Service).Run:53\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2.1:87\n\truntime/pprof.Do:40\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2:86\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:57"}
2021-10-01T12:25:22.643Z  ERROR  collector  unable to delete piece  {"Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Piece ID": "EVXESWBD6KYW5RC5QSXO55MNVYJQ3TMJ4PLFAYAGZVDXMCHQGZYA", "error": "pieces error: filestore error: file does not exist", "errorVerbose": "pieces error: filestore error: file does not exist\n\tstorj.io/storj/storage/filestore.(*blobStore).Stat:103\n\tstorj.io/storj/storagenode/pieces.(*BlobsUsageCache).pieceSizes:239\n\tstorj.io/storj/storagenode/pieces.(*BlobsUsageCache).Delete:220\n\tstorj.io/storj/storagenode/pieces.(*Store).Delete:299\n\tstorj.io/storj/storagenode/collector.(*Service).Collect:97\n\tstorj.io/storj/storagenode/collector.(*Service).Run.func1:57\n\tstorj.io/common/sync2.(*Cycle).Run:92\n\tstorj.io/storj/storagenode/collector.(*Service).Run:53\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2.1:87\n\truntime/pprof.Do:40\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2:86\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:57"}
2021-10-01T12:25:22.682Z  ERROR  collector  unable to delete piece  {"Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Piece ID": "7WTCRCHVLZX43ZXAPJDGLEUZK3QCN7ZWBA5ZXGICSQAVGQSP5OWA", "error": "pieces error: filestore error: file does not exist", "errorVerbose": "pieces error: filestore error: file does not exist\n\tstorj.io/storj/storage/filestore.(*blobStore).Stat:103\n\tstorj.io/storj/storagenode/pieces.(*BlobsUsageCache).pieceSizes:239\n\tstorj.io/storj/storagenode/pieces.(*BlobsUsageCache).Delete:220\n\tstorj.io/storj/storagenode/pieces.(*Store).Delete:299\n\tstorj.io/storj/storagenode/collector.(*Service).Collect:97\n\tstorj.io/storj/storagenode/collector.(*Service).Run.func1:57\n\tstorj.io/common/sync2.(*Cycle).Run:92\n\tstorj.io/storj/storagenode/collector.(*Service).Run:53\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2.1:87\n\truntime/pprof.Do:40\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2:86\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:57"}
2021-10-01T12:25:22.960Z  ERROR  collector  unable to delete piece  {"Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Piece ID": "MBL6ZXTMBVFHBEZVVE2THK5HKJO5Z6B4BB76ENSXO2N4VZLG3GOQ", "error": "pieces error: filestore error: file does not exist", "errorVerbose": "pieces error: filestore error: file does not exist\n\tstorj.io/storj/storage/filestore.(*blobStore).Stat:103\n\tstorj.io/storj/storagenode/pieces.(*BlobsUsageCache).pieceSizes:239\n\tstorj.io/storj/storagenode/pieces.(*BlobsUsageCache).Delete:220\n\tstorj.io/storj/storagenode/pieces.(*Store).Delete:299\n\tstorj.io/storj/storagenode/collector.(*Service).Collect:97\n\tstorj.io/storj/storagenode/collector.(*Service).Run.func1:57\n\tstorj.io/common/sync2.(*Cycle).Run:92\n\tstorj.io/storj/storagenode/collector.(*Service).Run:53\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2.1:87\n\truntime/pprof.Do:40\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2:86\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:57"}
2021-10-01T12:25:23.082Z  ERROR  collector  unable to delete piece  {"Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Piece ID": "SUYT5QLDJRKBZNO7RNPCMPR3VUUMWXBQYXGVKNAR5TPON4GACVQQ", "error": "pieces error: filestore error: file does not exist", "errorVerbose": "pieces error: filestore error: file does not exist\n\tstorj.io/storj/storage/filestore.(*blobStore).Stat:103\n\tstorj.io/storj/storagenode/pieces.(*BlobsUsageCache).pieceSizes:239\n\tstorj.io/storj/storagenode/pieces.(*BlobsUsageCache).Delete:220\n\tstorj.io/storj/storagenode/pieces.(*Store).Delete:299\n\tstorj.io/storj/storagenode/collector.(*Service).Collect:97\n\tstorj.io/storj/storagenode/collector.(*Service).Run.func1:57\n\tstorj.io/common/sync2.(*Cycle).Run:92\n\tstorj.io/storj/storagenode/collector.(*Service).Run:53\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2.1:87\n\truntime/pprof.Do:40\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2:86\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:57"}
2021-10-01T12:25:23.139Z  ERROR  collector  unable to delete piece  {"Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Piece ID": "FYFZMDYSWWNNWB7J5Q5ICVUX2HZBXONIKQ5R6JZQVRPITN3IB57Q", "error": "pieces error: filestore error: file does not exist", "errorVerbose": "pieces error: filestore error: file does not exist\n\tstorj.io/storj/storage/filestore.(*blobStore).Stat:103\n\tstorj.io/storj/storagenode/pieces.(*BlobsUsageCache).pieceSizes:239\n\tstorj.io/storj/storagenode/pieces.(*BlobsUsageCache).Delete:220\n\tstorj.io/storj/storagenode/pieces.(*Store).Delete:299\n\tstorj.io/storj/storagenode/collector.(*Service).Collect:97\n\tstorj.io/storj/storagenode/collector.(*Service).Run.func1:57\n\tstorj.io/common/sync2.(*Cycle).Run:92\n\tstorj.io/storj/storagenode/collector.(*Service).Run:53\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2.1:87\n\truntime/pprof.Do:40\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2:86\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:57"}
2021-10-01T12:25:23.170Z  ERROR  collector  unable to delete piece  {"Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Piece ID": "T6OJROQLXN5WC2F3VOETIIWCHKCYTIGOS2K5PWSRGDMBQ4V6K3DA", "error": "pieces error: filestore error: file does not exist", "errorVerbose": "pieces error: filestore error: file does not exist\n\tstorj.io/storj/storage/filestore.(*blobStore).Stat:103\n\tstorj.io/storj/storagenode/pieces.(*BlobsUsageCache).pieceSizes:239\n\tstorj.io/storj/storagenode/pieces.(*BlobsUsageCache).Delete:220\n\tstorj.io/storj/storagenode/pieces.(*Store).Delete:299\n\tstorj.io/storj/storagenode/collector.(*Service).Collect:97\n\tstorj.io/storj/storagenode/collector.(*Service).Run.func1:57\n\tstorj.io/common/sync2.(*Cycle).Run:92\n\tstorj.io/storj/storagenode/collector.(*Service).Run:53\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2.1:87\n\truntime/pprof.Do:40\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2:86\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:57"}

journalctl | grep -i oom
Oct 01 15:24:50 orangepioneplus kernel: cron invoked oom-killer: gfp_mask=0x100cca(GFP_HIGHUSER_MOVABLE), order=0, oom_score_adj=0
Oct 01 15:24:50 orangepioneplus kernel: oom_kill_process+0x200/0x208
Oct 01 15:24:50 orangepioneplus kernel: [ pid ] uid tgid total_vm rss pgtables_bytes swapents oom_score_adj name
Oct 01 15:24:50 orangepioneplus kernel: oom-kill:constraint=CONSTRAINT_NONE,nodemask=(null),cpuset=/,mems_allowed=0,global_oom,task_memcg=/docker/f9870f9b3bac336ce21d942c8d725ce51f8bbe56fd5fecf53fbd7a1aa04dda2e,task=storagenode,pid=30076,uid=0
Oct 01 15:24:50 orangepioneplus kernel: Out of memory: Killed process 30076 (storagenode) total-vm:1969616kB, anon-rss:242684kB, file-rss:0kB, shmem-rss:0kB, UID:0 pgtables:2636kB oom_score_adj:0
Oct 01 15:24:50 orangepioneplus kernel: oom_reaper: reaped process 30076 (storagenode), now anon-rss:0kB, file-rss:0kB, shmem-rss:0kB

Please give me the history for one of those pieces, starting from its upload to your node; otherwise it is unfortunately not much use. The repeated messages can be omitted.
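
Something like this should pull everything the node has logged for a single piece, taking the last Piece ID from your output as an example:

docker logs storagenode 2>&1 | grep T6OJROQLXN5WC2F3VOETIIWCHKCYTIGOS2K5PWSRGDMBQ4V6K3DA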

Piece ID T6OJROQLXN5WC2F3VOETIIWCHKCYTIGOS2K5PWSRGDMBQ4V6K3DA:
2021-09-15T13:30:20.165Z  INFO  piecestore  upload started  {"Piece ID": "T6OJROQLXN5WC2F3VOETIIWCHKCYTIGOS2K5PWSRGDMBQ4V6K3DA", "Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Action": "PUT", "Available Space": 1534187681232}
2021-09-15T13:30:28.258Z  INFO  piecestore  uploaded  {"Piece ID": "T6OJROQLXN5WC2F3VOETIIWCHKCYTIGOS2K5PWSRGDMBQ4V6K3DA", "Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Action": "PUT", "Size": 2319360}
2021-09-30T14:34:31.204Z  ERROR  collector  unable to delete piece  {"Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Piece ID": "T6OJROQLXN5WC2F3VOETIIWCHKCYTIGOS2K5PWSRGDMBQ4V6K3DA", "error": "pieces error: database is locked", "errorVerbose": "pieces error: database is locked\n\tstorj.io/storj/storagenode/pieces.(*Store).Delete:319\n\tstorj.io/storj/storagenode/collector.(*Service).Collect:97\n\tstorj.io/storj/storagenode/collector.(*Service).Run.func1:57\n\tstorj.io/common/sync2.(*Cycle).Run:152\n\tstorj.io/storj/storagenode/collector.(*Service).Run:53\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2.1:87\n\truntime/pprof.Do:40\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2:86\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:57"}
2021-09-30T14:59:52.351Z  ERROR  collector  unable to delete piece  {"Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Piece ID": "T6OJROQLXN5WC2F3VOETIIWCHKCYTIGOS2K5PWSRGDMBQ4V6K3DA", "error": "pieces error: filestore error: file does not exist", "errorVerbose": "pieces error: filestore error: file does not exist\n\tstorj.io/storj/storage/filestore.(*blobStore).Stat:103\n\tstorj.io/storj/storagenode/pieces.(*BlobsUsageCache).pieceSizes:239\n\tstorj.io/storj/storagenode/pieces.(*BlobsUsageCache).Delete:220\n\tstorj.io/storj/storagenode/pieces.(*Store).Delete:299\n\tstorj.io/storj/storagenode/collector.(*Service).Collect:97\n\tstorj.io/storj/storagenode/collector.(*Service).Run.func1:57\n\tstorj.io/common/sync2.(*Cycle).Run:152\n\tstorj.io/storj/storagenode/collector.(*Service).Run:53\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2.1:87\n\truntime/pprof.Do:40\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2:86\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:57"}
…
2021-10-02T07:07:43.943Z  ERROR  collector  unable to delete piece  {"Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Piece ID": "T6OJROQLXN5WC2F3VOETIIWCHKCYTIGOS2K5PWSRGDMBQ4V6K3DA", "error": "pieces error: filestore error: file does not exist", "errorVerbose": "pieces error: filestore error: file does not exist\n\tstorj.io/storj/storage/filestore.(*blobStore).Stat:103\n\tstorj.io/storj/storagenode/pieces.(*BlobsUsageCache).pieceSizes:239\n\tstorj.io/storj/storagenode/pieces.(*BlobsUsageCache).Delete:220\n\tstorj.io/storj/storagenode/pieces.(*Store).Delete:299\n\tstorj.io/storj/storagenode/collector.(*Service).Collect:97\n\tstorj.io/storj/storagenode/collector.(*Service).Run.func1:57\n\tstorj.io/common/sync2.(*Cycle).Run:152\n\tstorj.io/storj/storagenode/collector.(*Service).Run:53\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2.1:87\n\truntime/pprof.Do:40\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2:86\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:57"}

It seems your disk cannot handle the load. If you see a lot of such errors, I would recommend restarting the container, although that would not be a real solution. I hope the disk is not NTFS.

The piece was either lost or deleted earlier. You can try to recreate it to allow the collector to delete it.

fs is btrfs. I don't care about the delete errors for now; I care about the oom-kill, since storj crashes often. Could I somehow limit the RAM it uses? (About a year ago I tried limiting it via docker, but that didn't work properly.) I only have 1 GB of RAM.
The reason for the crashes is (I think):

  1. downloading data (uploads work fine) combined with using the web GUI
This is the reason. See Topics tagged btrfs
It has a much higher memory consumption than ext4. This FS is also the reason for the slowness: it uses COW by default, which makes the disk slower, and it also doesn't handle locks very well. I would recommend migrating to ext4 if possible.
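
As for limiting RAM: docker itself can enforce a hard memory cap on the container with its standard resource flags, for example by adding something like the following to your existing docker run command (the values are just an illustration, and note the kernel will still OOM-kill the process inside the container once the cap is hit, so this contains the damage rather than fixing the memory use itself):

--memory 700m --memory-swap 1g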

Are you using an official RPi power supply?
My node (RPi 3B+) was super unstable until I got a power supply made for it.