Node got disqualified

I got a disqualification notice on a new node. I am not sure what to do; the node seems to be working.
These are the last 20 lines from the log:
rock64@rock64:~$ sudo docker logs --tail 20 storagenode
2020-06-11T22:47:24.585Z ERROR piecedeleter delete failed {"Satellite ID": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6", "Piece ID": "NWBPXJPKVR2DYBG5TQCNB4P7I26VXZKT5RQCO5OCVAO5MIBBY26Q", "error": "pieces error: filestore error: file does not exist", "errorVerbose": "pieces error: filestore error: file does not exist\n\tstorj.io/storj/storage/filestore.(*blobStore).Stat:96\n\tstorj.io/storj/storagenode/pieces.(*BlobsUsageCache).pieceSizes:238\n\tstorj.io/storj/storagenode/pieces.(*BlobsUsageCache).Delete:220\n\tstorj.io/storj/storagenode/pieces.(*Store).Delete:281\n\tstorj.io/storj/storagenode/pieces.(*Deleter).work:135\n\tstorj.io/storj/storagenode/pieces.(*Deleter).Run.func1:72\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:57"}
2020-06-11T22:47:34.454Z ERROR piecedeleter delete failed {"Satellite ID": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6", "Piece ID": "OCGOBWCWWWOYKTPYKJHBE4AUVEUFPZL2AL2J2POSNOMNTOU6XHNA", "error": "pieces error: filestore error: file does not exist", "errorVerbose": "pieces error: filestore error: file does not exist\n\tstorj.io/storj/storage/filestore.(*blobStore).Stat:96\n\tstorj.io/storj/storagenode/pieces.(*BlobsUsageCache).pieceSizes:238\n\tstorj.io/storj/storagenode/pieces.(*BlobsUsageCache).Delete:220\n\tstorj.io/storj/storagenode/pieces.(*Store).Delete:281\n\tstorj.io/storj/storagenode/pieces.(*Deleter).work:135\n\tstorj.io/storj/storagenode/pieces.(*Deleter).Run.func1:72\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:57"}
2020-06-11T23:00:55.245Z INFO bandwidth Performing bandwidth usage rollups
2020-06-11T23:00:55.260Z ERROR piecestore:cache error persisting cache totals to the database: {"error": "piece space used error: disk I/O error", "errorVerbose": "piece space used error: disk I/O error\n\tstorj.io/storj/storagenode/storagenodedb.(*pieceSpaceUsedDB).UpdatePieceTotals:174\n\tstorj.io/storj/storagenode/pieces.(*CacheService).PersistCacheTotals:100\n\tstorj.io/storj/storagenode/pieces.(*CacheService).Run.func1:85\n\tstorj.io/common/sync2.(*Cycle).Run:152\n\tstorj.io/storj/storagenode/pieces.(*CacheService).Run:80\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func1:56\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:57"}
2020-06-11T23:00:55.324Z ERROR bandwidth Could not rollup bandwidth usage {"error": "disk I/O error"}
2020-06-11T23:03:52.588Z INFO piecestore download started {"Piece ID": "RFECRT7T3EGJZY6FQGHWUJMHKMNFOTSXASAHC6GYABKGI4ED7INA", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "GET_AUDIT"}
2020-06-11T23:03:52.600Z ERROR piecestore download failed {"Piece ID": "RFECRT7T3EGJZY6FQGHWUJMHKMNFOTSXASAHC6GYABKGI4ED7INA", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "GET_AUDIT", "error": "usedserialsdb error: disk I/O error", "errorVerbose": "usedserialsdb error: disk I/O error\n\tstorj.io/storj/storagenode/storagenodedb.(*usedSerialsDB).Add:35\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).verifyOrderLimit:76\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).Download:459\n\tstorj.io/common/pb.DRPCPiecestoreDescription.Method.func2:1004\n\tstorj.io/drpc/drpcmux.(*Mux).HandleRPC:107\n\tstorj.io/common/rpc/rpctracing.(*Handler).HandleRPC:56\n\tstorj.io/drpc/drpcserver.(*Server).handleRPC:111\n\tstorj.io/drpc/drpcserver.(*Server).ServeOne:62\n\tstorj.io/drpc/drpcserver.(*Server).Serve.func2:99\n\tstorj.io/drpc/drpcctx.(*Tracker).track:51"}
2020-06-11T23:04:55.583Z INFO orders.12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S sending {"count": 17}
2020-06-11T23:04:55.583Z INFO orders.118UWpMCHzs6CvSgWd9BfFVjw5K9pZbJjkfZJexMtSkmKxvvAW sending {"count": 1}
2020-06-11T23:04:55.585Z INFO orders.12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs sending {"count": 97}
2020-06-11T23:04:55.586Z INFO orders.121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6 sending {"count": 92}
2020-06-11T23:04:55.588Z INFO orders.12rfG3sh9NCWiX3ivPjq2HtdLmbqCrvHVEzJubnzFzosMuawymB sending {"count": 205}
2020-06-11T23:04:55.589Z INFO orders.1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE sending {"count": 19}
2020-06-11T23:04:55.948Z INFO orders.12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S finished
2020-06-11T23:04:56.002Z INFO orders.1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE finished
2020-06-11T23:04:56.059Z INFO orders.118UWpMCHzs6CvSgWd9BfFVjw5K9pZbJjkfZJexMtSkmKxvvAW finished
2020-06-11T23:04:56.190Z INFO orders.12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs finished
2020-06-11T23:04:56.591Z INFO orders.12rfG3sh9NCWiX3ivPjq2HtdLmbqCrvHVEzJubnzFzosMuawymB finished
2020-06-11T23:04:56.870Z INFO orders.121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6 finished
2020-06-11T23:04:57.105Z ERROR orders archiving orders {"error": "disk I/O error"}

Any ideas on how to resolve this?
Thank you
Ilan

How is your HDD connected?

This error is pretty clear. Did you start the node with a new identity? If you used the old one, the satellite will treat it as the same node and expect all the data from the old node to be there. In that case it unfortunately means your old node was disqualified.
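If you are not sure which identity you started with, you can at least confirm that the identity directory you mounted is the one you think it is and that it is properly signed. A minimal sketch, assuming the default identity location (adjust the path if yours differs):

ls -l ~/.local/share/storj/identity/storagenode/   # file dates show when this identity was created
grep -c BEGIN ~/.local/share/storj/identity/storagenode/ca.cert        # a signed identity shows 2
grep -c BEGIN ~/.local/share/storj/identity/storagenode/identity.cert  # a signed identity shows 3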

It could also be an HDD-related issue, as @nerdatwork suggested. It would be nice to know what model it is, whether you mounted it with fstab, and how it's connected.
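Since the log is full of disk I/O errors, it is also worth checking the drive and the USB link directly. A quick sketch, assuming the drive shows up as /dev/sda (yours may differ):

sudo dmesg | grep -iE 'usb|ata|error'   # look for USB resets or kernel I/O errors
sudo smartctl -a /dev/sda               # SMART health report (needs smartmontools; try adding -d sat if the USB bridge hides SMART data)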

Hi, @nerdatwork and @BrightSilence
Thanks for the support. This is a new node that I started after a network disaster took all of my old nodes offline for a few weeks. I reformatted the SD card and reinstalled Linux and the Storj components. I deleted all of the old files from the hard drive and cleanly started the new node. (At least I hope I did…) The node has only been online for two days, and yesterday evening it got disqualified, and there was an issue with the hard drive not mounting correctly.
I never had this issue before. I ran the command sudo mount -a
and was able to mount the drive correctly. I then ran ls /mnt/Seagate
and saw that all the files were there. After that I restarted Storj, checked the dashboard via Docker, and the node appears to be online and operating correctly.
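A quick way to double-check that the directory really is the mounted disk and not an empty mountpoint (standard Linux tools; the path is from my setup):

findmnt /mnt/Seagate   # prints the device and filesystem if mounted, nothing at all if not
df -h /mnt/Seagate     # the reported size should match the 8 TB drive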
I am not sure why the system is looking for the old node credentials. Is there a way to avoid it?
The hard drive is an 8 TB Seagate, mounted via fstab and connected to the Rock64 machine through a USB 3.0 port.
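For reference, the fstab entry looks something like the line below (the UUID here is a placeholder; the real one comes from sudo blkid):

UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx  /mnt/Seagate  ext4  defaults  0  2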
Any ideas on how to avoid this in the future?
Thanks
Ilan

I would suggest using a subfolder on the disk instead of the root of the mountpoint, like this:
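A minimal sketch, assuming the disk is mounted at /mnt/Seagate and keeping your other docker run options unchanged (the subfolder name is just an example). Because --mount refuses to start the container when the source path does not exist, the node will fail fast if the disk is not mounted, instead of filling the empty mountpoint on the SD card:

mkdir -p /mnt/Seagate/storagenode
docker run -d ... \
  --mount type=bind,source=/mnt/Seagate/storagenode,destination=/app/config \
  ... storjlabs/storagenode:latest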

Also, the external disk should have an additional power supply; power over USB alone is not enough.

It is mounted via fstab.
The hard drive has its own power supply, which is on a UPS, so power failure should be less of an issue.

Ilan