What could be the error?

Hello there,
since 1.6.2020 my node has been throwing the error “bandwidthdb: locking protocol” at least once a day.
But when I restart my node the error is gone and it runs as usual.

I also noticed that since 29.5.2020 my ISP drops the internet connection for about 30-50 seconds a day. Could that be the reason?

I just want to know, because I'm tired of checking almost every hour whether the node is still running, and I also got disqualified on europe-north (but that was only because my node was offline for too long, >3 h, which was also caused by this problem).

Could you please post the entire error?

Currently disqualification for down time is disabled. So something else must have gone wrong on europe north. Either your data was inaccessible or missing. That’s the only reason you can get disqualified right now.

Sorry, no, I can't, because every time I restart the node the errors get flushed.

The internet outages (or whatever you want to call them) happen because my ISP is currently working on the lines.

And I know there is no other error, because my other server, which also moves and deletes files on the storage server, works just fine, and after a restart the node works fine as well.

And I only had downtime, so it must be that?

(Downtime this month because of bandwidthdb locking errors is above 40 hours, because I always need about 3 to 4 hours to notice that the node isn't working.)

It doesn’t matter, downtime disqualification is not active. A friend of mine had his node offline for weeks and it still worked just fine. (Though I don’t recommend doing that) The downtime measurement system is being redesigned currently.

The disqualification can only happen due to data not being accessible. I’m 100% sure about that.

What I’m quite sure about, but not 100%, is that being offline wouldn’t lead to the error you mention either. Locking issues on the db’s are usually disk IO related. But without the full error it’s a little hard to say. It would be worth checking the file system for errors. Perhaps run some SMART tests on your HDD. Though you mentioned you tried several HDD’s. It’s still what most things are pointing to.
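For example, something along these lines (just a sketch; the device name /dev/sdb, the partition /dev/sdb1, the mount point /mnt/storj and the container name storagenode are assumptions, so adjust them to your setup, and only run fsck on an unmounted partition):

docker stop -t 300 storagenode   # stop the node so nothing writes to the disk
sudo umount /mnt/storj           # unmount the data partition before checking it
sudo fsck -f /dev/sdb1           # check the file system for errors
sudo smartctl -t short /dev/sdb  # start a short SMART self-test
sudo smartctl -a /dev/sdb        # show SMART attributes (re-run after the self-test finishes)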

I’ve restarted my router several times while my node is running (either to update or to change settings or fix an issue). It never caused a problem on my node. It’s highly unlikely that having no internet for a few seconds or minutes would cause an issue unless it just happens to restart while you are offline. The node doesn’t like starting without a connection, but if it’s already running being offline briefly is no problem.

So I tested what you said, that a few seconds or minutes offline with the node started while connected would be no problem:
I instantly got a few 500 errors in the web interface, and on the console the “bandwidthdb: locking protocol” error showed up.
After a restart with an internet connection (I reconnected the node to the internet after about 10 seconds) the node worked just fine, as before.

I think the disqualification from the satellite happened because my node wasn't offline but stopped responding due to that bandwidthdb locking protocol error, which was triggered by the few seconds without internet.

I should mention that the node runs on a brand new install of Debian, and Docker etc. are up to date.
My node version is 1.4.2, I think, but I have to say the “bandwidthdb: locking protocol” error has been there since I started with the node (I joined around v0.12, I think). Since then I have set up multiple nodes because of disk failures or other problems, and with the new node it isn't any better.

Could you now post the full error then?


When I type in docker exec -it storagenode /app/dashboard.sh, the first 3 lines load as normal, but after about 5 to 10 seconds it displays “bandwidthdb: locking protocol” and that's all I get. If I restart, the CLI dashboard is shown as normal again.

Ok, please have a look at your log to see what’s happening. The dashboard errors only apply to the dashboard and aren’t really relevant to the functioning of your node.

https://documentation.storj.io/resources/faq/check-logs

Please replicate the problem and then look at the last 20 log lines as instructed there.
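For example, assuming the default container name storagenode and that the node logs to the container (not redirected to a file):

docker logs --tail 20 storagenode            # show the last 20 log lines
docker logs --tail 20 --follow storagenode   # or follow the log live while reproducing the problem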

2020-06-11T09:26:36.705Z        WARN    retain  failed to delete piece  {"Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Piece ID": "3HCK5WHLFCNZ4ZZEHF5RDW4YYYWOCMGRZGMFGSU2XCK6AC357RSQ", "error": "pieces error: piece expiration error: locking protocol", "errorVerbose": "pieces error: piece expiration error: locking protocol\n\tstorj.io/storj/storagenode/storagenodedb.(*pieceExpirationDB).Trash:112\n\tstorj.io/storj/storagenode/pieces.(*Store).Trash:325\n\tstorj.io/storj/storagenode/retain.(*Service).retainPieces.func1:388\n\tstorj.io/storj/storagenode/pieces.(*Store).WalkSatellitePieces.func1:480\n\tstorj.io/storj/storage/filestore.walkNamespaceWithPrefix:706\n\tstorj.io/storj/storage/filestore.(*Dir).walkNamespaceInPath:649\n\tstorj.io/storj/storage/filestore.(*Dir).WalkNamespace:609\n\tstorj.io/storj/storage/filestore.(*blobStore).WalkNamespace:258\n\tstorj.io/storj/storagenode/pieces.(*Store).WalkSatellitePieces:468\n\tstorj.io/storj/storagenode/retain.(*Service).retainPieces:364\n\tstorj.io/storj/storagenode/retain.(*Service).Run.func2:220\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:57"}
2020-06-11T09:26:46.951Z        WARN    retain  failed to delete piece  {"Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Piece ID": "3O43IXZ7GH7HXI6K7J2UNAXQGFCIJPBFHC6IF6ROLQXWSNHI6BAA", "error": "pieces error: piece expiration error: locking protocol", "errorVerbose": "pieces error: piece expiration error: locking protocol\n\tstorj.io/storj/storagenode/storagenodedb.(*pieceExpirationDB).Trash:112\n\tstorj.io/storj/storagenode/pieces.(*Store).Trash:325\n\tstorj.io/storj/storagenode/retain.(*Service).retainPieces.func1:388\n\tstorj.io/storj/storagenode/pieces.(*Store).WalkSatellitePieces.func1:480\n\tstorj.io/storj/storage/filestore.walkNamespaceWithPrefix:706\n\tstorj.io/storj/storage/filestore.(*Dir).walkNamespaceInPath:649\n\tstorj.io/storj/storage/filestore.(*Dir).WalkNamespace:609\n\tstorj.io/storj/storage/filestore.(*blobStore).WalkNamespace:258\n\tstorj.io/storj/storagenode/pieces.(*Store).WalkSatellitePieces:468\n\tstorj.io/storj/storagenode/retain.(*Service).retainPieces:364\n\tstorj.io/storj/storagenode/retain.(*Service).Run.func2:220\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:57"}
2020-06-11T09:26:57.323Z        WARN    retain  failed to delete piece  {"Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Piece ID": "ETLQNKQKGGJQ7FNTA33SGGWY236OU7ZTLXVXWUC7NSGO4UJP2H4A", "error": "pieces error: piece expiration error: locking protocol", "errorVerbose": "pieces error: piece expiration error: locking protocol\n\tstorj.io/storj/storagenode/storagenodedb.(*pieceExpirationDB).Trash:112\n\tstorj.io/storj/storagenode/pieces.(*Store).Trash:325\n\tstorj.io/storj/storagenode/retain.(*Service).retainPieces.func1:388\n\tstorj.io/storj/storagenode/pieces.(*Store).WalkSatellitePieces.func1:480\n\tstorj.io/storj/storage/filestore.walkNamespaceWithPrefix:706\n\tstorj.io/storj/storage/filestore.(*Dir).walkNamespaceInPath:649\n\tstorj.io/storj/storage/filestore.(*Dir).WalkNamespace:609\n\tstorj.io/storj/storage/filestore.(*blobStore).WalkNamespace:258\n\tstorj.io/storj/storagenode/pieces.(*Store).WalkSatellitePieces:468\n\tstorj.io/storj/storagenode/retain.(*Service).retainPieces:364\n\tstorj.io/storj/storagenode/retain.(*Service).Run.func2:220\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:57"}
2020-06-11T09:27:07.417Z        WARN    retain  failed to delete piece  {"Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Piece ID": "ETZBBGEYKC3RZBUBWVUMB3Y55ZJZSCBNMFAGPIC3X4PJVBVY2CKA", "error": "pieces error: piece expiration error: locking protocol", "errorVerbose": "pieces error: piece expiration error: locking protocol\n\tstorj.io/storj/storagenode/storagenodedb.(*pieceExpirationDB).Trash:112\n\tstorj.io/storj/storagenode/pieces.(*Store).Trash:325\n\tstorj.io/storj/storagenode/retain.(*Service).retainPieces.func1:388\n\tstorj.io/storj/storagenode/pieces.(*Store).WalkSatellitePieces.func1:480\n\tstorj.io/storj/storage/filestore.walkNamespaceWithPrefix:706\n\tstorj.io/storj/storage/filestore.(*Dir).walkNamespaceInPath:649\n\tstorj.io/storj/storage/filestore.(*Dir).WalkNamespace:609\n\tstorj.io/storj/storage/filestore.(*blobStore).WalkNamespace:258\n\tstorj.io/storj/storagenode/pieces.(*Store).WalkSatellitePieces:468\n\tstorj.io/storj/storagenode/retain.(*Service).retainPieces:364\n\tstorj.io/storj/storagenode/retain.(*Service).Run.func2:220\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:57"}
2020-06-11T09:27:18.149Z        WARN    retain  failed to delete piece  {"Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Piece ID": "MXYZBAJ72IGTMTXRUCV5LOKRYGGRPR5CD2A2QRAHAPTPK3KB2BCQ", "error": "pieces error: piece expiration error: locking protocol", "errorVerbose": "pieces error: piece expiration error: locking protocol\n\tstorj.io/storj/storagenode/storagenodedb.(*pieceExpirationDB).Trash:112\n\tstorj.io/storj/storagenode/pieces.(*Store).Trash:325\n\tstorj.io/storj/storagenode/retain.(*Service).retainPieces.func1:388\n\tstorj.io/storj/storagenode/pieces.(*Store).WalkSatellitePieces.func1:480\n\tstorj.io/storj/storage/filestore.walkNamespaceWithPrefix:706\n\tstorj.io/storj/storage/filestore.(*Dir).walkNamespaceInPath:649\n\tstorj.io/storj/storage/filestore.(*Dir).WalkNamespace:609\n\tstorj.io/storj/storage/filestore.(*blobStore).WalkNamespace:258\n\tstorj.io/storj/storagenode/pieces.(*Store).WalkSatellitePieces:468\n\tstorj.io/storj/storagenode/retain.(*Service).retainPieces:364\n\tstorj.io/storj/storagenode/retain.(*Service).Run.func2:220\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:57"}
2020-06-11T09:27:28.350Z        WARN    retain  failed to delete piece  {"Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Piece ID": "N5ZOHA6FDTMWKSTWXF6BUIGZSCMXGNHCUAAOQZ2BBBXBCKHKUAMA", "error": "pieces error: piece expiration error: locking protocol", "errorVerbose": "pieces error: piece expiration error: locking protocol\n\tstorj.io/storj/storagenode/storagenodedb.(*pieceExpirationDB).Trash:112\n\tstorj.io/storj/storagenode/pieces.(*Store).Trash:325\n\tstorj.io/storj/storagenode/retain.(*Service).retainPieces.func1:388\n\tstorj.io/storj/storagenode/pieces.(*Store).WalkSatellitePieces.func1:480\n\tstorj.io/storj/storage/filestore.walkNamespaceWithPrefix:706\n\tstorj.io/storj/storage/filestore.(*Dir).walkNamespaceInPath:649\n\tstorj.io/storj/storage/filestore.(*Dir).WalkNamespace:609\n\tstorj.io/storj/storage/filestore.(*blobStore).WalkNamespace:258\n\tstorj.io/storj/storagenode/pieces.(*Store).WalkSatellitePieces:468\n\tstorj.io/storj/storagenode/retain.(*Service).retainPieces:364\n\tstorj.io/storj/storagenode/retain.(*Service).Run.func2:220\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:57"}
2020-06-11T09:27:38.533Z        WARN    retain  failed to delete piece  {"Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Piece ID": "NBQB5IOZWRO2SPO4ELU6KNGKAM3OMGPSDDWAQP2E6XZB4YOD42HQ", "error": "pieces error: piece expiration error: locking protocol", "errorVerbose": "pieces error: piece expiration error: locking protocol\n\tstorj.io/storj/storagenode/storagenodedb.(*pieceExpirationDB).Trash:112\n\tstorj.io/storj/storagenode/pieces.(*Store).Trash:325\n\tstorj.io/storj/storagenode/retain.(*Service).retainPieces.func1:388\n\tstorj.io/storj/storagenode/pieces.(*Store).WalkSatellitePieces.func1:480\n\tstorj.io/storj/storage/filestore.walkNamespaceWithPrefix:706\n\tstorj.io/storj/storage/filestore.(*Dir).walkNamespaceInPath:649\n\tstorj.io/storj/storage/filestore.(*Dir).WalkNamespace:609\n\tstorj.io/storj/storage/filestore.(*blobStore).WalkNamespace:258\n\tstorj.io/storj/storagenode/pieces.(*Store).WalkSatellitePieces:468\n\tstorj.io/storj/storagenode/retain.(*Service).retainPieces:364\n\tstorj.io/storj/storagenode/retain.(*Service).Run.func2:220\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:57"}
2020-06-11T09:27:48.873Z        WARN    retain  failed to delete piece  {"Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Piece ID": "NOHNWEZZ5724M2CVAUVD5BLY74CW754OM5YSNXIG453VWJ7H7S7Q", "error": "pieces error: piece expiration error: locking protocol", "errorVerbose": "pieces error: piece expiration error: locking protocol\n\tstorj.io/storj/storagenode/storagenodedb.(*pieceExpirationDB).Trash:112\n\tstorj.io/storj/storagenode/pieces.(*Store).Trash:325\n\tstorj.io/storj/storagenode/retain.(*Service).retainPieces.func1:388\n\tstorj.io/storj/storagenode/pieces.(*Store).WalkSatellitePieces.func1:480\n\tstorj.io/storj/storage/filestore.walkNamespaceWithPrefix:706\n\tstorj.io/storj/storage/filestore.(*Dir).walkNamespaceInPath:649\n\tstorj.io/storj/storage/filestore.(*Dir).WalkNamespace:609\n\tstorj.io/storj/storage/filestore.(*blobStore).WalkNamespace:258\n\tstorj.io/storj/storagenode/pieces.(*Store).WalkSatellitePieces:468\n\tstorj.io/storj/storagenode/retain.(*Service).retainPieces:364\n\tstorj.io/storj/storagenode/retain.(*Service).Run.func2:220\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:57"}
2020-06-11T09:27:59.556Z        WARN    retain  failed to delete piece  {"Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Piece ID": "WGNXHLK45VC6XCLDPEAKYFGPOM4FHOOPDS5W3BX42XMQG5OBZKDQ", "error": "pieces error: piece expiration error: locking protocol", "errorVerbose": "pieces error: piece expiration error: locking protocol\n\tstorj.io/storj/storagenode/storagenodedb.(*pieceExpirationDB).Trash:112\n\tstorj.io/storj/storagenode/pieces.(*Store).Trash:325\n\tstorj.io/storj/storagenode/retain.(*Service).retainPieces.func1:388\n\tstorj.io/storj/storagenode/pieces.(*Store).WalkSatellitePieces.func1:480\n\tstorj.io/storj/storage/filestore.walkNamespaceWithPrefix:706\n\tstorj.io/storj/storage/filestore.(*Dir).walkNamespaceInPath:649\n\tstorj.io/storj/storage/filestore.(*Dir).WalkNamespace:609\n\tstorj.io/storj/storage/filestore.(*blobStore).WalkNamespace:258\n\tstorj.io/storj/storagenode/pieces.(*Store).WalkSatellitePieces:468\n\tstorj.io/storj/storagenode/retain.(*Service).retainPieces:364\n\tstorj.io/storj/storagenode/retain.(*Service).Run.func2:220\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:57"}
2020-06-11T09:28:09.732Z        WARN    retain  failed to delete piece  {"Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Piece ID": "54EIQFQ52IVVGISOSBMHK2QZNAAJVQKXUAKB66LSZZXCF3JMRE3Q", "error": "pieces error: piece expiration error: locking protocol", "errorVerbose": "pieces error: piece expiration error: locking protocol\n\tstorj.io/storj/storagenode/storagenodedb.(*pieceExpirationDB).Trash:112\n\tstorj.io/storj/storagenode/pieces.(*Store).Trash:325\n\tstorj.io/storj/storagenode/retain.(*Service).retainPieces.func1:388\n\tstorj.io/storj/storagenode/pieces.(*Store).WalkSatellitePieces.func1:480\n\tstorj.io/storj/storage/filestore.walkNamespaceWithPrefix:706\n\tstorj.io/storj/storage/filestore.(*Dir).walkNamespaceInPath:649\n\tstorj.io/storj/storage/filestore.(*Dir).WalkNamespace:609\n\tstorj.io/storj/storage/filestore.(*blobStore).WalkNamespace:258\n\tstorj.io/storj/storagenode/pieces.(*Store).WalkSatellitePieces:468\n\tstorj.io/storj/storagenode/retain.(*Service).retainPieces:364\n\tstorj.io/storj/storagenode/retain.(*Service).Run.func2:220\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:57"}
2020-06-11T09:28:20.407Z        WARN    retain  failed to delete piece  {"Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Piece ID": "SIC247B47NUFFLPGJABB724CSJOPNPBSBMBRSG5Z6Z5QAWQVEOHA", "error": "pieces error: piece expiration error: locking protocol", "errorVerbose": "pieces error: piece expiration error: locking protocol\n\tstorj.io/storj/storagenode/storagenodedb.(*pieceExpirationDB).Trash:112\n\tstorj.io/storj/storagenode/pieces.(*Store).Trash:325\n\tstorj.io/storj/storagenode/retain.(*Service).retainPieces.func1:388\n\tstorj.io/storj/storagenode/pieces.(*Store).WalkSatellitePieces.func1:480\n\tstorj.io/storj/storage/filestore.walkNamespaceWithPrefix:706\n\tstorj.io/storj/storage/filestore.(*Dir).walkNamespaceInPath:649\n\tstorj.io/storj/storage/filestore.(*Dir).WalkNamespace:609\n\tstorj.io/storj/storage/filestore.(*blobStore).WalkNamespace:258\n\tstorj.io/storj/storagenode/pieces.(*Store).WalkSatellitePieces:468\n\tstorj.io/storj/storagenode/retain.(*Service).retainPieces:364\n\tstorj.io/storj/storagenode/retain.(*Service).Run.func2:220\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:57"}
2020-06-11T09:28:30.651Z        WARN    retain  failed to delete piece  {"Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Piece ID": "SQGEU3ZHPWYTGY647D2IJCUZ4MAA64ICEDR7TJLIZMAA34Z22QUQ", "error": "pieces error: piece expiration error: locking protocol", "errorVerbose": "pieces error: piece expiration error: locking protocol\n\tstorj.io/storj/storagenode/storagenodedb.(*pieceExpirationDB).Trash:112\n\tstorj.io/storj/storagenode/pieces.(*Store).Trash:325\n\tstorj.io/storj/storagenode/retain.(*Service).retainPieces.func1:388\n\tstorj.io/storj/storagenode/pieces.(*Store).WalkSatellitePieces.func1:480\n\tstorj.io/storj/storage/filestore.walkNamespaceWithPrefix:706\n\tstorj.io/storj/storage/filestore.(*Dir).walkNamespaceInPath:649\n\tstorj.io/storj/storage/filestore.(*Dir).WalkNamespace:609\n\tstorj.io/storj/storage/filestore.(*blobStore).WalkNamespace:258\n\tstorj.io/storj/storagenode/pieces.(*Store).WalkSatellitePieces:468\n\tstorj.io/storj/storagenode/retain.(*Service).retainPieces:364\n\tstorj.io/storj/storagenode/retain.(*Service).Run.func2:220\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:57"}
2020-06-11T09:28:40.911Z        WARN    retain  failed to delete piece  {"Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Piece ID": "SZBCXXAMDOCJFVTMFDKYVCFXL6ICW3HTDTZCSTPHK6JQMCS2KZCA", "error": "pieces error: piece expiration error: locking protocol", "errorVerbose": "pieces error: piece expiration error: locking protocol\n\tstorj.io/storj/storagenode/storagenodedb.(*pieceExpirationDB).Trash:112\n\tstorj.io/storj/storagenode/pieces.(*Store).Trash:325\n\tstorj.io/storj/storagenode/retain.(*Service).retainPieces.func1:388\n\tstorj.io/storj/storagenode/pieces.(*Store).WalkSatellitePieces.func1:480\n\tstorj.io/storj/storage/filestore.walkNamespaceWithPrefix:706\n\tstorj.io/storj/storage/filestore.(*Dir).walkNamespaceInPath:649\n\tstorj.io/storj/storage/filestore.(*Dir).WalkNamespace:609\n\tstorj.io/storj/storage/filestore.(*blobStore).WalkNamespace:258\n\tstorj.io/storj/storagenode/pieces.(*Store).WalkSatellitePieces:468\n\tstorj.io/storj/storagenode/retain.(*Service).retainPieces:364\n\tstorj.io/storj/storagenode/retain.(*Service).Run.func2:220\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:57"}
2020-06-11T09:28:51.023Z        WARN    retain  failed to delete piece  {"Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Piece ID": "T27D2CUU73IFL4E7LD4TKTXYOZVKNWULOHBRJFLCAP7JAMZWNVXA", "error": "pieces error: piece expiration error: locking protocol", "errorVerbose": "pieces error: piece expiration error: locking protocol\n\tstorj.io/storj/storagenode/storagenodedb.(*pieceExpirationDB).Trash:112\n\tstorj.io/storj/storagenode/pieces.(*Store).Trash:325\n\tstorj.io/storj/storagenode/retain.(*Service).retainPieces.func1:388\n\tstorj.io/storj/storagenode/pieces.(*Store).WalkSatellitePieces.func1:480\n\tstorj.io/storj/storage/filestore.walkNamespaceWithPrefix:706\n\tstorj.io/storj/storage/filestore.(*Dir).walkNamespaceInPath:649\n\tstorj.io/storj/storage/filestore.(*Dir).WalkNamespace:609\n\tstorj.io/storj/storage/filestore.(*blobStore).WalkNamespace:258\n\tstorj.io/storj/storagenode/pieces.(*Store).WalkSatellitePieces:468\n\tstorj.io/storj/storagenode/retain.(*Service).retainPieces:364\n\tstorj.io/storj/storagenode/retain.(*Service).Run.func2:220\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:57"}
2020-06-11T09:29:01.800Z        WARN    retain  failed to delete piece  {"Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Piece ID": "QQCJARE64XRCX2WCPBND3XJQJPNEVFZUOVOIG2JBYJSEKOUPKKWQ", "error": "pieces error: piece expiration error: locking protocol", "errorVerbose": "pieces error: piece expiration error: locking protocol\n\tstorj.io/storj/storagenode/storagenodedb.(*pieceExpirationDB).Trash:112\n\tstorj.io/storj/storagenode/pieces.(*Store).Trash:325\n\tstorj.io/storj/storagenode/retain.(*Service).retainPieces.func1:388\n\tstorj.io/storj/storagenode/pieces.(*Store).WalkSatellitePieces.func1:480\n\tstorj.io/storj/storage/filestore.walkNamespaceWithPrefix:706\n\tstorj.io/storj/storage/filestore.(*Dir).walkNamespaceInPath:649\n\tstorj.io/storj/storage/filestore.(*Dir).WalkNamespace:609\n\tstorj.io/storj/storage/filestore.(*blobStore).WalkNamespace:258\n\tstorj.io/storj/storagenode/pieces.(*Store).WalkSatellitePieces:468\n\tstorj.io/storj/storagenode/retain.(*Service).retainPieces:364\n\tstorj.io/storj/storagenode/retain.(*Service).Run.func2:220\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:57"}
2020-06-11T09:29:11.975Z        WARN    retain  failed to delete piece  {"Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Piece ID": "QUOWRYA3PUAQ6653A5MPEIWJTYZDMUTPOSZKQZSBTBOJ4Y6E2RSQ", "error": "pieces error: piece expiration error: locking protocol", "errorVerbose": "pieces error: piece expiration error: locking protocol\n\tstorj.io/storj/storagenode/storagenodedb.(*pieceExpirationDB).Trash:112\n\tstorj.io/storj/storagenode/pieces.(*Store).Trash:325\n\tstorj.io/storj/storagenode/retain.(*Service).retainPieces.func1:388\n\tstorj.io/storj/storagenode/pieces.(*Store).WalkSatellitePieces.func1:480\n\tstorj.io/storj/storage/filestore.walkNamespaceWithPrefix:706\n\tstorj.io/storj/storage/filestore.(*Dir).walkNamespaceInPath:649\n\tstorj.io/storj/storage/filestore.(*Dir).WalkNamespace:609\n\tstorj.io/storj/storage/filestore.(*blobStore).WalkNamespace:258\n\tstorj.io/storj/storagenode/pieces.(*Store).WalkSatellitePieces:468\n\tstorj.io/storj/storagenode/retain.(*Service).retainPieces:364\n\tstorj.io/storj/storagenode/retain.(*Service).Run.func2:220\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:57"}
2020-06-11T09:29:22.512Z        WARN    retain  failed to delete piece  {"Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Piece ID": "XH4W7KU26QKBTYENYALYQ4L6SOAX4GKPQIBNLSRUM3YISGLZWBNA", "error": "pieces error: piece expiration error: locking protocol", "errorVerbose": "pieces error: piece expiration error: locking protocol\n\tstorj.io/storj/storagenode/storagenodedb.(*pieceExpirationDB).Trash:112\n\tstorj.io/storj/storagenode/pieces.(*Store).Trash:325\n\tstorj.io/storj/storagenode/retain.(*Service).retainPieces.func1:388\n\tstorj.io/storj/storagenode/pieces.(*Store).WalkSatellitePieces.func1:480\n\tstorj.io/storj/storage/filestore.walkNamespaceWithPrefix:706\n\tstorj.io/storj/storage/filestore.(*Dir).walkNamespaceInPath:649\n\tstorj.io/storj/storage/filestore.(*Dir).WalkNamespace:609\n\tstorj.io/storj/storage/filestore.(*blobStore).WalkNamespace:258\n\tstorj.io/storj/storagenode/pieces.(*Store).WalkSatellitePieces:468\n\tstorj.io/storj/storagenode/retain.(*Service).retainPieces:364\n\tstorj.io/storj/storagenode/retain.(*Service).Run.func2:220\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:57"}
2020-06-11T09:29:32.795Z        WARN    retain  failed to delete piece  {"Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Piece ID": "XRGTEVCM3VBLKOMEYWM7RV6G5XK3OW6DQWAKSHNFMCMHGZH7MEGA", "error": "pieces error: piece expiration error: locking protocol", "errorVerbose": "pieces error: piece expiration error: locking protocol\n\tstorj.io/storj/storagenode/storagenodedb.(*pieceExpirationDB).Trash:112\n\tstorj.io/storj/storagenode/pieces.(*Store).Trash:325\n\tstorj.io/storj/storagenode/retain.(*Service).retainPieces.func1:388\n\tstorj.io/storj/storagenode/pieces.(*Store).WalkSatellitePieces.func1:480\n\tstorj.io/storj/storage/filestore.walkNamespaceWithPrefix:706\n\tstorj.io/storj/storage/filestore.(*Dir).walkNamespaceInPath:649\n\tstorj.io/storj/storage/filestore.(*Dir).WalkNamespace:609\n\tstorj.io/storj/storage/filestore.(*blobStore).WalkNamespace:258\n\tstorj.io/storj/storagenode/pieces.(*Store).WalkSatellitePieces:468\n\tstorj.io/storj/storagenode/retain.(*Service).retainPieces:364\n\tstorj.io/storj/storagenode/retain.(*Service).Run.func2:220\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:57"}
2020-06-11T09:29:42.952Z        ERROR   retain  retain pieces failed    {"error": "retain: locking protocol", "errorVerbose": "retain: locking protocol\n\tstorj.io/storj/storagenode/retain.(*Service).retainPieces:408\n\tstorj.io/storj/storagenode/retain.(*Service).Run.func2:220\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:57"}
2020-06-11T09:40:36.076Z        ERROR   gracefulexit:chore      error retrieving satellites.    {"error": "satellitesdb error: locking protocol", "errorVerbose": "satellitesdb error: locking protocol\n\tstorj.io/storj/storagenode/storagenodedb.(*satellitesDB).ListGracefulExits.func1:103\n\tstorj.io/storj/storagenode/storagenodedb.(*satellitesDB).ListGracefulExits:115\n\tstorj.io/storj/storagenode/gracefulexit.(*Chore).Run.func1:57\n\tstorj.io/common/sync2.(*Cycle).Run:152\n\tstorj.io/storj/storagenode/gracefulexit.(*Chore).Run:54\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func1:56\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:57"}

Those are the last 20 lines.
So now, when I restart the node, everything is just fine!
But I know the node somehow fails whenever the web interface doesn't load properly and some 500 errors appear.

I also checked the drive and the mounting … and there is no error anywhere.

Also, don't tell me that downtime disqualification is disabled, because now (the node had the error for about 1 day) I am suspended on 4 of 6 satellites (europe-north, europe-west, us-central and asia-east).

Also, I get locking protocol errors from multiple Storj DB files, and that is not my fault, because the node runs as root and all files and dirs are chmod 777.

Also, even with no internet the drive stays mounted, because it is connected via a SATA-to-USB converter and plugged in over USB 3.

Well, it is.

Your node seems to be online (checking in with satellites) but not responding to audits. Please try checking your database files for errors.
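A sketch of such a check (assuming sqlite3 is installed on the host and the databases live under /mnt/storj/storage; your path and db file names may differ), with the node stopped first:

docker stop -t 300 storagenode
sqlite3 /mnt/storj/storage/bandwidth.db "PRAGMA integrity_check;"   # should print "ok"
sqlite3 /mnt/storj/storage/orders.db "PRAGMA integrity_check;"      # repeat for the other .db files
docker start storagenode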

Yes, that is just what I told you!

But how could they be malformed when I never changed anything and did nothing since I started the node via the docker run command???

Power outages, unclean shutdowns, memory errors, a faulty HDD, file system errors and even a bad PSU can cause issues on HDD's. Who knows, really; let's first find out whether that is the issue, then worry about the possible causes.

Also make sure no other processes are using these db's and that no more than one storagenode process is running against them.
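One way to check that (a sketch; assumes lsof is installed and that /mnt/storj/storage is replaced with your actual db path):

sudo lsof /mnt/storj/storage/*.db        # list every process that has the db files open
docker ps --filter name=storagenode      # confirm only one storagenode container is running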

What OS are you on?

Power outage --> I have a UPS installed which can power the server for about 12 hours
unclean shutdowns --> The server runs non-stop without poweroffs or reboots (except reboots to install OS updates, but then I stop the node first and only then reboot the system)
memory errors --> no errors were displayed
faulty HDD --> I use a brand new WD Red 3TB, so no
file system errors --> no errors were displayed

The server runs just the node and nothing else, so no other processes are using these db files.

I am using Debian 10 with the newest updates (apt-get update && apt-get upgrade runs every 10 minutes via a cron job, but it isn't allowed to restart anything because it runs under a user that doesn't have rights to reboot).

I also checked the DB files as described in your linked article and I get an “ok” from all of them, so nothing is wrong with the DB files.

Well, the first 2 weeks or so actually carry a much higher risk of failure. But my main point was that we need to look at the db files first, and since they don't seem to have a problem, there is no point in going down this road any further.

Just to check though, that 3TB HDD, is that a WD30EFAX? That’s the new SMR model and if possible should be avoided.
I’m not saying it’s causing this issue, but just informing you in case you could still exchange it.
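If you want to double-check the exact model, something like this works (/dev/sdb is just an assumed device name; with a SATA-to-USB adapter you may need to add -d sat):

sudo smartctl -i /dev/sdb   # prints the device model (e.g. WD30EFAX) and serial number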

I’m a little out of ideas. Maybe someone else can help you along further.

Yes, I know this is an SMR model, but I only took it because a friend of mine let me have it for $40; he wanted to use the disk in a RAID, but that option is bad, so he just passed it on to me.

I just use it to make some money with the node and will then upgrade to a storage server.

Yes, me too. The thing is that it takes me about 3 to 5 hours to notice that the node is “online but somehow down”, and then I get disqualified on satellites for something I don't even know the cause of.
Also, the error recurs really regularly, every 24 to 30 hours.

“locking protocol” is this error from SQLite: https://www.sqlite.org/rescode.html#protocol

This ought to be a very hard error to get, unless perhaps some other process on the machine is holding a lock on the db in a way that violates the SQLite locking protocol. Or perhaps the filesystem layer is doing something unexpected that makes SQLite think its locking protocol is being violated. Are the database files mounted into the docker container via a bind mount, or are they on the docker overlay volume? What type of filesystem is being used?
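To answer those questions you could run something like this on the host (a sketch; the container name storagenode and the mount point /mnt/storj are assumptions):

docker inspect --format '{{ json .Mounts }}' storagenode   # shows whether the storage path is a bind mount or a volume
df -T /mnt/storj                                           # shows the filesystem type behind the mount point
lsblk -f                                                   # lists all block devices with their filesystems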

If no one else is having these errors, it seems likely to be something specific to your setup. But there may be other nodes having this error. Maybe we need to wait to find that out.

Mounted via a bind mount into the node.

Debian 10 with all settings left at their defaults, so I think Debian 10 uses FAT32?

The chance that another process wants to open the files is zero, because only the node and nothing else runs on this server.

No filesystem errors were displayed.

Could it be that I have to install SQLite on my host server or something like that?

Please, show the result of the command:

blkid

Run in the container or on the host PC?