Hello @Alexey, I am receiving the same error after 25 days without problems.
I was checking my node almost every day and it was working, but now, after 23-25 days, it has suddenly stopped working.
My router was not restarted, and I made no changes to my IP or anything like that.
Here is my storagenode.log. Please check it and advise me how to proceed.
PS C:\WINDOWS\system32> cat "$env:ProgramFiles/Storj/Storage Node/storagenode.log" -Tail 20
2019-12-26T16:00:04.845+0200 INFO db.migration Database Version {"version": 26}
2019-12-26T16:00:04.867+0200 INFO Node 12rWuyoZQRdCZGFXUupW7MbxxugMeTK98ot7rx2ZDZMbMjqocqk started
2019-12-26T16:00:04.867+0200 INFO Public server started on [::]:28967
2019-12-26T16:00:04.867+0200 INFO Private server started on 127.0.0.1:7778
2019-12-26T16:00:04.868+0200 INFO pieces:trashchore Storagenode TrashChore starting up
2019-12-26T16:00:04.868+0200 INFO contact:chore Storagenode contact chore starting up
2019-12-26T16:00:04.867+0200 INFO bandwidth Performing bandwidth usage rollups
2019-12-26T16:00:05.007+0200 INFO version running on version v0.27.1
2019-12-26T16:00:05.040+0200 INFO orders.121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6 sending {"count": 2}
2019-12-26T16:00:05.040+0200 INFO orders.12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S sending {"count": 4}
2019-12-26T16:00:05.525+0200 INFO orders.12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S finished
2019-12-26T16:00:06.044+0200 INFO orders.121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6 finished
2019-12-26T16:00:08.567+0200 INFO Interrogate request received.
2019-12-26T16:00:08.683+0200 INFO Stop/Shutdown request received.
2019-12-26T16:00:15.450+0200 ERROR piecestore:cacheUpdate error getting current space used calculation: {"error": "context canceled"}
2019-12-26T16:00:15.450+0200 ERROR piecestore:cacheUpdate error persisting cache totals to the database: {"error": "piece space used error: context canceled", "errorVerbose": "piece space used error: context canceled\n\tstorj.io/storj/storagenode/storagenodedb.(*pieceSpaceUsedDB).UpdateTotal:115\n\tstorj.io/storj/storagenode/pieces.(*CacheService).PersistCacheTotals:82\n\tstorj.io/storj/storagenode/pieces.(*CacheService).Run.func1:68\n\tstorj.io/storj/private/sync2.(*Cycle).Run:87\n\tstorj.io/storj/storagenode/pieces.(*CacheService).Run:63\n\tstorj.io/storj/storagenode.(*Peer).Run.func6:445\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:57"}
2019-12-26T16:00:16.687+0200 ERROR orders archiving orders {"error": "ordersdb error: database is locked", "errorVerbose": "ordersdb error: database is locked\n\tstorj.io/storj/storagenode/storagenodedb.(*ordersDB).archiveOne:238\n\tstorj.io/storj/storagenode/storagenodedb.(*ordersDB).Archive:202\n\tstorj.io/storj/storagenode/orders.(*Service).handleBatches.func2:213\n\tstorj.io/storj/storagenode/orders.(*Service).handleBatches:237\n\tstorj.io/storj/storagenode/orders.(*Service).sendOrders.func1:164\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:57"}
2019-12-26T16:01:21.336+0200 ERROR bandwidth Could not rollup bandwidth usage {"error": "bandwidthdb error: database is locked", "errorVerbose": "bandwidthdb error: database is locked\n\tstorj.io/storj/storagenode/storagenodedb.(*bandwidthDB).Rollup:300\n\tstorj.io/storj/storagenode/bandwidth.(*Service).Rollup:53\n\tstorj.io/storj/private/sync2.(*Cycle).Run:87\n\tstorj.io/storj/storagenode/bandwidth.(*Service).Run:45\n\tstorj.io/storj/storagenode.(*Peer).Run.func9:454\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:57"}
2019-12-26T16:02:13.641+0200 ERROR collector error during collecting pieces: {"error": "piece expiration error: context canceled", "errorVerbose": "piece expiration error: context canceled\n\tstorj.io/storj/storagenode/storagenodedb.(*pieceExpirationDB).GetExpired:39\n\tstorj.io/storj/storagenode/pieces.(*Store).GetExpired:473\n\tstorj.io/storj/storagenode/collector.(*Service).Collect:88\n\tstorj.io/storj/storagenode/collector.(*Service).Run.func1:55\n\tstorj.io/storj/private/sync2.(*Cycle).Run:87\n\tstorj.io/storj/storagenode/collector.(*Service).Run:51\n\tstorj.io/storj/storagenode.(*Peer).Run.func4:439\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:57"}
2019-12-26T16:02:17.481+0200 FATAL Unrecoverable error {"error": "bandwidthdb error: disk I/O error", "errorVerbose": "bandwidthdb error: disk I/O error\n\tstorj.io/storj/storagenode/storagenodedb.(*bandwidthDB).getSummary:153\n\tstorj.io/storj/storagenode/storagenodedb.(*bandwidthDB).Summary:111\n\tstorj.io/storj/storagenode/storagenodedb.(*bandwidthDB).MonthSummary:73\n\tstorj.io/storj/storagenode/monitor.(*Service).usedBandwidth:174\n\tstorj.io/storj/storagenode/monitor.(*Service).Run:83\n\tstorj.io/storj/storagenode.(*Peer).Run.func2:433\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:57"}
PS C:\WINDOWS\system32>