OK, so after struggling for a while I decided to do a rip and replace. I have updated my node to newer Linux and Docker versions:
storj@pine64:~$ uname -a
Linux pine64 5.4.43-sunxi64 #20.05.2 SMP Tue Jun 2 17:20:17 CEST 2020 aarch64 aarch64 aarch64 GNU/Linux
storj@pine64:~$ docker --version
Docker version 19.03.12, build 48a6621
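For reference, the experimental flag I mention removing below is set in /etc/docker/daemon.json on my install (that path is my assumption for this Armbian image; adjust as needed). A sketch of the change and how to confirm it took effect:

```shell
# Illustrative /etc/docker/daemon.json with the flag still on
# (not my exact config; any other keys should be left in place):
#
#   {
#     "experimental": true
#   }
#
# After deleting the "experimental" entry and restarting the daemon
# (e.g. sudo systemctl restart docker), this prints the daemon's
# current experimental state from the client side:
docker version --format '{{.Server.Experimental}}'
```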
I’ve now removed the experimental flag from Docker and everything has started up fine. However, because I’ve been down for so long, I think I may have been locked out. I am seeing the following error when starting my node:
2020-08-03T01:03:59.266216306Z 2020-08-03T01:03:59.265Z INFO Configuration loaded {"Location": "/app/config/config.yaml"}
2020-08-03T01:03:59.274312022Z 2020-08-03T01:03:59.273Z INFO Operator email {"Address": "galewis@yaddatech.com"}
2020-08-03T01:03:59.274641657Z 2020-08-03T01:03:59.274Z INFO Operator wallet {"Address": "0x0000000000000000000000000000000000000"}
2020-08-03T01:04:10.431166292Z 2020-08-03T01:04:10.430Z INFO Telemetry enabled
2020-08-03T01:04:10.508318495Z 2020-08-03T01:04:10.507Z INFO db.migration Database Version {"version": 42}
2020-08-03T01:04:11.454759799Z 2020-08-03T01:04:11.454Z INFO preflight:localtime start checking local system clock with trusted satellites' system clock.
2020-08-03T01:04:12.346559907Z 2020-08-03T01:04:12.345Z INFO preflight:localtime local system clock is in sync with trusted satellites' system clock.
2020-08-03T01:04:12.347000421Z 2020-08-03T01:04:12.346Z INFO bandwidth Performing bandwidth usage rollups
2020-08-03T01:04:12.348359297Z 2020-08-03T01:04:12.347Z INFO Node 12mYxtBsxSKpbZrh1bwZ9kMrrPx2W9SxDu62qrJWaFHcsCb4xkV started
2020-08-03T01:04:12.348475343Z 2020-08-03T01:04:12.348Z INFO Public server started on [::]:28967
2020-08-03T01:04:12.348502760Z 2020-08-03T01:04:12.348Z INFO Private server started on 127.0.0.1:7778
2020-08-03T01:04:12.349881054Z 2020-08-03T01:04:12.349Z INFO trust Scheduling next refresh {"after": "6h29m52.79753552s"}
2020-08-03T01:05:10.021706761Z 2020-08-03T01:05:10.021Z INFO orders.118UWpMCHzs6CvSgWd9BfFVjw5K9pZbJjkfZJexMtSkmKxvvAW sending {"count": 1}
2020-08-03T01:05:10.021901851Z 2020-08-03T01:05:10.021Z INFO orders.12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S sending {"count": 169}
2020-08-03T01:05:10.021934227Z 2020-08-03T01:05:10.021Z INFO orders.121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6 sending {"count": 38}
2020-08-03T01:05:10.022233820Z 2020-08-03T01:05:10.021Z INFO orders.12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs sending {"count": 79}
2020-08-03T01:05:10.025392170Z 2020-08-03T01:05:10.024Z INFO orders.12rfG3sh9NCWiX3ivPjq2HtdLmbqCrvHVEzJubnzFzosMuawymB sending {"count": 4}
2020-08-03T01:05:10.028893406Z 2020-08-03T01:05:10.021Z INFO orders.1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE sending {"count": 14}
2020-08-03T01:05:10.420367105Z 2020-08-03T01:05:10.419Z INFO orders.12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S finished
2020-08-03T01:05:10.513063886Z 2020-08-03T01:05:10.512Z INFO orders.1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE finished
2020-08-03T01:05:10.568901911Z 2020-08-03T01:05:10.568Z INFO orders.12rfG3sh9NCWiX3ivPjq2HtdLmbqCrvHVEzJubnzFzosMuawymB finished
2020-08-03T01:05:10.734171041Z 2020-08-03T01:05:10.733Z INFO orders.118UWpMCHzs6CvSgWd9BfFVjw5K9pZbJjkfZJexMtSkmKxvvAW finished
2020-08-03T01:05:10.750544937Z 2020-08-03T01:05:10.750Z INFO orders.12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs finished
2020-08-03T01:05:10.931540234Z 2020-08-03T01:05:10.931Z INFO orders.121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6 finished
2020-08-03T01:05:20.944215747Z 2020-08-03T01:05:20.943Z ERROR orders archiving orders {"error": "ordersdb error: database is locked", "errorVerbose": "ordersdb error: database is locked\n\tstorj.io/storj/storagenode/storagenodedb.(*ordersDB).archiveOne:238\n\tstorj.io/storj/storagenode/storagenodedb.(*ordersDB).Archive:202\n\tstorj.io/storj/storagenode/orders.(*Service).handleBatches.func2:238\n\tstorj.io/storj/storagenode/orders.(*Service).handleBatches:262\n\tstorj.io/storj/storagenode/orders.(*Service).sendOrders.func1:189\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:57"}
Is there anything I can do to fix this? I haven’t seen any email indicating that I had been locked out.
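In the meantime, is it worth checking whether the SQLite databases themselves are healthy, since the error is "database is locked"? A sketch of the check I'm thinking of, run with the node stopped so nothing else holds the databases open (/app/config/storage is my guess at the mounted database path, and sqlite3 would need to be installed on the host):

```shell
# Stop the container first, e.g.:
#   docker stop -t 300 storagenode   # "storagenode" is my container name
#
# Then run SQLite's integrity check on each storagenode database.
# /app/config/storage is my assumption; adjust to the real storage path.
for db in /app/config/storage/*.db; do
  [ -e "$db" ] || continue     # glob matched nothing: skip
  echo "$db: $(sqlite3 "$db" 'PRAGMA integrity_check;')"
done
```

Each healthy database should report "ok"; anything else would suggest the problem is local corruption rather than a satellite-side lockout.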