Problems after suspension

Hi everyone,
I’m asking for help because my node is behaving weirdly after being suspended. First, here’s the full story.
I had two nodes back at home and decided to bring one with me when I moved for uni. What I didn’t realize is that my new internet connection was behind a NAT, so no port forwarding was possible. I called my network provider and they gave me a public IP straight away and for free, which was good. However, it took the better part of a month to get it done, so the node got suspended.
I have now restarted the node and got emails that I was suspended on all satellites, which was to be expected. However, I’m now getting traffic only from one satellite (ap1) and nothing from the others. I’ll attach the logs from the first time I started the node back up (without a public IP, so no traffic) and from after figuring out the port forwarding.

Before port forwarding

2021-09-06T19:44:02.859Z        FATAL   Unrecoverable error     {"error": "storagenode configuration already exists (/app/config)"}
2021-09-06T19:51:09.643Z        INFO    Configuration loaded    {"Location": "/app/config/config.yaml"}
2021-09-06T19:51:09.904Z        INFO    Operator email  {"Address": "gaby29999@gmail.com"}
2021-09-06T19:51:09.904Z        INFO    Operator wallet {"Address": "*****"}
2021-09-06T19:51:10.472Z        INFO    Telemetry enabled       {"instance ID": "*****"}
2021-09-06T19:51:10.995Z        INFO    db.migration    Database Version        {"version": 53}
2021-09-06T19:51:11.667Z        WARN    trust   Failed to fetch URLs from source; used cache    {"source": "https://www.storj.io/dcs-satellites", "error": "HTTP source: Get \"https://www.storj.io/dcs-satellites\": x509: certificate has expir>
2021-09-06T19:51:11.681Z        INFO    preflight:localtime     start checking local system clock with trusted satellites' system clock.
2021-09-06T19:51:31.689Z        ERROR   preflight:localtime     unable to get satellite system time     {"Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "error": "rpc: dial tcp 35.202.157.8:7777: i/o timeout", "errorVe>
2021-09-06T19:51:31.689Z        ERROR   preflight:localtime     unable to get satellite system time     {"Satellite ID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE", "error": "rpc: dial tcp 34.102.123.130:7777: i/o timeout", "errorV>
2021-09-06T19:51:31.689Z        ERROR   preflight:localtime     unable to get satellite system time     {"Satellite ID": "12tRQrMTWUWwzwGh18i7Fqs67kmdhH9t6aToeiwbo5mfS2rUmo", "error": "rpc: dial tcp 34.123.107.76:7777: i/o timeout", "errorVe>
2021-09-06T19:51:31.690Z        ERROR   preflight:localtime     unable to get satellite system time     {"Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "error": "rpc: dial tcp 34.141.33.112:7777: i/o timeout", "errorV>
2021-09-06T19:51:31.690Z        ERROR   preflight:localtime     unable to get satellite system time     {"Satellite ID": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6", "error": "rpc: dial tcp 34.92.229.119:7777: i/o timeout", "errorV>
2021-09-06T19:51:31.690Z        ERROR   preflight:localtime     unable to get satellite system time     {"Satellite ID": "12rfG3sh9NCWiX3ivPjq2HtdLmbqCrvHVEzJubnzFzosMuawymB", "error": "rpc: dial tcp 34.88.199.144:7777: i/o timeout", "errorV>
2021-09-06T19:51:31.690Z        ERROR   Failed preflight check. {"error": "system clock is out of sync: system clock is out of sync with all trusted satellites", "errorVerbose": "system clock is out of sync: system clock is out of sync with >
2021-09-06T19:51:32.821Z        FATAL   Unrecoverable error     {"error": "system clock is out of sync: system clock is out of sync with all trusted satellites", "errorVerbose": "system clock is out of sync: system clock is out of sync with >
2021-09-06T19:51:40.818Z        INFO    Configuration loaded    {"Location": "/app/config/config.yaml"}

After port forwarding

2021-11-06T17:55:15.335Z        FATAL   Unrecoverable error     {"error": "Error creating tables for master database on storagenode: migrate: sql: transaction has already been committed or rolled back\n\tstorj.io/private/dbutil/txutil.WithTx>
2021-11-17T13:38:03.378Z        INFO    Configuration loaded    {"Location": "/app/config/config.yaml"}
2021-11-17T13:38:03.435Z        INFO    Operator email  {"Address": "gaby29999@gmail.com"}
2021-11-17T13:38:03.435Z        INFO    Operator wallet {"Address": "*****"}
2021-11-17T13:38:04.190Z        INFO    Telemetry enabled       {"instance ID": "*****"}
2021-11-17T13:38:04.271Z        INFO    db.migration    Database Version        {"version": 53}
2021-11-17T13:38:05.035Z        INFO    preflight:localtime     start checking local system clock with trusted satellites' system clock.
2021-11-17T13:38:05.775Z        INFO    preflight:localtime     local system clock is in sync with trusted satellites' system clock.
2021-11-17T13:38:05.776Z        INFO    bandwidth       Performing bandwidth usage rollups
2021-11-17T13:38:05.776Z        INFO    trust   Scheduling next refresh {"after": "3h16m3.250616407s"}
2021-11-17T13:38:05.776Z        INFO    Node ***** started
2021-11-17T13:38:05.781Z        INFO    Public server started on [::]:28967
2021-11-17T13:38:05.781Z        INFO    Private server started on 127.0.0.1:7778
2021-11-17T13:38:06.969Z        INFO    collector       delete expired  {"Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Piece ID": "VRO2NX4GSWM7S5G3PPR62BHSR26U2IZZ7UJ2KSU4ROSUTPRFJWNQ"}
2021-11-17T13:38:07.048Z        INFO    collector       delete expired  {"Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Piece ID": "K7UL7BCT3IKQHRIVDEX66L45TZKXGIYFXF6VW2GG6FNBFGAVXVQQ"}
2021-11-17T13:38:07.126Z        INFO    collector       delete expired  {"Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Piece ID": "WRHLV3QZHMIJFYBE32SNWG4YUXEFSTVJQ264ZIS4QOWA3LUOZAMA"}
2021-11-17T13:38:07.154Z        INFO    collector       delete expired  {"Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Piece ID": "XGSDLFLKKI5LRJ57BIPANDZVCPBZHL7WH6HZ4UBQZKBZ4Z52EQLQ"}
2021-11-17T13:38:07.289Z        INFO    collector       delete expired  {"Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Piece ID": "YQGEHIA2WWXLTJVQ5NXMBTXAKOQ5TF4VKNFVXVGSNE5MYHN7IO3Q"}
2021-11-17T13:38:07.354Z        INFO    collector       delete expired  {"Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Piece ID": "MFIOFY4HJUDXWJIHTXTAY74XF5GBJ4RVD2PB5YRHYTEY6BNKQRHQ"}
2021-11-17T13:38:07.489Z        INFO    collector       delete expired  {"Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Piece ID": "IAIFNTBZPD5UP57QSQRKB2YQJY76KUQRJJEFMXEYOL4MHQWTJESQ"}
2021-11-17T13:38:07.522Z        INFO    collector       delete expired  {"Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Piece ID": "C4GF6D7CX6PYFVN725BI4VKOYSQAPQOX7DH6BIUPIRKPCQWNEAJQ"}

LOTS OF DELETE EXPIRED

2021-11-17T13:38:11.046Z        INFO    collector       delete expired  {"Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Piece ID": "2KLY2MK7NCEFWXICIEW4TEKSRRSMMKKPRWRCJVSOOTQ4BIVP3WLQ"}
2021-11-17T13:38:11.068Z        WARN    contact:service Your node is still considered to be online but encountered an error.    {"Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Error": "contact: failed to dial storage>
2021-11-17T13:38:11.100Z        INFO    collector       delete expired  {"Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Piece ID": "ODH3U5LR6MX6QZX32P6LJWK6O4XAGXIBJP2YYZDKV6FBEBGFONUA"}
2021-11-17T13:38:11.103Z        INFO    collector       delete expired  {"Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Piece ID": "VR7IGUZI6BLAG6ZEYU3NBOQC64H5INHNNNJJ2TKKEZN2IKR5S4WQ"}
2021-11-17T13:38:11.122Z        WARN    contact:service Your node is still considered to be online but encountered an error.    {"Satellite ID": "12rfG3sh9NCWiX3ivPjq2HtdLmbqCrvHVEzJubnzFzosMuawymB", "Error": "contact: failed to dial storage>
2021-11-17T13:38:11.156Z        INFO    collector       delete expired  {"Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Piece ID": "N66L5ZRR3JOR5BHIZI7DJVNDM6C72PZ4LNOFXES5VA75TUFZ2PPA"}
2021-11-17T13:38:11.175Z        INFO    collector       delete expired  {"Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Piece ID": "MQDVSPPXQBF2M47VYIQ57W6ECPPIR4QWBAUOJOX5VTPXJP6B6CQA"}
2021-11-17T13:38:11.237Z        INFO    collector       delete expired  {"Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Piece ID": "F6TYZ3RWVM7IAI4LNYPNOES34R4WB2Q45AAPAECLJC44REL3GI5A"}
2021-11-17T13:38:11.259Z        INFO    collector       delete expired  {"Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Piece ID": "CN3EJYPZZA3GWRMAVVJ2B7EA4XPEXXO6KDBKC7EMBRPMZENPYXXQ"}
2021-11-17T13:38:11.277Z        INFO    collector       delete expired  {"Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Piece ID": "O4SQTE5CVGOMIPOWOUHF26O3PNRVVSMQYV6KDBEN34FR5F4OXTCA"}
2021-11-17T13:38:11.358Z        INFO    collector       delete expired  {"Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Piece ID": "2AQMCE5KRTRONTM3JTI2KIMLOYA7G4P3UQIWCIINODMVR3HFY2PA"}
2021-11-17T13:38:11.493Z        INFO    collector       delete expired  {"Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Piece ID": "MTPTQ2JFRPGLFEK4QVIU4I6JOQ6UEW2EZJVD5NDHEIMPI4TDNE2Q"}
2021-11-17T13:38:11.540Z        WARN    contact:service Your node is still considered to be online but encountered an error.    {"Satellite ID": "12tRQrMTWUWwzwGh18i7Fqs67kmdhH9t6aToeiwbo5mfS2rUmo", "Error": "contact: failed to dial storage >
2021-11-17T13:38:11.606Z        INFO    collector       delete expired  {"Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Piece ID": "VINEUFCZ42SORPXTXNBRU4LH6I26GUTVTZDVGJR6QAXUKRAG4YXQ"}
2021-11-17T13:38:11.676Z        WARN    contact:service Your node is still considered to be online but encountered an error.    {"Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Error": "contact: failed to dial storage>
2021-11-17T13:38:11.712Z        INFO    collector       delete expired  {"Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Piece ID": "DNNYANA2WF4WGCIZNMSXR322Y7OFQFPQ327LG566H3GXEZFIO3RQ"}
2021-11-17T13:38:11.734Z        INFO    collector       delete expired  {"Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Piece ID": "ZEIURWMLD6TLCGEAX6BQFLOVU5RJDSCPC6HZQQAPSJDOSNBZH6FQ"}
2021-11-17T13:38:11.740Z        INFO    collector       delete expired  {"Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Piece ID": "UYB3KX4BN62Z6QM72D6FDAK4T46LFQJQEKIEOHVYKEJHHEVIABMA"}
2021-11-17T13:38:11.812Z        WARN    contact:service Your node is still considered to be online but encountered an error.    {"Satellite ID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE", "Error": "contact: failed to dial storage >
2021-11-17T13:38:12.725Z        WARN    contact:service Your node is still considered to be online but encountered an error.    {"Satellite ID": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6", "Error": "contact: failed to dial storage>
2021-11-17T13:38:12.778Z        INFO    collector       delete expired  {"Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Piece ID": "DUX6GL4JVVPOYOPDNFNY6IL7NUHQTLAEXERH4QHPXZGV2JRVWTMQ"}

LOTS OF DELETE EXPIRED

2021-11-17T13:38:18.191Z        INFO    collector       delete expired  {"Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Piece ID": "45NUP7U5TW665MC7LIMFQCRP7YN657KE3JPCUR2QEWPQHMMZRL7Q"}
2021-11-17T13:38:18.194Z        INFO    collector       delete expired  {"Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Piece ID": "FN3PX4Y73B7CQ7N5WPSKKV5SHS3QSMMHYW6NSBJWGMNQRGMWS3FQ"}
2021-11-17T13:38:18.247Z        INFO    orders.12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs      sending {"count": 187}
2021-11-17T13:38:18.247Z        INFO    orders.12tRQrMTWUWwzwGh18i7Fqs67kmdhH9t6aToeiwbo5mfS2rUmo       sending {"count": 7}
2021-11-17T13:38:18.246Z        INFO    collector       delete expired  {"Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Piece ID": "KD5CP42GNJEBSXDHOMBUORMYOJGVONATQWNDOY5Q6ZD623TJ2SBA"}
2021-11-17T13:38:18.247Z        INFO    orders.1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE       sending {"count": 25}
2021-11-17T13:38:18.247Z        INFO    orders.12rfG3sh9NCWiX3ivPjq2HtdLmbqCrvHVEzJubnzFzosMuawymB      sending {"count": 66}
2021-11-17T13:38:18.247Z        INFO    orders.121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6      sending {"count": 395}
2021-11-17T13:38:18.247Z        INFO    orders.12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S      sending {"count": 2085}
2021-11-17T13:38:18.313Z        INFO    collector       delete expired  {"Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Piece ID": "FT2Q7YGETEWRWM53SNI4OV5NUXBDWQ6K4NDQU7YWAS6UCFECPOCQ"}
2021-11-17T13:38:18.322Z        INFO    collector       delete expired  {"Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Piece ID": "AQKGBOQLXZYQC32JAVOP3EMVOXTJV7UJM44RIVFUDQQ3WLROEAWQ"}
2021-11-17T13:38:18.374Z        INFO    collector       delete expired  {"Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Piece ID": "XJGBISVEBMPYPTAJCEZS43V6SIPKJ3LIPIIAYYF4WWDXLR6DSUSQ"}
2021-11-17T13:38:18.433Z        INFO    orders.12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs      finished
2021-11-17T13:38:18.461Z        INFO    orders.12rfG3sh9NCWiX3ivPjq2HtdLmbqCrvHVEzJubnzFzosMuawymB      finished
2021-11-17T13:38:18.663Z        INFO    orders.12tRQrMTWUWwzwGh18i7Fqs67kmdhH9t6aToeiwbo5mfS2rUmo       finished
2021-11-17T13:38:18.914Z        INFO    orders.1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE       finished
2021-11-17T13:38:19.080Z        INFO    collector       delete expired  {"Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Piece ID": "CRDUQ6PFO6GJEC5QQPGXJC6MYPOQFUNZKEXO5IN6SUW4ZQHURGZA"}
2021-11-17T13:38:19.202Z        INFO    collector       delete expired  {"Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Piece ID": "OW3XTQVQMO6CGBLIVVBWXR7OUSMLOIZUF4DHBN5WPSYQZR7LJ5MQ"}
2021-11-17T13:38:19.220Z        INFO    orders.12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S      finished
2021-11-17T13:38:20.188Z        INFO    orders.121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6      finished
2021-11-17T13:38:20.474Z        INFO    orders.12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs      sending {"count": 53}
2021-11-17T13:38:20.474Z        INFO    orders.12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S      sending {"count": 345}
2021-11-17T13:38:20.474Z        INFO    orders.12rfG3sh9NCWiX3ivPjq2HtdLmbqCrvHVEzJubnzFzosMuawymB      sending {"count": 49}
2021-11-17T13:38:20.474Z        INFO    orders.12tRQrMTWUWwzwGh18i7Fqs67kmdhH9t6aToeiwbo5mfS2rUmo       sending {"count": 1}
2021-11-17T13:38:20.474Z        INFO    orders.1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE       sending {"count": 30}
2021-11-17T13:38:20.474Z        INFO    orders.121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6      sending {"count": 49}
2021-11-17T13:38:20.585Z        INFO    orders.12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs      finished
2021-11-17T13:38:20.677Z        INFO    orders.12rfG3sh9NCWiX3ivPjq2HtdLmbqCrvHVEzJubnzFzosMuawymB      finished
2021-11-17T13:38:20.854Z        INFO    orders.12tRQrMTWUWwzwGh18i7Fqs67kmdhH9t6aToeiwbo5mfS2rUmo       finished
2021-11-17T13:38:21.152Z        INFO    orders.1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE       finished
2021-11-17T13:38:21.364Z        INFO    orders.12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S      finished
2021-11-17T13:38:21.400Z        INFO    orders.121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6      finished
2021-11-17T13:38:21.404Z        INFO    collector       delete expired  {"Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Piece ID": "P3RWZXGPKBIKGZKUTFC76KDKM4JSTWKL2BJFYSZN6UVNHNV3DRWQ"}

LOTS OF DELETE EXPIRED AGAIN

2021-11-17T13:39:40.739Z        INFO    collector       collect {"count": 2669}
2021-11-17T13:48:21.372Z        INFO    Got a signal from the OS: "terminated"
2021-11-17T13:48:21.422Z        ERROR   pieces:trash    emptying trash failed   {"error": "pieces error: filestore error: context canceled", "errorVerbose": "pieces error: filestore error: context canceled\n\tstorj.io/storj/storage/filestore>
2021-11-17T13:48:21.466Z        ERROR   pieces:trash    emptying trash failed   {"error": "pieces error: filestore error: context canceled", "errorVerbose": "pieces error: filestore error: context canceled\n\tstorj.io/storj/storage/filestore>
2021-11-17T13:48:21.467Z        ERROR   pieces:trash    emptying trash failed   {"error": "pieces error: filestore error: context canceled", "errorVerbose": "pieces error: filestore error: context canceled\n\tstorj.io/storj/storage/filestore>
2021-11-17T13:48:21.501Z        ERROR   piecestore:cache        error getting current used space:       {"error": "context canceled"}
2021-11-17T13:48:44.503Z        INFO    Configuration loaded    {"Location": "/app/config/config.yaml"}
2021-11-17T13:48:44.561Z        INFO    Operator email  {"Address": "gaby29999@gmail.com"}
2021-11-17T13:48:44.561Z        INFO    Operator wallet {"Address": "*****"}
2021-11-17T13:48:45.253Z        INFO    Telemetry enabled       {"instance ID": "*****"}
2021-11-17T13:48:45.401Z        INFO    db.migration    Database Version        {"version": 53}
2021-11-17T13:48:46.011Z        INFO    preflight:localtime     start checking local system clock with trusted satellites' system clock.
2021-11-17T13:48:46.809Z        INFO    preflight:localtime     local system clock is in sync with trusted satellites' system clock.
2021-11-17T13:48:46.809Z        INFO    Node ***** started
2021-11-17T13:48:46.809Z        INFO    Public server started on [::]:28967
2021-11-17T13:48:46.809Z        INFO    Private server started on 127.0.0.1:7778
2021-11-17T13:48:46.810Z        INFO    trust   Scheduling next refresh {"after": "8h39m6.818440478s"}
2021-11-17T13:48:46.811Z        INFO    bandwidth       Performing bandwidth usage rollups
2021-11-17T13:48:51.955Z        WARN    contact:service Your node is still considered to be online but encountered an error.    {"Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Error": "contact: failed to dial storage>
2021-11-17T13:48:52.145Z        WARN    contact:service Your node is still considered to be online but encountered an error.    {"Satellite ID": "12rfG3sh9NCWiX3ivPjq2HtdLmbqCrvHVEzJubnzFzosMuawymB", "Error": "contact: failed to dial storage>
2021-11-17T13:48:52.495Z        WARN    contact:service Your node is still considered to be online but encountered an error.    {"Satellite ID": "12tRQrMTWUWwzwGh18i7Fqs67kmdhH9t6aToeiwbo5mfS2rUmo", "Error": "contact: failed to dial storage >
2021-11-17T13:48:52.615Z        WARN    contact:service Your node is still considered to be online but encountered an error.    {"Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Error": "contact: failed to dial storage>
2021-11-17T13:48:52.840Z        WARN    contact:service Your node is still considered to be online but encountered an error.    {"Satellite ID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE", "Error": "contact: failed to dial storage >
2021-11-17T13:48:53.415Z        WARN    contact:service Your node is still considered to be online but encountered an error.    {"Satellite ID": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6", "Error": "contact: failed to dial storage>
2021-11-17T13:52:31.998Z        INFO    Got a signal from the OS: "terminated"
2021-11-17T13:52:32.092Z        ERROR   piecestore:cache        error getting current used space:       {"error": "context canceled"}
2021-11-17T13:52:41.211Z        ERROR   pieces:trash    emptying trash failed   {"error": "pieces error: context canceled; context canceled; context canceled; context canceled; context canceled; context canceled; context canceled; context ca>
2021-11-17T13:52:43.882Z        ERROR   pieces:trash    emptying trash failed   {"error": "pieces error: filestore error: context canceled", "errorVerbose": "pieces error: filestore error: context canceled\n\tstorj.io/storj/storage/filestore>
2021-11-17T13:52:43.882Z        ERROR   pieces:trash    emptying trash failed   {"error": "pieces error: filestore error: context canceled", "errorVerbose": "pieces error: filestore error: context canceled\n\tstorj.io/storj/storage/filestore>
2021-11-17T13:53:25.265Z        INFO    Configuration loaded    {"Location": "/app/config/config.yaml"}
2021-11-17T13:53:25.289Z        INFO    Operator email  {"Address": "gaby29999@gmail.com"}
2021-11-17T13:53:25.289Z        INFO    Operator wallet {"Address": "******"}
2021-11-17T13:53:25.920Z        INFO    Telemetry enabled       {"instance ID": "*****"}
2021-11-17T13:53:26.004Z        INFO    db.migration    Database Version        {"version": 53}
2021-11-17T13:53:26.545Z        INFO    preflight:localtime     start checking local system clock with trusted satellites' system clock.
2021-11-17T13:53:27.318Z        INFO    preflight:localtime     local system clock is in sync with trusted satellites' system clock.
2021-11-17T13:53:27.318Z        INFO    bandwidth       Performing bandwidth usage rollups
2021-11-17T13:53:27.320Z        INFO    Node ***** started
2021-11-17T13:53:27.320Z        INFO    Public server started on [::]:28967
2021-11-17T13:53:27.320Z        INFO    Private server started on 127.0.0.1:7778
2021-11-17T13:53:27.322Z        INFO    trust   Scheduling next refresh {"after": "5h39m57.328225098s"}
2021-11-17T14:53:27.318Z        INFO    bandwidth       Performing bandwidth usage rollups
2021-11-17T15:53:27.319Z        INFO    bandwidth       Performing bandwidth usage rollups
2021-11-17T16:30:25.163Z        INFO    piecestore      download started        {"Piece ID": "76TOJPUWA5VVZKAQ4PCUVZQ3XPRUCFFI2AMD5N5HZEX554LIIO4Q", "Satellite ID": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6", "Action": "GET"}
2021-11-17T16:30:25.844Z        INFO    piecestore      downloaded      {"Piece ID": "76TOJPUWA5VVZKAQ4PCUVZQ3XPRUCFFI2AMD5N5HZEX554LIIO4Q", "Satellite ID": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6", "Action": "GET"}

The weird thing is that it was successfully settling orders and deleting pieces for the other satellites until the last restart, but I didn’t change anything in the config.
Now I just get downloads from ap1 and nothing else.
I just restarted the node and here are the logs since the restart:

2021-11-22T20:46:27.199Z        INFO    Configuration loaded    {"Location": "/app/config/config.yaml"}
2021-11-22T20:46:27.224Z        INFO    Operator email  {"Address": "gaby29999@gmail.com"}
2021-11-22T20:46:27.224Z        INFO    Operator wallet {"Address": "*****"}
2021-11-22T20:46:27.937Z        INFO    Telemetry enabled       {"instance ID": "*****"}
2021-11-22T20:46:28.019Z        INFO    db.migration    Database Version        {"version": 53}
2021-11-22T20:46:28.666Z        INFO    preflight:localtime     start checking local system clock with trusted satellites' system clock.
2021-11-22T20:46:29.367Z        INFO    preflight:localtime     local system clock is in sync with trusted satellites' system clock.
2021-11-22T20:46:29.368Z        INFO    Node ***** started
2021-11-22T20:46:29.368Z        INFO    Public server started on [::]:28967
2021-11-22T20:46:29.368Z        INFO    Private server started on 127.0.0.1:7778
2021-11-22T20:46:29.368Z        INFO    trust   Scheduling next refresh {"after": "3h55m3.704248287s"}
2021-11-22T20:46:29.368Z        INFO    bandwidth       Performing bandwidth usage rollups
2021-11-22T20:52:31.176Z        INFO    piecestore      download started        {"Piece ID": "RB2SSZMDD4PPYT4VXHPGWV54AZYAQJMTVP7PPDM67CYIHRDAZ3YA", "Satellite ID": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6", "Action": "GET"}
2021-11-22T20:52:32.292Z        INFO    piecestore      download canceled       {"Piece ID": "RB2SSZMDD4PPYT4VXHPGWV54AZYAQJMTVP7PPDM67CYIHRDAZ3YA", "Satellite ID": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6", "Action": "GET"}
2021-11-22T20:53:31.448Z        INFO    piecestore      download started        {"Piece ID": "3MKLPGUQDY3XRNL3DTKBIGA5WEYTGLKF7UPTNTJNYQ2E5KAVYH7Q", "Satellite ID": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6", "Action": "GET"}
2021-11-22T20:53:31.984Z        INFO    piecestore      downloaded      {"Piece ID": "3MKLPGUQDY3XRNL3DTKBIGA5WEYTGLKF7UPTNTJNYQ2E5KAVYH7Q", "Satellite ID": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6", "Action": "GET"}
2021-11-22T20:56:16.881Z        INFO    piecestore      download started        {"Piece ID": "NRLI7UCPFBTWJIFPAGQ6XID4HANQ2PSEOYFVNXL6HNWLJ4QMJRQQ", "Satellite ID": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6", "Action": "GET"}
2021-11-22T20:56:17.282Z        INFO    piecestore      downloaded      {"Piece ID": "NRLI7UCPFBTWJIFPAGQ6XID4HANQ2PSEOYFVNXL6HNWLJ4QMJRQQ", "Satellite ID": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6", "Action": "GET"}
2021-11-22T21:05:50.893Z        INFO    piecestore      download started        {"Piece ID": "3MJQPQNHQN3MYTW7BKUKHDGTKRW2RS4E4KXWQ6S7ZRBVF3UPDO6A", "Satellite ID": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6", "Action": "GET"}
2021-11-22T21:05:51.482Z        INFO    piecestore      downloaded      {"Piece ID": "3MJQPQNHQN3MYTW7BKUKHDGTKRW2RS4E4KXWQ6S7ZRBVF3UPDO6A", "Satellite ID": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6", "Action": "GET"}
2021-11-22T21:20:02.214Z        INFO    piecestore      download started        {"Piece ID": "GXRYIGAYFGM5CHTZT4GNT264UKXJH3A67ELH4SIYSJ4JJBZDIIAA", "Satellite ID": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6", "Action": "GET"}
2021-11-22T21:20:02.937Z        INFO    piecestore      downloaded      {"Piece ID": "GXRYIGAYFGM5CHTZT4GNT264UKXJH3A67ELH4SIYSJ4JJBZDIIAA", "Satellite ID": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6", "Action": "GET"}
2021-11-22T21:21:02.098Z        INFO    piecestore      download started        {"Piece ID": "47JVHSNZ25BH5GITIRNIW2E4APQK6ZPEI7OTAMQLGJIENRUWWMFA", "Satellite ID": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6", "Action": "GET"}
2021-11-22T21:21:02.484Z        INFO    piecestore      downloaded      {"Piece ID": "47JVHSNZ25BH5GITIRNIW2E4APQK6ZPEI7OTAMQLGJIENRUWWMFA", "Satellite ID": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6", "Action": "GET"}
2021-11-22T21:31:17.720Z        INFO    piecestore      download started        {"Piece ID": "VH6QK6YZT3ZN5KQTGT6HVWNF3VNNJNQIKKVULJRV7NWNII4H5PMA", "Satellite ID": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6", "Action": "GET"}
2021-11-22T21:31:18.289Z        INFO    piecestore      downloaded      {"Piece ID": "VH6QK6YZT3ZN5KQTGT6HVWNF3VNNJNQIKKVULJRV7NWNII4H5PMA", "Satellite ID": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6", "Action": "GET"}
2021-11-22T21:33:29.776Z        INFO    piecestore      download started        {"Piece ID": "RFUDAZ6R3PPLC46MN2NYEVAAAKZIXQ3MISCQEXGRNS2DJ6KW2AFA", "Satellite ID": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6", "Action": "GET"}
2021-11-22T21:33:30.126Z        INFO    piecestore      downloaded      {"Piece ID": "RFUDAZ6R3PPLC46MN2NYEVAAAKZIXQ3MISCQEXGRNS2DJ6KW2AFA", "Satellite ID": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6", "Action": "GET"}

Also here’s what the dashboard says:

Online score on ap1 has been growing steadily over the last few days.

So if anyone has an idea of what could cause this, I’m open to suggestions. I looked around on the forum but didn’t find anything.

Thanks a lot!

Since the online score is zero, there should only be GET_AUDIT requests from the other satellites.
However, this WARN bothers me: the x509 "certificate has expired" error when fetching the satellite list.

This makes me believe that you had an issue with the underlying OS, such as outdated root certificates.
I would suggest updating your OS with the latest available updates.
It also looks like your storagenode is outdated. Please update it and configure automatic updates.
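For a docker node, automatic updates are usually handled with watchtower; a rough sketch, assuming your container is named storagenode as in the commands later in this thread:

docker run -d --restart=always --name watchtower \
  -v /var/run/docker.sock:/var/run/docker.sock \
  storjlabs/watchtower storagenode watchtower --stop-timeout 300s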

Please remove the trust-cache.json file (it’s in the data location) and restart the node.
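With a docker setup like the one in this thread (container named storagenode, config mounted at /app/config), that would be roughly:

docker exec -it storagenode rm /app/config/trust-cache.json
docker restart storagenode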


Hi Alexey,
thanks a lot for the quick reply. I updated my OS as suggested and also updated the storagenode, but I still get the same behavior. Even more weirdly, now I can’t establish a connection even with ap1, which used to work before the restart.
What I might try is spinning up a new node (which I wanted to do anyway) and seeing how it behaves.
Here are the logs since the restart:

2021-11-23T16:34:45.017Z        INFO    db.migration    Database Version        {"version": 53}
2021-11-23T16:34:45.636Z        INFO    preflight:localtime     start checking local system clock with trusted satellites' system clock.
2021-11-23T16:34:46.435Z        INFO    preflight:localtime     local system clock is in sync with trusted satellites' system clock.
2021-11-23T16:34:46.436Z        INFO    bandwidth       Performing bandwidth usage rollups
2021-11-23T16:34:46.436Z        INFO    Node ***** started
2021-11-23T16:34:46.436Z        INFO    Public server started on [::]:28967
2021-11-23T16:34:46.436Z        INFO    Private server started on 127.0.0.1:7778
2021-11-23T16:34:46.437Z        INFO    trust   Scheduling next refresh {"after": "7h15m48.057168696s"}
2021-11-23T16:34:46.561Z        ERROR   contact:service ping satellite failed   {"Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "attempts": 1, "error": "ping satellite: check-in ratelimit: node rate limited by id", "errorVerbose": "ping satellite: check-in ratelimit: node rate limited by id\n\tstorj.io/storj/storagenode/contact.(*Service).pingSatelliteOnce:138\n\tstorj.io/storj/storagenode/contact.(*Service).pingSatellite:95\n\tstorj.io/storj/storagenode/contact.(*Chore).updateCycles.func1:87\n\tstorj.io/common/sync2.(*Cycle).Run:92\n\tstorj.io/common/sync2.(*Cycle).Start.func1:71\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:57"}
2021-11-23T16:34:46.795Z        ERROR   contact:service ping satellite failed   {"Satellite ID": "12tRQrMTWUWwzwGh18i7Fqs67kmdhH9t6aToeiwbo5mfS2rUmo", "attempts": 1, "error": "ping satellite: check-in ratelimit: node rate limited by id", "errorVerbose": "ping satellite: check-in ratelimit: node rate limited by id\n\tstorj.io/storj/storagenode/contact.(*Service).pingSatelliteOnce:138\n\tstorj.io/storj/storagenode/contact.(*Service).pingSatellite:95\n\tstorj.io/storj/storagenode/contact.(*Chore).updateCycles.func1:87\n\tstorj.io/common/sync2.(*Cycle).Run:92\n\tstorj.io/common/sync2.(*Cycle).Start.func1:71\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:57"}
2021-11-23T16:34:51.765Z        WARN    contact:service Your node is still considered to be online but encountered an error.    {"Satellite ID": "12rfG3sh9NCWiX3ivPjq2HtdLmbqCrvHVEzJubnzFzosMuawymB", "Error": "contact: failed to dial storage node (ID: 1D3g9dPdep3DytdSDZLSHcSM6q36JwsKj963cuMaCtpV9bxUc4) at address 93.51.1.212:28967 using QUIC: rpc: quic: timeout: no recent network activity"}
2021-11-23T16:34:52.228Z        WARN    contact:service Your node is still considered to be online but encountered an error.    {"Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Error": "contact: failed to dial storage node (ID: 1D3g9dPdep3DytdSDZLSHcSM6q36JwsKj963cuMaCtpV9bxUc4) at address 93.51.1.212:28967 using QUIC: rpc: quic: timeout: no recent network activity"}
2021-11-23T16:34:52.475Z        WARN    contact:service Your node is still considered to be online but encountered an error.    {"Satellite ID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE", "Error": "contact: failed to dial storage node (ID: 1D3g9dPdep3DytdSDZLSHcSM6q36JwsKj963cuMaCtpV9bxUc4) at address 93.51.1.212:28967 using QUIC: rpc: quic: timeout: no recent network activity"}
2021-11-23T16:34:52.791Z        WARN    contact:service Your node is still considered to be online but encountered an error.    {"Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Error": "contact: failed to dial storage node (ID: 1D3g9dPdep3DytdSDZLSHcSM6q36JwsKj963cuMaCtpV9bxUc4) at address 93.51.1.212:28967 using QUIC: rpc: quic: timeout: no recent network activity"}
2021-11-23T16:34:53.040Z        WARN    contact:service Your node is still considered to be online but encountered an error.    {"Satellite ID": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6", "Error": "contact: failed to dial storage node (ID: 1D3g9dPdep3DytdSDZLSHcSM6q36JwsKj963cuMaCtpV9bxUc4) at address 93.51.1.212:28967 using QUIC: rpc: quic: timeout: no recent network activity"}
2021-11-23T16:34:53.468Z        WARN    contact:service Your node is still considered to be online but encountered an error.    {"Satellite ID": "12tRQrMTWUWwzwGh18i7Fqs67kmdhH9t6aToeiwbo5mfS2rUmo", "Error": "contact: failed to dial storage node (ID: 1D3g9dPdep3DytdSDZLSHcSM6q36JwsKj963cuMaCtpV9bxUc4) at address 93.51.1.212:28967 using QUIC: rpc: quic: timeout: no recent network activity"}

Could you elaborate on the root certificates? I looked it up online but couldn’t figure out what to do.

At the moment, just wait. It should check in on all satellites and start working.
You can also fix the issue with QUIC:
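In short, that usually means publishing UDP 28967 alongside TCP in your docker run command (e.g. -p 28967:28967/udp), forwarding it on the router, and raising the UDP receive buffer on the host; a rough sketch for a Debian/Ubuntu host:

sudo sysctl -w net.core.rmem_max=2500000
echo 'net.core.rmem_max=2500000' | sudo tee -a /etc/sysctl.conf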

If you use a Linux distribution, it comes with trusted root certificates (the ca-certificates package in Debian, for example), so when you install updates to your OS, the root certificates should be updated too.
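On Debian/Ubuntu you can also refresh them explicitly, for example:

sudo apt update && sudo apt install --only-upgrade ca-certificates
sudo update-ca-certificates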

Do you think it will check in with the satellites by itself?
It seems to be running fine on ap1 for the moment and the online score is up to 25%.
Another weird thing is that I got suspended on all satellites (I got the email when I started the node back up), but the dashboard still reports a suspension score of 100%.
I’ll let it run and keep you updated if something changes. Thanks for the help!

The suspension score should be 100%; if it drops below 60%, your node will be suspended because of unknown audit errors (not for a low online score).
The purpose of the suspension score is to surface unknown audit errors without quick disqualification, and it is unrelated to the online score.
So there are three independent scores:

  • the suspension score protects against unknown audit errors (for example, a GET_REPAIR that failed because of an invalid signature); all known errors affect the audit score directly. If the suspension score falls below 60%, your node will be suspended and will stop receiving any ingress until it grows back above 60%;
  • the online score measures how good your uptime is; if it falls below 60%, the node will be suspended;
  • the audit score measures how healthy the data on your node is; if it falls below 60%, the node will be disqualified.

Nice, thanks for the clarifications!


So now the question is: once a node is back online (mine was off due to a DNS issue I didn’t notice while out of town for three weeks), how long does it take for the online score to recover? Mine is currently at 15%. And at what point is the suspension removed? Also, is the removal automatic, or do the gods of Storj need to take action to unsuspend?

From my understanding, the online score should be back up to 100% after a month of uptime, and the node is unsuspended at a 60% online score.


Too bad it takes so long to recover. Seems like an overly severe penalty, in my opinion.

Generally there isn’t much in the way of penalties as far as I know; the only penalty a node can get is no ingress, and that should quickly recover after the node starts operating correctly again.

The online score will look terrible for a long time though, but that’s just because it’s basically the average online time over the last 30 days.
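
Rough numbers, assuming the score is just the online fraction of that trailing 30-day window: a full month offline gives 0/30 = 0%, a week back online gives about 7/30 ≈ 23%, and the 60% mark is crossed at around 18/30, i.e. roughly 18 days of continuous uptime.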

I don’t think a 60% online score is required for normal operation, but it’s possible… and that would mean it takes about 20 days until one gets ingress again.

But of course it also takes about 30 days of downtime to reach a 0% online score.
The node will still earn during suspension, it just won’t grow.

And beyond 30 days of downtime the node should get disqualified, so one might be happy it didn’t end up getting DQed.

Most likely the limit is still longer even though Storj Labs has said it was set to 30 days; they usually move slowly when implementing such things.

Hi I’m back again.
The node reached a 60% online score on ap1, but there is still zero communication with the other 5 satellites. It’s as if it forgot about them. The other node I set up is working flawlessly, so it shouldn’t be a network problem.
Is there any way to figure out what’s going on? Here are the logs after a restart.

2021-12-07T14:49:00.726Z        INFO    Configuration loaded    {"Location": "/app/config/config.yaml"}
2021-12-07T14:49:00.741Z        INFO    Operator email  {"Address": "gaby29999@gmail.com"}
2021-12-07T14:49:00.741Z        INFO    Operator wallet {"Address": "*****"}
2021-12-07T14:49:01.365Z        INFO    Telemetry enabled       {"instance ID": "*****"}
2021-12-07T14:49:01.452Z        INFO    db.migration    Database Version        {"version": 53}
2021-12-07T14:49:02.125Z        INFO    preflight:localtime     start checking local system clock with trusted satellites' system clock.
2021-12-07T14:49:02.994Z        INFO    preflight:localtime     local system clock is in sync with trusted satellites' system clock.
2021-12-07T14:49:02.995Z        INFO    bandwidth       Performing bandwidth usage rollups
2021-12-07T14:49:02.995Z        INFO    Node ***** started
2021-12-07T14:49:03.001Z        INFO    Public server started on [::]:28967
2021-12-07T14:49:03.001Z        INFO    Private server started on 127.0.0.1:7778
2021-12-07T14:49:02.996Z        INFO    trust   Scheduling next refresh {"after": "3h10m20.654425301s"}
2021-12-07T14:49:03.432Z        INFO    piecedeleter    delete piece sent to trash      {"Satellite ID": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6", "Piece ID": "F6JNTU2LQXMZZSXFXDO7Z2M7TMWAXQO3C77CLXE7CXWEZ5UFXVEA"}
2021-12-07T14:49:04.168Z        INFO    orders.121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6      sending {"count": 143}
2021-12-07T14:49:05.771Z        INFO    orders.121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6      finished
2021-12-07T14:49:39.382Z        INFO    piecestore      download started        {"Piece ID": "ZEN3K2QK72DYJIGEUXBZBY42KTBJ5N6UQAVX2BVRIISBF5NCSOEA", "Satellite ID": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6", "Action": "GET"}
2021-12-07T14:49:41.011Z        INFO    piecestore      downloaded      {"Piece ID": "ZEN3K2QK72DYJIGEUXBZBY42KTBJ5N6UQAVX2BVRIISBF5NCSOEA", "Satellite ID": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6", "Action": "GET"}

What do you have in the trust-cache.json file? It should be near the config.yaml.
For the docker version it is in /app/config inside the container.

docker exec -it storagenode cat config/trust-cache.json

Please show the result of this command:

docker info

Hi Alexey, thanks for the swift reply.
trust-cache.json contains this:

{
  "entries": {
    "https://www.storj.io/dcs-satellites": [
      {
        "SatelliteURL": {
          "id": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S",
          "host": "us1.storj.io",
          "port": 7777
        },
        "authoritative": false
      },
      {
        "SatelliteURL": {
          "id": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs",
          "host": "eu1.storj.io",
          "port": 7777
        },
        "authoritative": false
      },
      {
        "SatelliteURL": {
          "id": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6",
          "host": "ap1.storj.io",
          "port": 7777
        },
        "authoritative": false
      },
      {
        "SatelliteURL": {
          "id": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE",
          "host": "saltlake.tardigrade.io",
          "port": 7777
        },
        "authoritative": false
      },
      {
        "SatelliteURL": {
          "id": "12rfG3sh9NCWiX3ivPjq2HtdLmbqCrvHVEzJubnzFzosMuawymB",
          "host": "europe-north-1.tardigrade.io",
          "port": 7777
        },
        "authoritative": false
      },
      {
        "SatelliteURL": {
          "id": "12tRQrMTWUWwzwGh18i7Fqs67kmdhH9t6aToeiwbo5mfS2rUmo",
          "host": "us2.storj.io",
          "port": 7777
        },
        "authoritative": false
      }
    ]
  }
}

As for docker info, it returned:

Client:
 Context:    default
 Debug Mode: false

Server:
 Containers: 3
  Running: 3
  Paused: 0
  Stopped: 0
 Images: 6
 Server Version: 20.10.7
 Storage Driver: overlay2
  Backing Filesystem: extfs
  Supports d_type: true
  Native Overlay Diff: true
  userxattr: false
 Logging Driver: json-file
 Cgroup Driver: cgroupfs
 Cgroup Version: 1
 Plugins:
  Volume: local
  Network: bridge host ipvlan macvlan null overlay
  Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
 Swarm: inactive
 Runtimes: runc io.containerd.runc.v2 io.containerd.runtime.v1.linux
 Default Runtime: runc
 Init Binary: docker-init
 containerd version:
 runc version:
 init version:
 Security Options:
  apparmor
  seccomp
   Profile: default
 Kernel Version: 4.9.277-75
 Operating System: Ubuntu 20.04.3 LTS
 OSType: linux
 Architecture: aarch64
 CPUs: 4
 Total Memory: 3.625GiB
 Name: odroid
 ID: TCGC:46NS:SZ5F:MHX2:FSEK:VKVU:CEN2:QMT4:D4O6:TD3W:O35E:GQSP
 Docker Root Dir: /var/lib/docker
 Debug Mode: false
 Registry: https://index.docker.io/v1/
 Labels:
 Experimental: false
 Insecure Registries:
  127.0.0.0/8
 Live Restore Enabled: false

WARNING: No swap limit support
WARNING: No cpu cfs quota support
WARNING: No cpu cfs period support
WARNING: No blkio throttle.read_bps_device support
WARNING: No blkio throttle.write_bps_device support
WARNING: No blkio throttle.read_iops_device support
WARNING: No blkio throttle.write_iops_device support

Do you have any other satellites in your logs?
Please search for the last 5 mentions of each (you need to have jq installed):

for s in $(docker exec storagenode wget -qO - localhost:14002/api/sno | jq -r '.satellites[].id'); do
  docker logs storagenode 2>&1 | grep "$s" | tail -n 5
done

Hi Alexey,
the command doesn’t return anything. The last mention of the other satellites I could find is in the logs I included in my previous posts.
One interesting thing is that when I try to do a graceful exit, it tells me I’m disqualified, even though I didn’t get the disqualification email. I tried running the earnings estimation script and it told me the database was corrupt, so that might have something to do with it.
Anyway, I don’t think this node is worth the trouble given that it has hardly any held-back amount and very little data stored. I’ll leave it at that and delete the node unless you have some interest in investigating what happened to it.