Moved to Docker, odd log file errors and Saltlake audit value slowly decreasing

The Saltlake audit score has been decreasing slowly; it was at 98% a week or so ago, after moving from the Windows GUI to a Docker install on Ubuntu. All other satellites are at 100% for this node.

The only other thing I can see is the odd error message in the logs about downloads being cancelled.
Multiple instances of:
downloaded size (0 bytes) does not match received message size (8704 bytes)
And one instance of:
"unknown reason bug in code, please report"

Currently the audit score for Saltlake is 96.26%.

I believe I did the transfer process correctly, with multiple robocopy/rsync passes onto the newly formatted drive, which I should add is a 4TB USB drive, currently acting as an intermediary for getting the Storj data from NTFS to ext4.
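For reference, the rsync side of it was roughly the usual multi-pass approach; the paths below are placeholders rather than my actual mount points:

# early passes run while the node is still up; each pass has less left to copy
rsync -a --progress /mnt/old-drive/storagenode/ /mnt/usb-ext4/storagenode/
# repeat until a pass finishes quickly, then stop the node
# and run one final pass so the copy is complete and consistent
rsync -a --progress /mnt/old-drive/storagenode/ /mnt/usb-ext4/storagenode/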

I would have thought that if it were anything hardware-related it would be reflected on all satellites.

Anything to worry about here…

Cheers

2026-01-13T06:56:23Z	INFO	piecestore	downloaded	{"Process": "storagenode", "Piece ID": "37P4UWRBAZ7KH2JQHMMZYSZDPBUGQWXEBZODMYIKQRHY6S63EIFA", "Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Action": "GET", "Offset": 0, "Size": 5120, "Remote Address": "79.127.205.225:43012"}
2026-01-13T06:56:23Z	INFO	piecestore	downloaded	{"Process": "storagenode", "Piece ID": "AQGOALTROKJAJA3DNTL4HQMFVRF475Q2C6NIO5CR5T2ZP6CV2Q4A", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "GET", "Offset": 0, "Size": 14336, "Remote Address": "109.61.92.86:37124"}
2026-01-13T06:56:25Z	INFO	piecestore	download canceled	{"Process": "storagenode", "Piece ID": "JLBARG2WJPTJCDYFZOV2XDHNTSEZSWHBUWSBEKPTBU73USE7AXLQ", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "GET", "Offset": 0, "Size": 10240, "Remote Address": "79.127.219.44:52972", "reason": "downloaded size (0 bytes) does not match received message size (10240 bytes)"}
2026-01-13T06:56:29Z	INFO	piecestore	downloaded	{"Process": "storagenode", "Piece ID": "ZIJTAJ6NWIXKNX45JISAAYZQCFSYD4JL4H5CCM5UOUQUZB6VVJMA", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "GET", "Offset": 0, "Size": 7680, "Remote Address": "79.127.201.217:59974"}
2026-01-13T06:56:29Z	INFO	piecestore	uploaded	{"Process": "storagenode", "Piece ID": "47YQTWI54I2HHCVH4234C2Q2OMN7GDXEYBSZ7TKDWTKMN5ZA37OA", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "PUT", "Remote Address": "79.127.205.231:53486", "Size": 4608}
2026-01-13T06:56:31Z	INFO	piecestore	uploaded	{"Process": "storagenode", "Piece ID": "X5XBI5D5MQHKW7JYAHLIIY46VKKRTF5G4DAIBVOS7ZFOHXCONOYQ", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "PUT", "Remote Address": "79.127.226.107:40568", "Size": 3584}
2026-01-13T06:56:32Z	INFO	piecestore	downloaded	{"Process": "storagenode", "Piece ID": "GT4YG7LRUTOBMXHTGIGLIHNJRL574VCXVQSEZBRA2O427IHUTCRA", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "GET", "Offset": 0, "Size": 11776, "Remote Address": "79.127.205.236:38382"}
2026-01-13T06:56:35Z	INFO	piecestore	downloaded	{"Process": "storagenode", "Piece ID": "7N33DO3UYYWOYM67AH4ED3UQI2WD2DXPHOCJJYO7LL2LF7J6RKBA", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "GET", "Offset": 0, "Size": 8448, "Remote Address": "109.61.92.71:45870"}
2026-01-13T06:56:36Z	INFO	piecestore	downloaded	{"Process": "storagenode", "Piece ID": "UTT5OC5PE3GRR427CXTZENKADYEZEUMF463S4UXJ3X3N2TBLH6XA", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "GET", "Offset": 0, "Size": 13824, "Remote Address": "109.61.92.87:53748"}
2026-01-13T06:56:37Z	INFO	piecestore	downloaded	{"Process": "storagenode", "Piece ID": "47ZZQFRCCYARUTQ5LDPBDBJTIRMKE2OQ37SPWYR5YRO2RZP5PQPA", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "GET", "Offset": 0, "Size": 7936, "Remote Address": "79.127.205.231:50964"}
2026-01-13T06:56:37Z	INFO	piecestore	downloaded	{"Process": "storagenode", "Piece ID": "SO2YLZNSRGE2BWZWHEWM32QNEYWTC22LVKU4ELRALXHG3NJEJ6WQ", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "GET", "Offset": 0, "Size": 142336, "Remote Address": "109.61.92.75:49186"}
2026-01-13T06:56:40Z	INFO	piecestore	download canceled	{"Process": "storagenode", "Piece ID": "H5NSXJIH2EZ2E7E5SZX7YIU36UJBA4LZNELMVKW7HXYVOPCXFJWA", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "GET", "Offset": 0, "Size": 181248, "Remote Address": "79.127.226.104:34878", "reason": "unknown reason bug in code, please report"}
2026-01-13T06:56:43Z	INFO	piecestore	downloaded	{"Process": "storagenode", "Piece ID": "4IIJSV5YCXQX2LHCQROAH3TJH5EQR34LBEC7D3DQRL2H363HTVRA", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "GET", "Offset": 0, "Size": 9472, "Remote Address": "109.61.92.71:45470"}

The audit score is affected by actually lost files (look for GET_AUDIT failures) or by failed repairs. The latter can recover. But you need to find out what's going on; an audit score of 100% should be the normal state.
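For example, something along these lines will show failed audits in the Docker logs (assuming the container is named storagenode and it logs to Docker rather than to a file):

# failed audits show up as ERROR lines with Action GET_AUDIT
docker logs storagenode 2>&1 | grep GET_AUDIT | grep -E 'failed|ERROR'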


A 96% audit score means that the node gets disqualified.
If it's the online score, that's another story.

Well, it's on 96.2% now, so I guess I keep my fingers crossed overnight. Does this mean that if it gets disqualified I should just close it down and write the Docker move off as a failed experiment? Is that the end of it? Seems a tad unfair when the other three satellites are more than fine with my data.

This is what I'm looking at:

No finger crossing required. Look in the logs for the reason for the failures.

Or it could be a glitch due to very low traffic on this satellite; I would ignore it unless you see something in the logs. Actual customers use the three other satellites.

Docker has nothing to do with this.


96.03%

I have plenty of these GET_AUDIT lines:

2026-01-13T06:40:59Z	INFO	piecestore	downloaded	{"Process": "storagenode", "Piece ID": "466OONM76WLGQMIIDYR3CVHBO47APH4GBB4J2IPCFWCXRCKJXWZA", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "GET_AUDIT", "Offset": 107008, "Size": 256, "Remote Address": "35.212.4.255:48306"}

Nothing about errors or failures, though.

So, a pleasant surprise: still in the game at 96.14%.

I have these messages in the log:

2026-01-09T17:11:10Z	INFO	reputation:service	node scores updated	{"Process": "storagenode", "Satellite ID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE", "Total Audits": 7971, "Successful Audits": 7897, "Audit Score": 1, "Online Score": 0.99916130835896, "Suspension Score": 1, "Audit Score Delta": 0, "Online Score Delta": 0, "Suspension Score Delta": 0}
2026-01-09T17:11:10Z	INFO	piecestore	downloaded	{"Process": "storagenode", "Piece ID": "33VKQAXSVSTN5C2635UEDBPRUPGF3RH6L6YKY7ZS6BLBM2FKGJ7A", "Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Action": "GET_REPAIR", "Offset": 0, "Size": 8448, "Remote Address": "45.140.189.201:38850"}
2026-01-09T17:11:11Z	WARN	reputation:service	node scores worsened	{"Process": "storagenode", "Satellite ID": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6", "Total Audits": 642656, "Successful Audits": 640792, "Audit Score": 1, "Online Score": 0.9952838899226958, "Suspension Score": 1, "Audit Score Delta": 0, "Online Score Delta": -0.004687461992437192, "Suspension Score Delta": 0}
2026-01-09T17:11:12Z	INFO	reputation:service	node scores updated	{"Process": "storagenode", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Total Audits": 1126460, "Successful Audits": 1120835, "Audit Score": 1, "Online Score": 0.9990001089108991, "Suspension Score": 1, "Audit Score Delta": 0, "Online Score Delta": 0, "Suspension Score Delta": 0}
2026-01-09T17:11:13Z	WARN	reputation:service	node scores worsened	{"Process": "storagenode", "Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Total Audits": 558229, "Successful Audits": 556399, "Audit Score": 1, "Online Score": 0.9999282347797012, "Suspension Score": 1, "Audit Score Delta": 0, "Online Score Delta": -0.0000428311323785735, "Suspension Score Delta": 0}
2026-01-09T17:11:13Z	INFO	piecestore	uploaded	{"Process": "storagenode", "Piece ID": "T6UA2JRPR7P42MBRLAC5HEX7ODBGX7WWJDT7ZXW4AENKQ436NLVQ", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "PUT", "Remote Address": "156.146.43.227:56246", "Size": 3072}

These seem to explain what's happening, but not why…
The other satellites are still on 100% audit.

As far as data is concerned, Saltlake is only holding 18GB whereas US1 is over 1TB, so from that point of view I guess I shouldn't lose any sleep…

Is Saltlake a bit more temperamental, maybe? I can't see how this can be anything on my end when the other three are fine, but I stand to be corrected…

Can I upload a log file somewhere, or is there not much point in that?

You can use this guide to troubleshoot:

OK, further investigation shows that the 4TB USB drive I'm using as an intermediary for getting the Storj data from NTFS to ext4 is actually SMR, so I guess that explains it.
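For anyone else wanting to check their drive: one way is to list the drive model and then look it up against the manufacturer's CMR/SMR lists, for example:

# -d lists whole disks only; the MODEL column gives the part number to look up
lsblk -d -o NAME,MODEL,SIZE,ROTA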

Currently sitting on a 96.30% audit score for Saltlake, so slightly better. I will start the process of transferring the data back to the original CMR drive and keep my fingers crossed…

Is the copy process over the next few days likely to impact the audit score negatively, and/or should I wait a little and see if I can get a bit more breathing room in the audit score first? Or is it not likely to recover at all?

Are you running your node in the new location while the copy process to it has not yet finished?
If so, please shut it down immediately, otherwise it will be disqualified. And do not try to run it in the old location either, as the two copies now differ too much. You also must not run the copy process with the deletion option (files missing from the source would be deleted from the destination, and you do not want a disqualification). You should now copy only the files missing from the destination, so that you do not delete customer data which your node received while it was running in the destination; see the sketch below.
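With rsync that means leaving out --delete; something like --ignore-existing copies only the files that are missing from the destination (paths below are placeholders):

# --ignore-existing: only copy files absent from the destination
# no --delete: nothing already in the destination is ever removed
rsync -a --progress --ignore-existing /mnt/source/storagenode/ /mnt/destination/storagenode/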

Thanks for the help, Alexey. All the data was fully copied over to the new destination using rsync, and the node was shut down correctly before running the final rsync. Everything went surprisingly well; the node came online and it looked like everything was happy… that was around a week or so ago…

I am confident the data is/was as good as on the old installation… it must be that SMR drive that has upset Saltlake. I'm going to start copying the data back to the original drive now; hopefully the audit score will hang in there long enough for this to complete.

In hindsight I probably should have just bought another drive, so I only had the one data transfer to make… The time I waited this week making sure the new install was stable and working correctly has kind of worked against me in this case…

The problem could be the USB interface or the USB cable; the data may already have been corrupted when you copied it.
If you still have the source data, I would suggest running a dry run with hash checks. It should be a comparison with the original, not with the intermediate, because the intermediate could be where the corruption happened.
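With rsync, such a check could look something like this (paths are placeholders):

# --dry-run makes no changes; --checksum compares file contents rather than size/mtime
# --itemize-changes lists every file that differs, so corrupted pieces show up in the output
rsync -a --dry-run --checksum --itemize-changes /mnt/original-source/storagenode/ /mnt/new-drive/storagenode/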
