C:\WINDOWS\system32>chkdsk f: /f /r /x
The type of the file system is NTFS.
Volume label is New Volume.
Stage 1: Examining basic file system structure …
8936960 file records processed.
File verification completed.
Phase duration (File record verification): 2.81 minutes.
142921 large file records processed.
Phase duration (Orphan file record recovery): 0.00 milliseconds.
0 bad file records processed.
Phase duration (Bad file record checking): 0.65 milliseconds.
Stage 2: Examining file name linkage …
5 reparse records processed.
8962672 index entries processed.
Index verification completed.
Phase duration (Index verification): 6.40 minutes.
0 unindexed files scanned.
Phase duration (Orphan reconnection): 4.52 seconds.
0 unindexed files recovered to lost and found.
Phase duration (Orphan recovery to lost and found): 20.97 milliseconds.
5 reparse records processed.
Phase duration (Reparse point and Object ID verification): 67.02 milliseconds.
Stage 4: Looking for bad clusters in user file data …
8936944 files processed.
File data verification completed.
Phase duration (User file recovery): 1.58 days.
Stage 5: Looking for bad, free clusters …
10912803 free clusters processed.
Free space verification is complete.
Phase duration (Free space recovery): 0.00 milliseconds.
Windows has scanned the file system and found no problems.
No further action is required.
2861570 MB total disk space.
2806594 MB in 8780924 files.
3551220 KB in 12858 indexes.
0 KB in bad sectors.
9093683 KB in use by the system.
65536 KB occupied by the log file.
43651212 KB available on disk.
4096 bytes in each allocation unit.
732562175 total allocation units on disk.
10912803 allocation units available on disk.
Total duration: 1.59 days (137605054 ms).
I don’t think there are any errors. Everything seems OK.
When I tried adding this option, the storage node service did not want to start at all. Unfortunately it did not generate any log file, so I have no idea why exactly this happens. Now I have commented it out again and everything started.
This is what I tried to use:
storage2.monitor.verify-dir-writable-timeout: 2m0s
When you hit Start, it just blinks for less than a second, shows Started, and straight away goes to Stopped. I don't know what you mean by "activated". The service is set to Automatic. I rebooted the whole machine, but the result is the same. Anyway, now it is working fine after I commented out the setting you mentioned. I guess I used it incorrectly. Maybe the option needs to be written in a certain way? I don't know. But for sure something is wrong with it, because when I commented it out, everything was able to start in the normal way.
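For reference, this is the difference between the line when the service refused to start and after I "commented it back" (a leading # is what I mean by commented out):

storage2.monitor.verify-dir-writable-timeout: 2m0s
# storage2.monitor.verify-dir-writable-timeout: 2m0s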
This is because the filewalker cannot finish its job before the node is stopped. If the disk is slow (and it is, since it has failures because of timeouts), it could take days to finish.
It has some good qualities (making a trash array out of random-sized disks, for example)… it seems like there's just not enough interest in it, so the bugs have never really been worked out.
To clarify, you just want "storage2.monitor.verify-dir-(read/)writable-timeout: 1m30s" set? And leave the other ones disabled?
I guess the filewalker doesn't track its progress and resume? It just resets to zero on every restart until it's allowed to fully finish (that seems inefficient)? Wouldn't this take days on a large, healthy node?
—update
Well, most of the Storj files have been moved to the cold storage array, and the node has stayed up for around 5 hours now. Still moving files over, though, so I'll report back once everything is moved around.
Yes, only the needed ones. If your node stops because it cannot write a file before the writeable timeout, then you need to increase the value of that timeout. If your node stops after readable timeouts, then you need to increase the readable timeout and the readability check interval (both default to 1m0s). The writeable check interval should be increased if your writeable timeout is changed to be longer than 5m0s (because the writeable check interval is 5m0s by default).
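For example, a rough sketch of the relevant config.yaml lines (the interval option names here follow the same pattern as the timeout options quoted in this thread, so please double-check them against your own config.yaml; the values are only placeholders to illustrate the rule above):

storage2.monitor.verify-dir-readable-interval: 2m0s
storage2.monitor.verify-dir-readable-timeout: 1m30s
storage2.monitor.verify-dir-writable-timeout: 1m30s

With a writeable timeout of 1m30s the writeable check interval can stay at its default of 5m0s; it would only need to be increased if the writeable timeout were pushed above 5m0s.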
I hope that you mean copy, not move, while your node is running. Otherwise you need to use the send/receive commands for BTRFS/ZFS, or move between PVs if you use LVM.
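For example, a rough sketch with hypothetical dataset and path names (snapshot first, then stream the snapshot to the new location):

zfs snapshot tank/storagenode@move
zfs send tank/storagenode@move | zfs receive newpool/storagenode

btrfs subvolume snapshot -r /mnt/old/storagenode /mnt/old/storagenode-snap
btrfs send /mnt/old/storagenode-snap | btrfs receive /mnt/new/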
Indeed, it does not preserve the state of directory traversal across node restarts.
That’s right.
Indeed. IIRC, before the implementation of the lazy filewalker, files were put into garbage collection immediately during the scan, as opposed to first performing the scan and then removing the files, so the impact was much smaller. I've asked about this in this thread.
The largest single nodes are probably around 20-25 TB. A good setup can scan 1 TB of files in around 10 minutes. Some more complex setups with an SSD cache can do even better. This means a few hours, but not days.
Unfortunately most setups discussed in this forum aren't as good, and might indeed take days.
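For scale, under that 10-minutes-per-TB figure: 25 TB × 10 min/TB = 250 min, i.e. roughly 4 hours; a setup ten times slower per TB would already be approaching 2 days.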
My setup with docker/service on Windows (and NTFS) shows about 4 hours at most for a lazy filewalker (6 TB).
With the usual filewalker it was less than an hour.
No optimisations, no SSD cache, databases on rusty HDDs, but yes, 32 GB RAM… Not all of it was always available, but at least 16 GB definitely was.
Well, just to update. I started the move over to the array. (I chose this method because unRAID doesn't care what pool/array a user share exists on, including multiple pools at once… it's all treated as one folder, and the only difference one would see is slower/faster read/write speeds depending on whether the file is on a slower/faster pool/array.) My hope (and so far confirmed) plan was for the Storj node to just continue as normal, and if there was a small chance of it looking at the same blob at the same moment it was being moved, I would take that risk… AND IT'S PAID OFF SO FAR!
I moved the config and small folders over first, then started the long slog at 1.5-7 MB/s to move everything over. Once it had been running for half an hour or so, I started the node again, and so far it is not rebooting anymore, and the used amount seems to be climbing steadily! Apparently the crashes were (mostly) caused by my degraded and balance-looping pool.
I have reached out to the unRAID devs regarding this, as it is a rare issue that has a few eyes trying to figure out why it doesn't work as intended. The balance procedure refuses to ever remove the disk you asked it to remove, so it is still emulating it AND trying to rebuild the array with the removed disk at the same time. If I understand it right, it's just shuffling chunks back and forth in circles!
By redundancy, you mean the ability to lose a drive and still have your data? That's exactly what it has! I even set it up so I can lose two drives and it will still be fine! Not that it replaces backups… those are never replaced by redundancy.
And if I run the built-in mover, it copies, verifies, then deletes from the source. I'm just doing it manually so I can control which folders go first (i.e. from most to least important).
(But Storj doesn't pay enough for me to back up their data… I hope one day it will, though!)
I do need to figure out a way to do this move faster (snapshots don't work, because the rest of my setup isn't BTRFS anymore). I just hope that once it's on a healthy array, it will move back much faster to the new pool.
Fair point as well. I feel RAID 5 is enough, so that a drive can go and I don't lose the reputation by starting the node over. I also think I need to stop trusting the GUI to do things properly, and do my maintenance from the command line.
On that note, since moving most of my node to the (single-drive-speed) array: NO MORE ERRORS (still losing some races, but not half as many as before), no more reboots, and my used space is back up to 2.8 TB… 305.67 GB overused! So that rebuild loop seems to have been the reason this whole thing went south to start with!!
I am going to leave those timeout settings I changed alone (unless there is a good reason to change them back?).
I hope to do a bit of tweaking over the winter to get this node WINNING even more!!! - Charlie Sheen.
One more question about this. Is it safe to delete “garbage”, “temp”, and “trash” folder contents (that will make my restore so much faster)?
No, it’s dangerous.
If you delete pieces from the trash folder, your node may be disqualified if the satellite submits a restore command and the audit then cannot find these pieces.
If you delete files from the temp folder which are not older than 48 hours, your node will lose those transfers.
If you delete files from the garbage folder (normally it should be empty), you may lose some pieces and the node will scream about it in the log. Potentially it may affect the audit or suspension score.
If you delete the folders themselves, your node will not start anymore.
So please, do not delete anything in the storage location unless one of the Storjlings tells you to do so.
Hello! My node has been running for about 3 years. About 3 days ago the node started going offline every 4-6 hours.
Running on Windows
These lines appear right after the Storj node service turns off
VERSION v1.90.2
logs
2023-11-08T16:40:18+03:00 INFO piecestore download canceled {"Piece ID": "IQFPFR3LQFVCJDGL2Q6KNABG6YQHGBIJRBRADCLIPJWIVWTF5SJA", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "GET_REPAIR", "Offset": 0, "Size": 0, "Remote Address": "5.161.217.169:38240"}
2023-11-08T16:40:18+03:00 INFO lazyfilewalker.used-space-filewalker starting subprocess {"satelliteID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S"}
2023-11-08T16:40:18+03:00 ERROR lazyfilewalker.used-space-filewalker failed to start subprocess {"satelliteID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "error": "context canceled"}
2023-11-08T16:40:18+03:00 ERROR pieces failed to lazywalk space used by satellite {"error": "lazyfilewalker: context canceled", "errorVerbose": "lazyfilewalker: context canceled\n\tstorj.io/storj/storagenode/pieces/lazyfilewalker.(*process).run:71\n\tstorj.io/storj/storagenode/pieces/lazyfilewalker.(*Supervisor).WalkAndComputeSpaceUsedBySatellite:105\n\tstorj.io/storj/storagenode/pieces.(*Store).SpaceUsedTotalAndBySatellite:717\n\tstorj.io/storj/storagenode/pieces.(*CacheService).Run:57\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2.1:87\n\truntime/pprof.Do:51\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2:86\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:75", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S"}
2023-11-08T16:40:18+03:00 INFO lazyfilewalker.used-space-filewalker starting subprocess {"satelliteID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs"}
2023-11-08T16:40:18+03:00 ERROR lazyfilewalker.used-space-filewalker failed to start subprocess {"satelliteID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "error": "context canceled"}
2023-11-08T16:40:18+03:00 ERROR pieces failed to lazywalk space used by satellite {"error": "lazyfilewalker: context canceled", "errorVerbose": "lazyfilewalker: context canceled\n\tstorj.io/storj/storagenode/pieces/lazyfilewalker.(*process).run:71\n\tstorj.io/storj/storagenode/pieces/lazyfilewalker.(*Supervisor).WalkAndComputeSpaceUsedBySatellite:105\n\tstorj.io/storj/storagenode/pieces.(*Store).SpaceUsedTotalAndBySatellite:717\n\tstorj.io/storj/storagenode/pieces.(*CacheService).Run:57\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2.1:87\n\truntime/pprof.Do:51\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2:86\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:75", "Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs"}
2023-11-08T16:40:18+03:00 ERROR piecestore:cache error getting current used space: {"error": "filewalker: context canceled; filewalker: context canceled; filewalker: context canceled; filewalker: context canceled; filewalker: context canceled; filewalker: context canceled; filewalker: context canceled", "errorVerbose": "group:\n--- filewalker: context canceled\n\tstorj.io/storj/storagenode/pieces.(*FileWalker).WalkSatellitePieces:69\n\tstorj.io/storj/storagenode/pieces.(*FileWalker).WalkAndComputeSpaceUsedBySatellite:74\n\tstorj.io/storj/storagenode/pieces.(*Store).SpaceUsedTotalAndBySatellite:726\n\tstorj.io/storj/storagenode/pieces.(*CacheService).Run:57\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2.1:87\n\truntime/pprof.Do:51\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2:86\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:75\n--- filewalker: context canceled\n\tstorj.io/storj/storagenode/pieces.(*FileWalker).WalkSatellitePieces:69\n\tstorj.io/storj/storagenode/pieces.(*FileWalker).WalkAndComputeSpaceUsedBySatellite:74\n\tstorj.io/storj/storagenode/pieces.(*Store).SpaceUsedTotalAndBySatellite:726\n\tstorj.io/storj/storagenode/pieces.(*CacheService).Run:57\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2.1:87\n\truntime/pprof.Do:51\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2:86\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:75\n--- filewalker: context canceled\n\tstorj.io/storj/storagenode/pieces.(*FileWalker).WalkSatellitePieces:69\n\tstorj.io/storj/storagenode/pieces.(*FileWalker).WalkAndComputeSpaceUsedBySatellite:74\n\tstorj.io/storj/storagenode/pieces.(*Store).SpaceUsedTotalAndBySatellite:726\n\tstorj.io/storj/storagenode/pieces.(*CacheService).Run:57\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2.1:87\n\truntime/pprof.Do:51\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2:86\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:75\n--- filewalker: context canceled\n\tstorj.io/storj/storagenode/pieces.(*FileWalker).WalkSatellitePieces:69\n\tstorj.io/storj/storagenode/pieces.(*FileWalker).WalkAndComputeSpaceUsedBySatellite:74\n\tstorj.io/storj/storagenode/pieces.(*Store).SpaceUsedTotalAndBySatellite:726\n\tstorj.io/storj/storagenode/pieces.(*CacheService).Run:57\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2.1:87\n\truntime/pprof.Do:51\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2:86\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:75\n--- filewalker: context canceled\n\tstorj.io/storj/storagenode/pieces.(*FileWalker).WalkSatellitePieces:69\n\tstorj.io/storj/storagenode/pieces.(*FileWalker).WalkAndComputeSpaceUsedBySatellite:74\n\tstorj.io/storj/storagenode/pieces.(*Store).SpaceUsedTotalAndBySatellite:726\n\tstorj.io/storj/storagenode/pieces.(*CacheService).Run:57\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2.1:87\n\truntime/pprof.Do:51\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2:86\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:75\n--- filewalker: context canceled\n\tstorj.io/storj/storagenode/pieces.(*FileWalker).WalkSatellitePieces:69\n\tstorj.io/storj/storagenode/pieces.(*FileWalker).WalkAndComputeSpaceUsedBySatellite:74\n\tstorj.io/storj/storagenode/pieces.(*Store).SpaceUsedTotalAndBySatellite:726\n\tstorj.io/storj/storagenode/pieces.(*CacheService).Run:57\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2.1:87\n\truntime/pprof.Do:51\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2:86\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:75\n--- filewalker: context 
canceled\n\tstorj.io/storj/storagenode/pieces.(*FileWalker).WalkSatellitePieces:69\n\tstorj.io/storj/storagenode/pieces.(*FileWalker).WalkAndComputeSpaceUsedBySatellite:74\n\tstorj.io/storj/storagenode/pieces.(*Store).SpaceUsedTotalAndBySatellite:726\n\tstorj.io/storj/storagenode/pieces.(*CacheService).Run:57\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2.1:87\n\truntime/pprof.Do:51\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2:86\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:75"}
2023-11-08T16:40:18+03:00 INFO piecestore upload canceled {"Piece ID": "TNHBUQZ4A444OYJF2Q7O65SCCVEPYXBPAKMN5QARFVL3AQILU22Q", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "PUT", "Size": 65536, "Remote Address": "185.24.9.91:49514"}
2023-11-08T16:40:18+03:00 INFO piecestore download canceled {"Piece ID": "QYO5GP6XWA57KMP3NTWLAP6EQRREO72EL33GP4467FUBQNXUJVIQ", "Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Action": "GET", "Offset": 2310400, "Size": 0, "Remote Address": "195.201.241.85:45990"}
2023-11-08T16:40:18+03:00 ERROR piecestore error sending hash and order limit {"error": "context canceled"}
2023-11-08T16:40:18+03:00 INFO piecestore download canceled {"Piece ID": "ZOM4VLAQSUIWTK2PA56CKF7C3G4AHTEIP6CWGPHTEGYAJSJSGGJQ", "Satellite ID": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6", "Action": "GET_REPAIR", "Offset": 0, "Size": 0, "Remote Address": "128.140.65.207:58566"}
2023-11-08T16:40:18+03:00 ERROR piecestore error sending hash and order limit {"error": "context canceled"}
2023-11-08T16:40:18+03:00 INFO piecestore download canceled {"Piece ID": "C3DV2VNQVACHTOITXY2ADPQIAT6T4FVCXENZIKHMP4JCH6PK6LCA", "Satellite ID": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6", "Action": "GET_REPAIR", "Offset": 0, "Size": 0, "Remote Address": "142.132.169.101:44490"}
2023-11-08T16:40:18+03:00 INFO piecestore upload canceled (race lost or node shutdown) {"Piece ID": "O3C43N2LAJHZA646XZLRV6WD5H5G7G5T6WZER2XQG2DV53UMIBHA"}
2023-11-08T16:40:18+03:00 ERROR piecestore error sending hash and order limit {"error": "context canceled"}
2023-11-08T16:40:18+03:00 INFO piecestore download canceled {"Piece ID": "TYQGGYUWVVY3D3MQ4FUZBVXNQPA6GS2LSJYHXVAAMHODEXLYOEJQ", "Satellite ID": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6", "Action": "GET_REPAIR", "Offset": 0, "Size": 0, "Remote Address": "128.140.99.246:37512"}
2023-11-08T16:40:18+03:00 INFO piecestore download canceled {"Piece ID": "F6N66XLEGN4IPLLLRMTIZMQJBO4HXEYAHSVBOZZ7Q7YSYDFPU2CA", "Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Action": "GET", "Offset": 2000128, "Size": 0, "Remote Address": "50.7.230.34:39008"}
2023-11-08T16:40:18+03:00 INFO piecestore download canceled {"Piece ID": "R5AYEQZRHIQRU3VSYUFWH7L2DHQ6NNVSUQFKGSA2Q2DDSXSE5KTA", "Satellite ID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE", "Action": "GET_AUDIT", "Offset": 697600, "Size": 0, "Remote Address": "35.236.17.118:32826"}
2023-11-08T16:40:18+03:00 ERROR piecestore error sending hash and order limit {"error": "context canceled"}
2023-11-08T16:40:18+03:00 INFO piecestore download canceled {"Piece ID": "R2LUS4SPESRNV5HA4F5WUSBLE544M6VRBJOVBADHHSVE4LPL6JYA", "Satellite ID": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6", "Action": "GET_AUDIT", "Offset": 2019072, "Size": 0, "Remote Address": "34.146.139.227:57280"}
2023-11-08T16:40:18+03:00 INFO piecestore download canceled {"Piece ID": "S4265W4VHZAE7NCTG6DG4QV7GF3UESIXO6GTQDWDYAHF5Q2ZQV4Q", "Satellite ID": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6", "Action": "GET_REPAIR", "Offset": 0, "Size": 0, "Remote Address": "142.132.169.101:48890"}
2023-11-08T16:40:18+03:00 ERROR piecestore error sending hash and order limit {"error": "context canceled"}
2023-11-08T16:40:18+03:00 INFO piecestore download canceled {"Piece ID": "5JQDYJHHO37JOLPLETTTHGIMMUJHFQBWRTBEG2665V5Q2ULZHIWA", "Satellite ID": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6", "Action": "GET_REPAIR", "Offset": 0, "Size": 0, "Remote Address": "49.12.194.191:54800"}
2023-11-08T16:40:18+03:00 INFO piecestore download canceled {"Piece ID": "C4LVG6LBNDNFKDTJIJEWPCOPHEYEHASBUR34Z2RFLGWVH3625L4A", "Satellite ID": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6", "Action": "GET", "Offset": 0, "Size": 0, "Remote Address": "79.127.220.97:34298"}
2023-11-08T16:40:18+03:00 INFO piecestore upload canceled (race lost or node shutdown) {"Piece ID": "3B2YSOHNB3FPUIX2YW73NTISRHSAEJ3BET4VZQE6SMHLL4JMCLEQ"}
2023-11-08T16:40:18+03:00 INFO piecestore upload canceled (race lost or node shutdown) {"Piece ID": "6XPPQYZNBEAWOLDI6EJZXR4VXEIXKRKO3EY5SRRKQ5GWMZTCITKQ"}
2023-11-08T16:40:18+03:00 INFO piecestore upload canceled (race lost or node shutdown) {"Piece ID": "3A2RYYBWR2H6KQPMNBQB4RAE5QW66VWT7URHJI432UPHMVRIGQUA"}
2023-11-08T16:40:18+03:00 INFO piecestore upload canceled (race lost or node shutdown) {"Piece ID": "6AGOMWUMK5HREWR5ACP36VSMKJ3UNNFL4K2SF37MYAVRVECS7AMA"}
2023-11-08T16:40:18+03:00 INFO piecestore upload canceled {"Piece ID": "FCJE7BZIDUMSQFMTPIRV2OCJDOWCIE3GLODJIRZ3MCOCYRXDR5LA", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "PUT", "Size": 65536, "Remote Address": "143.244.60.38:50272"}
2023-11-08T16:40:18+03:00 INFO piecestore upload canceled (race lost or node shutdown) {"Piece ID": "5DI4UNNZ6AJT7A2NUJAIHKSTYNYZYQ24ZLULIAUTIBKLXHHWFWGQ"}
2023-11-08T16:40:18+03:00 INFO piecestore upload canceled (race lost or node shutdown) {"Piece ID": "G6YTNB4MVAYA3LCMYBV3UVHJ7NA5EXZAGCZQFUG242ZMFVWDKKLA"}
2023-11-08T16:40:18+03:00 INFO piecestore upload canceled (race lost or node shutdown) {"Piece ID": "PDFVZLDNT5R2CTJJYIHUZNCJG7HABNODL5CDDXLJCEKIZVCBKFUQ"}
2023-11-08T16:40:18+03:00 INFO piecestore upload canceled (race lost or node shutdown) {"Piece ID": "3WHVLIXEZVJH4GYZ7FIH5XKMOVR2XRW36J22IVNVZZDHJ7TG24QQ"}
2023-11-08T16:40:18+03:00 INFO piecestore upload canceled (race lost or node shutdown) {"Piece ID": "NTEHC62FD3QE7XAN4FLQHZB6YQ5KI5OCY7CHSXNYOXCCMS2LANBA"}
2023-11-08T16:40:18+03:00 INFO piecestore upload canceled (race lost or node shutdown) {"Piece ID": "ZCVTCCUXZZNTQBOWHD6Y7WHL3JTDR4JGDSIV22JHRPWYGGRL5JRQ"}
2023-11-08T16:40:18+03:00 INFO piecestore upload canceled (race lost or node shutdown) {"Piece ID": "VM4UD56AOV7LRMK4GFAF6C3R63IFC2IOVI6OTLTYJANWVBIPXJQQ"}
2023-11-08T16:40:18+03:00 INFO piecestore upload canceled (race lost or node shutdown) {"Piece ID": "MFNHWLMNABDZ4JGMI5RZXN5KTGWQZBVGBLQ75OY6TCXJC7E5KQVA"}
2023-11-08T16:40:18+03:00 INFO piecestore upload canceled (race lost or node shutdown) {"Piece ID": "SXMFMCKRQ55T2QIXEPSQYPT6364MO2Y2GS2K6A3FMJVQLBWG4CQA"}
2023-11-08T16:40:19+03:00 INFO piecestore upload canceled (race lost or node shutdown) {"Piece ID": "5Z2HJNTKRSJ4OPS65NU6ILRCXHVA2NHUFIUEPNC3FT7Q5Q22JL5Q"}
2023-11-08T16:40:19+03:00 INFO piecestore upload canceled (race lost or node shutdown) {"Piece ID": "XBWAXK3XBEMWEHE6W3FQRY4VGFJUH2KPAPPSQJSM6QXEO5O4TUMA"}
2023-11-08T16:40:19+03:00 INFO piecestore upload canceled (race lost or node shutdown) {"Piece ID": "OVXNALTX67TGSPJG3Y3TUZQLTY436SDYH7G3ZEKCEUETTRTPQ4DQ"}
2023-11-08T16:40:19+03:00 INFO piecestore upload canceled (race lost or node shutdown) {"Piece ID": "AMSSBJNVEFHD2ARUPFEP72V2DYZGMNXONWMB4TLZAXXZAG6B6DHQ"}
2023-11-08T16:40:19+03:00 INFO piecestore upload canceled (race lost or node shutdown) {"Piece ID": "NUDMJWBQXCGLSHIM37IVJPCRCMTKMUP2SS6IIUYAJZNFI7AJLPCA"}
2023-11-08T16:40:19+03:00 INFO piecestore upload canceled (race lost or node shutdown) {"Piece ID": "KV6RXI3LQXKDGHSZTN3EX2SUCPLTA5S4Q7FVYQ4KDAT22R6724FQ"}
2023-11-08T16:40:19+03:00 INFO piecestore upload canceled (race lost or node shutdown) {"Piece ID": "7BPCIMIKIILZB5TEK442BGGQ4QGHKBRK75YRTLYROBPVP3I3Z7GA"}
2023-11-08T16:40:19+03:00 INFO piecestore upload canceled (race lost or node shutdown) {"Piece ID": "ETI6EGMEDM5DBVFKS2BGODGWMPFTWQFDFZ4RGN2C64ADS3AWY4QA"}
2023-11-08T16:40:19+03:00 INFO piecestore upload canceled (race lost or node shutdown) {"Piece ID": "CWYH5IHPCDUDJNUHWJ32NJB77V755O7YHAQHYDQSXVJP45R6WY4Q"}
2023-11-08T16:40:19+03:00 INFO piecestore upload canceled (race lost or node shutdown) {"Piece ID": "X2HSACSHH2EFHJAJI7FXG6G6XLWASGS3S4TG5PEX7UVGBB53YR5A"}
2023-11-08T16:40:19+03:00 INFO piecestore upload canceled (race lost or node shutdown) {"Piece ID": "V56HMHL4IW7PCZLXJ6DI72FJP5LK2UNBPN7MHX6P75NGLMZLKWKQ"}
2023-11-08T16:40:19+03:00 INFO piecestore upload canceled (race lost or node shutdown) {"Piece ID": "NORGXZI6MWZYB7IUFZNWIID3MB6RNPFVYRFCIV7C343U2X5OBB6Q"}
2023-11-08T16:40:19+03:00 INFO piecestore upload canceled (race lost or node shutdown) {"Piece ID": "CTGKAESQQUSKESA4LIKO2KVSJA2NEA52GXQTQLUZNW7GWBOUSNNA"}
2023-11-08T16:40:19+03:00 INFO piecestore upload canceled (race lost or node shutdown) {"Piece ID": "OCHDP7XTTL3B77VSOQCA5ODINVGVVUA3FVAMH27FBNHV23LMTHOQ"}
2023-11-08T16:40:19+03:00 INFO piecestore upload canceled (race lost or node shutdown) {"Piece ID": "XTUHBI6PFE6QDRRPZT2JULBFSEPTWHPFOBYAM2JMSC2BMBF3JYMQ"}
2023-11-08T16:40:19+03:00 INFO piecestore upload canceled (race lost or node shutdown) {"Piece ID": "FTO3AE75C77AEOQO5Y5F5STASQR37A2DP2TBTMBUWQWLFSPWTGMA"}
2023-11-08T16:40:19+03:00 INFO piecestore upload canceled (race lost or node shutdown) {"Piece ID": "UUO2JUQQGAEUOX5Y4NETRENNK4FX6OURRD7XSGYUKP2SS36IAUZA"}
2023-11-08T16:40:19+03:00 INFO piecestore upload canceled (race lost or node shutdown) {"Piece ID": "BQPJXE7FCYBD55VUFZV4KSBH3527BSJ3D63A5PMTYJE4IXBX3G5Q"}
2023-11-08T16:40:19+03:00 INFO piecestore upload canceled (race lost or node shutdown) {"Piece ID": "OWHTM33227RJ6K6L2V7NRW6HTCFPBXRHQ6PLOEY2TBRDOTXP7P4Q"}
2023-11-08T16:40:19+03:00 INFO piecestore upload canceled (race lost or node shutdown) {"Piece ID": "CQNTSOLNFDOEYWOZDQ6ZS4JN4CHQBMRVTA5O2JRVYBMARWGLZMPQ"}
2023-11-08T16:40:19+03:00 FATAL Unrecoverable error {"error": "piecestore monitor: timed out after 1m0s while verifying writability of storage directory", "errorVerbose": "piecestore monitor: timed out after 1m0s while verifying writability of storage directory\n\tstorj.io/storj/storagenode/monitor.(*Service).Run.func2.1:176\n\tstorj.io/common/sync2.(*Cycle).Run:160\n\tstorj.io/storj/storagenode/monitor.(*Service).Run.func2:165\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:75"}