You'd have the Storj node software running four times. Each instance would listen on a different port (and the web dashboard on a different port too, if you use it), and each instance would have a separate drive for storage.
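Roughly like this, as a sketch (container names, ports, mount paths and the external address below are placeholders; repeat the pattern for nodes 3 and 4, and remember each node needs its own generated identity and its own forwarded port):

docker run -d --name storagenode1 \
  -p 28967:28967/tcp -p 28967:28967/udp -p 14002:14002 \
  -e ADDRESS="your.ddns.example:28967" \
  -e WALLET="0x..." -e EMAIL="you@example.com" -e STORAGE="8TB" \
  --mount type=bind,source=/mnt/disk1/identity,destination=/app/identity \
  --mount type=bind,source=/mnt/disk1/storagenode,destination=/app/config \
  storjlabs/storagenode:latest

docker run -d --name storagenode2 \
  -p 28968:28967/tcp -p 28968:28967/udp -p 14003:14002 \
  -e ADDRESS="your.ddns.example:28968" \
  -e WALLET="0x..." -e EMAIL="you@example.com" -e STORAGE="8TB" \
  --mount type=bind,source=/mnt/disk2/identity,destination=/app/identity \
  --mount type=bind,source=/mnt/disk2/storagenode,destination=/app/config \
  storjlabs/storagenode:latest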
You could probably shrink your node to a more manageable size (because of all of the trash activity) and migrate it.
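In case it helps, "shrinking" here mostly means lowering the allocated space so the node should stop taking new ingress while the trash drains; a rough sketch, assuming the standard docker run parameters and a made-up 9TB target:

docker stop -t 300 storagenode
docker rm storagenode
docker run -d --name storagenode \
  -e STORAGE="9TB" \
  ...the rest of your usual parameters... \
  storjlabs/storagenode:latest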
There are instructions on migrating to new drives using rsync, and yes, it can take days or weeks. It's a pretty miserable wait.
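The rough shape of it, with placeholder paths: copy while the node keeps running, repeat until a pass transfers very little, then stop the node and do one last pass so nothing changes underneath.

# repeat this while the node is still running, until a pass copies almost nothing
rsync -aP /mnt/old-disk/storagenode/ /mnt/new-disk/storagenode/
# then stop the node gracefully and do the final pass with --delete
docker stop -t 300 storagenode
rsync -aP --delete /mnt/old-disk/storagenode/ /mnt/new-disk/storagenode/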
Yes, it will reduce the load, and maybe the node can finish the housekeeping faster.
Usually yes, because they would not compete for the same resource (the same pool) the way they would if you ran them all on one pool. Also, any RAID with redundancy usually works at the speed of the slowest drive in the pool.
Likely yes. You may also use rclone sync instead of rsync in the guide linked above; it could work faster.
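Same idea as the rsync passes above, just with rclone doing parallel transfers; the transfer and checker counts here are guesses to tune for your disks, and the final pass should still happen with the node stopped:

rclone sync /mnt/old-disk/storagenode/ /mnt/new-disk/storagenode/ --transfers 16 --checkers 32 --progress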
@Alexey & @EasyRhino Thanks for all the suggestions.
This morning I woke up to a totally different node!!!
Lo and behold, something has changed: the dashboard shows 8.1TB used, 2.11TB free and 7.79TB trash, and the statistics in Active Insight now show all 4 drives the same. This happened suddenly around 22:00 CET.
So I guess all the effort paid off and the node has now caught up with the work.
I will let it keep doing its thing, and when the trash has come down I am going to use the "1 Node per HDD" strategy.
UPDATE:
I now see lots of these warnings:
2024-09-05T10:11:18Z  WARN  collector  unable to delete piece  {Process: storagenode, Satellite ID: 12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S, Piece ID: NG2OJJ2MT3742CL2BASRUL4CF6YMB2SEP322J6Q5QQQWM5NUAELQ, error: pieces error: filestore error: file does not exist, errorVerbose: pieces error: filestore error: file does not exist\n\tstorj.io/storj/storagenode/blobstore/filestore.(*blobStore).Stat:126\n\tstorj.io/storj/storagenode/blobstore/statcache.(*CachedStatBlobstore).Stat:68\n\tstorj.io/storj/storagenode/pieces.(*BlobsUsageCache).pieceSizes:340\n\tstorj.io/storj/storagenode/pieces.(*BlobsUsageCache).DeleteWithStorageFormat:320\n\tstorj.io/storj/storagenode/pieces.(*Store).DeleteSkipV0:362\n\tstorj.io/storj/storagenode/collector.(*Service).Collect:112\n\tstorj.io/storj/storagenode/collector.(*Service).Run.func1:68\n\tstorj.io/common/sync2.(*Cycle).Run:99\n\tstorj.io/storj/storagenode/collector.(*Service).Run:64\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2.1:87\n\truntime/pprof.Do:51\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2:86\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:78}
2024-09-05T10:11:18Z  WARN  collector  unable to delete piece  {Process: storagenode, Satellite ID: 12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S, Piece ID: XWP4WAVIP46Q5OIZCN5LNTDSNJUADCG4Y4XCK75BTN4AXNUT4BIQ, error: pieces error: filestore error: file does not exist, errorVerbose: pieces error: filestore error: file does not exist\n\tstorj.io/storj/storagenode/blobstore/filestore.(*blobStore).Stat:126\n\tstorj.io/storj/storagenode/blobstore/statcache.(*CachedStatBlobstore).Stat:68\n\tstorj.io/storj/storagenode/pieces.(*BlobsUsageCache).pieceSizes:340\n\tstorj.io/storj/storagenode/pieces.(*BlobsUsageCache).DeleteWithStorageFormat:320\n\tstorj.io/storj/storagenode/pieces.(*Store).DeleteSkipV0:362\n\tstorj.io/storj/storagenode/collector.(*Service).Collect:112\n\tstorj.io/storj/storagenode/collector.(*Service).Run.func1:68\n\tstorj.io/common/sync2.(*Cycle).Run:99\n\tstorj.io/storj/storagenode/collector.(*Service).Run:64\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2.1:87\n\truntime/pprof.Do:51\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2:86\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:78}
2024-09-05T10:11:18Z  WARN  collector  unable to delete piece  {Process: storagenode, Satellite ID: 12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S, Piece ID: PM65JSQIVBDEGU7VELI7MSG36TMYMXWDMI4QPYLSKWV62A5ZFLWQ, error: pieces error: filestore error: file does not exist, errorVerbose: pieces error: filestore error: file does not exist\n\tstorj.io/storj/storagenode/blobstore/filestore.(*blobStore).Stat:126\n\tstorj.io/storj/storagenode/blobstore/statcache.(*CachedStatBlobstore).Stat:68\n\tstorj.io/storj/storagenode/pieces.(*BlobsUsageCache).pieceSizes:340\n\tstorj.io/storj/storagenode/pieces.(*BlobsUsageCache).DeleteWithStorageFormat:320\n\tstorj.io/storj/storagenode/pieces.(*Store).DeleteSkipV0:362\n\tstorj.io/storj/storagenode/collector.(*Service).Collect:112\n\tstorj.io/storj/storagenode/collector.(*Service).Run.func1:68\n\tstorj.io/common/sync2.(*Cycle).Run:99\n\tstorj.io/storj/storagenode/collector.(*Service).Run:64\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2.1:87\n\truntime/pprof.Do:51\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2:86\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:78}
Hey @RozzNL, while it might show up as a warning, no need to worry—definitely nothing to lose sleep over! It just means GC already removed the piece before it expired, which is actually pretty common. I therefore believe these logs really should be info messages instead.
Yeah, we get spammed with those failed-to-delete warnings. They are annoying and everyone gets them. Hopefully they will change to DEBUG in a future release.
You mentioned you have trash folders from way back, like January.
Look in your logs for "empty". If the latest entry says "emptying trash started", then let it run; it could take hours or days of processing the deletes with so much trash. But if it says "emptying trash finished", then I would personally feel OK manually deleting any trash folder older than August 25th or so.
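For example, assuming the container is called storagenode and logs still go to docker (adjust if you redirect them to a file):

docker logs storagenode 2>&1 | grep -i "emptying trash" | tail -n 4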
Yeah, it says finished… I lowered the disk space to 9TB and will let it run.
The other thing I'm already doing for the multinode setup is the multinode dashboard. I have the docker image running and this is all good, but when I want to create an API key I get the following:
docker exec -it Storagenode /app/storagenode issue-apikey --log.output stdout --config-dir config --identity-dir identity
2024-09-05T20:40:49Z INFO Configuration loaded {"Process": "storagenode", "Location": "/app/config/config.yaml"}
2024-09-05T20:40:49Z INFO Anonymized tracing enabled {"Process": "storagenode"}
2024-09-05T20:40:49Z INFO Identity loaded. {"Process": "storagenode", "Node ID": "xxxxxxxxx"}
Error: Error starting master database on storage node: Cannot acquire directory lock on "config/storage/filestatcache". Another process is using this Badger database. error: resource temporarily unavailable
Then they would need to restart the node to update the usage; otherwise it will be way off for the trash usage.
@RozzNL So, I wouldn't recommend deleting anything in the storage folder manually. In the best case the stats will be wrong; in the worst case the node could be disqualified.
Since you enabled the badger cache, it cannot allow access from multiple processes.
To avoid this problem you need to enable the badger cache not in the config file but as a command-line option after the image name in your docker run command, i.e.
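Something along these lines (a sketch; I believe the option is pieces.file-stat-cache, which matches the filestatcache folder in your error, but double-check the exact name against your config.yaml and remove or comment out that line there):

docker run -d --name storagenode \
  ...your usual parameters... \
  storjlabs/storagenode:latest \
  --pieces.file-stat-cache=badger

That way only the long-running node process opens the badger database, and a docker exec of issue-apikey (which reads config.yaml only) won't try to take the same lock.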
The issue-apikey command wouldn’t throw an error, see
Thanks @Alexey, that last command did the trick. I also had to use the internal IP address instead of the public address, but everything is up and running for the first node now!
Looking into getting the correct configuration for all 4 nodes in Portainer Stacks, which I use a lot.
I hope in the coming days the storage use / trash will go down so I can rsync-clone the node.