Disk usage discrepancy?

That is with my original data?

I did search the forums - even referenced what I found in my posts.

Please do not limit new accounts that were created in order to get assistance, and please also ask the mods not to limit their postings - I was trying to get help, certainly not spamming.

That is not very conducive to those seeking help in a timely manner.

Correct - each one was restricted in several manners.

Your data doesn’t go anywhere; it’s stored in the storage folder. If by “your data” you mean the dashboard statistics, then yes, they will be restored in due time.

I completely understand your situation, but those rules are set for a good reason. In extreme cases you can always file a support ticket.


Thank you kindly! - Look forward to you changing the rules so quickly.

I would also certainly hope that, as a technology forum, you would be able to discern between bots spamming your forums and new users seeking assistance.

Thanks for the link. Not sure why you would expect a newly registered user to find that link in a situation like the one documented in the prior messages.

As a new user, do you not see the menu below, or is it hidden? It's at the top of the page.


New accounts are limited when they haven’t read the forum or at least tried the search. If you pass this simple test, your account will not be limited.

The problem is widely discussed in this thread, so I moved your post here.

There are two (three) reasons:

  1. You have database related errors in your logs
  2. You have filewalker related errors in your logs
  3. You have both

The solution depends on which kind of error you hit. If it’s a “malformed” or “not a database” error, then you need to use these articles:

Both can be solved with the latter article, but with a downside: the current stats and history will be lost. However, this doesn’t affect the reputation and/or payout.
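To tell which case you have, a quick sketch is to run SQLite's own consistency check against each database with the `sqlite3` CLI (stop the node first; the storage path below is an example, not a Storj default):

```shell
STORAGE=${STORAGE:-/mnt/storj/storage}   # assumption: adjust to your storage dir
for db in "$STORAGE"/*.db; do
  [ -e "$db" ] || continue               # skip if the glob matched nothing
  printf '%s: ' "$db"
  sqlite3 "$db" 'PRAGMA integrity_check;'
done
```

Healthy databases print `ok`; anything else (or a “file is not a database” error) points you at the articles above.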

The second type of error, related to the filewalker, can be solved by optimizing the filesystem or by disabling the lazy mode.
The first is preferred, of course; however, the second one is the ultimate solution - at the expense of lost races, of course.
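For reference, disabling the lazy mode is a single option in your `config.yaml` (the exact file location depends on your setup); restart the node afterwards:

```yaml
# run the filewalker at normal IO priority instead of lazy mode
pieces.enable-lazy-filewalker: false
```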

Optimizing the filesystem includes:

  1. check the disk for errors and fix them
  2. disable 8dot3 for NTFS: NTFS Disable 8dot3name
  3. disable atime (for NTFS: [Solved] Win10 20GB Ram Usage - #17 by arrogantrabbit); for Linux, please use the search here or on the internet
  4. defragment the disk if it's NTFS, and re-enable automatic defragmentation if you disabled it (it's enabled by default)
  5. disable indexing (Windows only)
  6. move the databases to an SSD (for Windows: Move databases on Windows storagenode - #2 by Alexey; for docker: How to move DB’s to SSD on Docker)
  7. if you have a managed UPS, enable the write cache (for Windows - in the disk volume Policy, you need to select both checkboxes)
  8. add more RAM, if possible, or add an SSD cache in front of the disk subsystem (possible on Windows too, but you need to use tiered Storage Spaces).
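As a sketch of items 2 and 3 on Windows (run from an elevated command prompt; these are the standard `fsutil` switches, but double-check the behavior on your Windows version before applying):

```
rem disable creation of 8dot3 short names on volume C:
fsutil 8dot3name set C: 1

rem stop updating the last-access timestamp (the NTFS atime analogue)
fsutil behavior set disablelastaccess 1
```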

That is not accurate. You can see from my posts that I referenced a suggestion found in another post via a search.

I do not know what the error was (much less what it meant) other than what was posted from the logs.

What is the “latter article”? What do “current stats and history” mean?

What are races?

Please search for “error” and “database”, then “error” and “filewalker” in your logs.
If you are on Windows (PowerShell):

sls error "$env:ProgramFiles\Storj\Storage Node\storagenode.log" | sls "database|filewalker"

this one

The current stats are what you see on the dashboard. The history is what you can see in the Payout Info for this and previous months.

When a customer wants to upload a file, their uplink requests 110 nodes for each segment (64 MiB or less) and starts uploads in parallel. When the first 80 finish, all remaining uploads are canceled. There you would see a lot of errors on PUT* requests, most of them “context canceled” or similar (the uplink does not notify the losers; it abruptly closes the remaining connections).
The same goes for downloads, except the customer’s uplink requests 39 nodes and starts downloads in parallel; when the first 29 finish, all others are canceled (only 29 pieces out of the (current) 80 are needed to reconstruct the file). In this case you will see various errors on GET* requests in your logs.
Both cases are called long-tail cancellations (cuts); they allow the customer to upload to and download from the fastest nodes for their location.

Disclaimer: your node cannot be close to every customer in the world, so canceled uploads/downloads are a normal part of operation.
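The long-tail pattern above can be sketched as a toy bash model (illustrative only - the numbers are scaled down, the durations are artificial, and real nodes race on network latency, not `sleep`): start parallel "transfers", wait for the first 8 of 10 to finish, cancel the slow tail.

```shell
#!/usr/bin/env bash
# Toy model of long-tail cancellation: 10 parallel "transfers", keep first 8.
results=$(mktemp)
pids=()
for i in 1 2 3 4 5 6 7 8 9 10; do
  dur=$(awk -v n="$i" 'BEGIN { print n / 20 }')   # 0.05s .. 0.5s per transfer
  ( sleep "$dur"; echo "$i" >> "$results" ) &     # record each finisher
  pids+=("$!")
done
until [ "$(wc -l < "$results")" -ge 8 ]; do
  sleep 0.01                                      # poll until 8 winners are in
done
kill "${pids[@]}" 2>/dev/null                     # losers see "context canceled"
wait 2>/dev/null
echo "winners: $(wc -l < "$results")"
```

The killed jobs are the "losers" whose connections the uplink abruptly closes; on a real node they show up as `context canceled` errors in the log.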

How would I disable lazy mode exactly?

DB is now on SSD.
Still random restarts and bad uptime.
I don't see any fatal errors (I think the log period shown is too short).
How can I see the fatal error log after node restart?

Thank you!

That means you also have Unrecoverable errors in your logs. Please search for them; they may explain what's wrong.

It will be the last line in the log before restart.
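For example, with a throwaway log standing in for the real one (the real log path depends on your setup):

```shell
# Demo: the last line written before a restart is the fatal error.
# In practice, point tail at your real log, e.g.:
#   tail -n 20 /mnt/storj/node.log                 (Linux; path is an example)
#   Get-Content "$env:ProgramFiles\Storj\Storage Node\storagenode.log" -Tail 20
log=$(mktemp)
printf 'INFO started\nERROR upload failed\nFATAL piecestore monitor: timed out\n' > "$log"
tail -n 1 "$log"
```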


Ok. Found several instances of “error” and “database”, but “filewalker” was not found. Do you want the output of that command?

Would that be the Disk Utilization & Remaining? Some other user told me it would go back to what it was. Is that true?

I do not really understand that - but thanks for advising. Seems like I do not need to be concerned with the technical details as you note.

Thank you.

I can see it dying live at the moment

:171\n\tstorj.io/drpc/drpcwire.(*Reader).read:68\n\tstorj.io/drpc/drpcwire.(*Reader).ReadPacketUsing:113\n\tstorj.io/drpc/drpcmanager.(*Manager).manageReader:229"}
2024-07-13T14:24:21+02:00       ERROR   piecestore      upload failed   {"Process": "storagenode", "Piece ID": "WAFAK6I7TEPQBBCGN6VD2DMUC2VYWLC5CHLP4ARHXGCZZPEVLQGA", "Satellite ID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE", "Action": "PUT", "Remote Address": "79.127.205.236:38598", "Size": 131072, "error": "manager closed: unexpected EOF", "errorVerbose": "manager closed: unexpected EOF\n\tgithub.com/jtolio/noiseconn.(*Conn).readMsg:225\n\tgithub.com/jtolio/noiseconn.(*Conn).Read:171\n\tstorj.io/drpc/drpcwire.(*Reader).read:68\n\tstorj.io/drpc/drpcwire.(*Reader).ReadPacketUsing:113\n\tstorj.io/drpc/drpcmanager.(*Manager).manageReader:229"}
2024-07-13T14:28:13+02:00       ERROR   services        unexpected shutdown of a runner {"Process": "storagenode", "name": "piecestore:monitor", "error": "piecestore monitor: timed out after 1m0s while verifying writability of storage directory", "errorVerbose": "piecestore monitor: timed out after 1m0s while verifying writability of storage directory\n\tstorj.io/storj/storagenode/monitor.(*Service).Run.func2.1:175\n\tstorj.io/common/sync2.(*Cycle).Run:160\n\tstorj.io/storj/storagenode/monitor.(*Service).Run.func2:164\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:78"}
2024-07-13T14:28:13+02:00       ERROR   gracefulexit:chore      error retrieving satellites.    {"Process": "storagenode", "error": "satellitesdb: context canceled", "errorVerbose": "satellitesdb: context canceled\n\tstorj.io/storj/storagenode/storagenodedb.(*satellitesDB).ListGracefulExits.func1:200\n\tstorj.io/storj/storagenode/storagenodedb.(*satellitesDB).ListGracefulExits:212\n\tstorj.io/storj/storagenode/gracefulexit.(*Service).ListPendingExits:59\n\tstorj.io/storj/storagenode/gracefulexit.(*Chore).AddMissing:55\n\tstorj.io/common/sync2.(*Cycle).Run:160\n\tstorj.io/storj/storagenode/gracefulexit.(*Chore).Run:48\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2.1:87\n\truntime/pprof.Do:51\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2:86\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:78"}
2024-07-13T14:28:28+02:00       WARN    services        service takes long to shutdown  {"Process": "storagenode", "name": "retain"}
2024-07-13T14:28:28+02:00       WARN    services        service takes long to shutdown  {"Process": "storagenode", "name": "piecestore:cache"}
2024-07-13T14:28:28+02:00       WARN    services        service takes long to shutdown  {"Process": "storagenode", "name": "collector"}
2024-07-13T14:28:28+02:00       WARN    services        service takes long to shutdown  {"Process": "storagenode", "name": "forgetsatellite:chore"}
2024-07-13T14:28:28+02:00       WARN    servers service takes long to shutdown  {"Process": "storagenode", "name": "server"}
2024-07-13T14:28:28+02:00       WARN    services        service takes long to shutdown  {"Process": "storagenode", "name": "pieces:trash"}

Not necessarily. They usually split into two categories:

  1. Database is corrupted (malformed, not a database, etc.)
  2. Database is locked

For the first case - try to fix them; if that doesn't work (or is too complicated), re-create the database using the second guide (for “file is not a database”). Of course, you will lose stats and history, but it's quick and simple.

For the second case - you have two options:

  1. optimize the filesystem to be fast (this can be expensive, because the best options are to add RAM and/or an SSD);
  2. move the databases to another disk (preferably an SSD)

This may happen if the databases are not updated, due to a failed filewalker or corrupted/locked databases. If the filewalker finished successfully for all trusted satellites and successfully updated the databases, then after a restart you may lose only the changes from the last hour (the dynamic stats are flushed to the databases every hour by default).

Perhaps you have a FATAL error somewhere?


On June 20th, there was an error where the fully occupied node showed its actual space used reduced by 40%.

As advised, I set pieces.enable-lazy-filewalker: false and waited.



Until today (the 14th), the graph showed the node capacity continuously decreasing, but today it rose sharply and reached the original capacity. Of course, overuse occurred because the ingress received was more than the HDD capacity. (The overuse is expected to disappear when the trash is emptied after 7 days.)

All I did was set it to false and wait. Fortunately, I think it’s encouraging that no further action is needed. Thank you for your help.


I can't find any fatal error. This is all I got from the last crash:

2024-07-14T12:19:19+02:00       ERROR   piecestore      error sending hash and order limit      {"Process": "storagenode", "Piece ID": "CRKXDENRMQY7GFURWRG2JXFQIN7BLV4FLPXICTF7TBG6X2J6F6PQ", "Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Action": "GET_REPAIR", "Offset": 0, "Size": 1280, "Remote Address": "49.12.194.191:36360", "error": "context canceled"}
2024-07-14T12:19:19+02:00       ERROR   piecestore      error sending hash and order limit      {"Process": "storagenode", "Piece ID": "4HAUMOEH2ZAIXCZVE6X6WNSG6VXIQ43I2BX73GM7MPJMZ2EWSIPQ", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "GET_REPAIR", "Offset": 0, "Size": 181504, "Remote Address": "199.102.71.27:39616", "error": "context canceled"}
2024-07-14T12:19:19+02:00       ERROR   piecestore      error sending hash and order limit      {"Process": "storagenode", "Piece ID": "MT27W5QIBXE7AOTZW663NG3IH4MQEJNPECIDFRVAB7UGWD4ADCQQ", "Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Action": "GET_REPAIR", "Offset": 0, "Size": 11264, "Remote Address": "162.55.54.15:55442", "error": "context canceled"}
2024-07-14T12:19:19+02:00       ERROR   piecestore      error sending hash and order limit      {"Process": "storagenode", "Piece ID": "VF7DVJRGRBZFPAJR3NNRQQPS2Z5M5252ANS4P5AJGQXWKI6GDYXA", "Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Action": "GET_REPAIR", "Offset": 0, "Size": 17152, "Remote Address": "167.235.60.217:44912", "error": "context canceled"}
2024-07-14T12:19:19+02:00       ERROR   piecestore      error sending hash and order limit      {"Process": "storagenode", "Piece ID": "66HTP3WZRMR2DBQQ7R5CAV7Z6HLSWXE4JOZW4DY5G6JZREQNTYXQ", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "GET_REPAIR", "Offset": 0, "Size": 28672, "Remote Address": "199.102.71.57:38392", "error": "context canceled"}
2024-07-14T12:19:19+02:00       ERROR   piecestore      error sending hash and order limit      {"Process": "storagenode", "Piece ID": "73MR5KXKUTADNP7XSMIQGH7HJXGLPDN6VNBBFI5CMXNO4SNKHPYA", "Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Action": "GET_REPAIR", "Offset": 0, "Size": 2203648, "Remote Address": "188.245.33.215:55950", "error": "context canceled"}
2024-07-14T12:19:19+02:00       ERROR   piecestore      error sending hash and order limit      {"Process": "storagenode", "Piece ID": "FJ3AKEOERC5ADBSVHLABPDUDOP3DKFYUSDXV2VYWC3VFITK6NKFQ", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "GET_REPAIR", "Offset": 0, "Size": 15104, "Remote Address": "5.161.236.172:55824", "error": "context canceled"}
2024-07-14T12:19:20+02:00       ERROR   piecestore      error sending hash and order limit      {"Process": "storagenode", "Piece ID": "27Y5DJFAL454YSQ522WLOOBPKDXD36QAKJWD2KIX4JCCTRBUFI7Q", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "GET_REPAIR", "Offset": 0, "Size": 14336, "Remote Address": "199.102.71.67:51580", "error": "context canceled"}
2024-07-14T12:19:20+02:00       ERROR   piecestore      error sending hash and order limit      {"Process": "storagenode", "Piece ID": "TUYUZYO5A5GKOL7XWVPHUNNIHBSRXFL5HTTTWXSWM4HHYBNJUO2Q", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "GET_REPAIR", "Offset": 0, "Size": 42496, "Remote Address": "199.102.71.68:49804", "error": "context canceled"}
2024-07-14T12:19:20+02:00       ERROR   piecestore      error sending hash and order limit      {"Process": "storagenode", "Piece ID": "TZRV4O332QHROVAHPWFN6S3AFMGLHWQDZSQE3BT3RTUOGWH4IIHA", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "GET_REPAIR", "Offset": 0, "Size": 2174464, "Remote Address": "199.102.71.22:40136", "error": "context canceled"}
2024-07-14T12:19:20+02:00       ERROR   piecestore      error sending hash and order limit      {"Process": "storagenode", "Piece ID": "SDEDT3RP2R36WPNZR7LQJLL6657KJDSCNHJAAMEHMVBDJ7DCLPDQ", "Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Action": "GET_REPAIR", "Offset": 0, "Size": 33024, "Remote Address": "162.55.54.15:60944", "error": "context canceled"}
2024-07-14T12:19:20+02:00       ERROR   piecestore      error sending hash and order limit      {"Process": "storagenode", "Piece ID": "EMA7JZDOHFI6BJL653CZUPMMH66QESR24JPQY6QNQWTBVIAMHQEQ", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "GET_REPAIR", "Offset": 0, "Size": 38144, "Remote Address": "5.161.251.159:41284", "error": "context canceled"}
2024-07-14T12:19:20+02:00       ERROR   piecestore      error sending hash and order limit      {"Process": "storagenode", "Piece ID": "LRTUIWASA54YGHQJ7VKA2GEXEJFB6BGEG3ZAKTLHTXSZ4PI7VCQQ", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "GET_REPAIR", "Offset": 0, "Size": 36608, "Remote Address": "5.161.118.173:35740", "error": "context canceled"}
2024-07-14T12:19:20+02:00       ERROR   piecestore      error sending hash and order limit      {"Process": "storagenode", "Piece ID": "PHTUDLFZZCVCGXZKWFQMUDCXKZFZ7XK6NE5DAYQQ7ODPD366WCNQ", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "GET_REPAIR", "Offset": 0, "Size": 25600, "Remote Address": "5.161.244.150:59034", "error": "context canceled"}
2024-07-14T12:19:20+02:00       ERROR   piecestore      error sending hash and order limit      {"Process": "storagenode", "Piece ID": "PCZFIW6EAK4OCY2YYKON4ZRNYFYNCWZHZSFEMZRUCY3GOF7RDSRQ", "Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Action": "GET_REPAIR", "Offset": 0, "Size": 1811456, "Remote Address": "167.235.19.47:51526", "error": "context canceled"}
2024-07-14T12:19:20+02:00       ERROR   piecestore      error sending hash and order limit      {"Process": "storagenode", "Piece ID": "UKA5FGCNRIFSOWCUQZNPUQFW7BAXDMAY3QZD5CLPAIUVZA63OH5Q", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "GET_REPAIR", "Offset": 0, "Size": 124672, "Remote Address": "5.161.176.200:54384", "error": "context canceled"}
2024-07-14T12:19:20+02:00       ERROR   piecestore      error sending hash and order limit      {"Process": "storagenode", "Piece ID": "AO5GA3URTSHZOBJPH77CV5KKNYKN4FDN7NLOU2HVL72BFGWC2AMQ", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "GET_REPAIR", "Offset": 0, "Size": 14080, "Remote Address": "5.161.214.198:60980", "error": "context canceled"}
2024-07-14T12:19:20+02:00       ERROR   piecestore      error sending hash and order limit      {"Process": "storagenode", "Piece ID": "QU4Z6FWPAU4Z6Q5PXYZOXT5YTB2HSWULDVGXBRWJLXIQHLXHMASA", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "GET_REPAIR", "Offset": 0, "Size": 11520, "Remote Address": "199.102.71.26:51346", "error": "context canceled"}
2024-07-14T12:19:20+02:00       ERROR   piecestore      error sending hash and order limit      {"Process": "storagenode", "Piece ID": "VRYPSJTFOUPRHQQ3XE5TKDKWJFZHZJHZYV24CVJARD7QRARLZ2XA", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "GET_REPAIR", "Offset": 0, "Size": 2816, "Remote Address": "5.161.124.130:37934", "error": "context canceled"}
2024-07-14T12:19:20+02:00       ERROR   piecestore      error sending hash and order limit      {"Process": "storagenode", "Piece ID": "WFAI7VAWJJECPRQA6ZB6INAK46JVFXFIO7AEL53I3HUWBFVVFXNQ", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "GET_REPAIR", "Offset": 0, "Size": 15360, "Remote Address": "5.161.71.64:35888", "error": "context canceled"}
2024-07-14T12:19:20+02:00       ERROR   piecestore      error sending hash and order limit      {"Process": "storagenode", "Piece ID": "NQRCO5UHOEYYXPGDFMRKFUANNYS62CKYHXJTAJ5BYQP5TIL4XMSQ", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "GET_REPAIR", "Offset": 0, "Size": 17152, "Remote Address": "5.161.244.150:60742", "error": "context canceled"}
2024-07-14T12:19:20+02:00       ERROR   piecestore      error sending hash and order limit      {"Process": "storagenode", "Piece ID": "CXVOC6T445BY4MFYFLTRODA5UMJ5IBGLVFEYW3DQ7T4Z765AFFIQ", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "GET_REPAIR", "Offset": 0, "Size": 24576, "Remote Address": "5.161.71.64:36344", "error": "context canceled"}
2024-07-14T12:19:20+02:00       ERROR   piecestore      error sending hash and order limit      {"Process": "storagenode", "Piece ID": "I3GUOLOISAV3CAQQTPKHGNZG2D5PWMBCAJPX4ZXJGBOJVKQDNR6Q", "Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Action": "GET_REPAIR", "Offset": 0, "Size": 181504, "Remote Address": "159.69.33.236:49596", "error": "context canceled"}
2024-07-14T12:19:20+02:00       ERROR   piecestore:cache        error getting current used space:       {"Process": "storagenode", "error": "filewalker: context canceled; filewalker: context canceled; filewalker: context canceled; filewalker: context canceled", "errorVerbose": "group:\n--- filewalker: context canceled\n\tstorj.io/storj/storagenode/pieces.(*FileWalker).WalkSatellitePieces:74\n\tstorj.io/storj/storagenode/pieces.(*FileWalker).WalkAndComputeSpaceUsedBySatellite:79\n\tstorj.io/storj/storagenode/pieces.(*Store).SpaceUsedTotalAndBySatellite:716\n\tstorj.io/storj/storagenode/pieces.(*CacheService).Run:58\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2.1:87\n\truntime/pprof.Do:51\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2:86\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:78\n--- filewalker: context canceled\n\tstorj.io/storj/storagenode/pieces.(*FileWalker).WalkSatellitePieces:74\n\tstorj.io/storj/storagenode/pieces.(*FileWalker).WalkAndComputeSpaceUsedBySatellite:79\n\tstorj.io/storj/storagenode/pieces.(*Store).SpaceUsedTotalAndBySatellite:716\n\tstorj.io/storj/storagenode/pieces.(*CacheService).Run:58\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2.1:87\n\truntime/pprof.Do:51\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2:86\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:78\n--- filewalker: context canceled\n\tstorj.io/storj/storagenode/pieces.(*FileWalker).WalkSatellitePieces:74\n\tstorj.io/storj/storagenode/pieces.(*FileWalker).WalkAndComputeSpaceUsedBySatellite:79\n\tstorj.io/storj/storagenode/pieces.(*Store).SpaceUsedTotalAndBySatellite:716\n\tstorj.io/storj/storagenode/pieces.(*CacheService).Run:58\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2.1:87\n\truntime/pprof.Do:51\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2:86\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:78\n--- filewalker: context 
canceled\n\tstorj.io/storj/storagenode/pieces.(*FileWalker).WalkSatellitePieces:74\n\tstorj.io/storj/storagenode/pieces.(*FileWalker).WalkAndComputeSpaceUsedBySatellite:79\n\tstorj.io/storj/storagenode/pieces.(*Store).SpaceUsedTotalAndBySatellite:716\n\tstorj.io/storj/storagenode/pieces.(*CacheService).Run:58\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2.1:87\n\truntime/pprof.Do:51\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2:86\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:78"}
2024-07-14T12:19:20+02:00       ERROR   piecestore      error sending hash and order limit      {"Process": "storagenode", "Piece ID": "SVX6PUPY7RKJZKS55U52ITJ3OOFMTJXILNH3V4XJEG22CEAXTD5A", "Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Action": "GET_REPAIR", "Offset": 0, "Size": 1536, "Remote Address": "168.119.117.136:38502", "error": "context canceled"}
2024-07-14T12:19:20+02:00       ERROR   piecestore      error sending hash and order limit      {"Process": "storagenode", "Piece ID": "Y2YP4UOVSOSHN67BRKWSJV5AIBQWEPJ3ZZAM7DKKXHJIF7GX5WSQ", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "GET_REPAIR", "Offset": 0, "Size": 3072, "Remote Address": "199.102.71.23:58970", "error": "context canceled"}
2024-07-14T12:19:20+02:00       ERROR   piecestore      error sending hash and order limit      {"Process": "storagenode", "Piece ID": "Z47ADJ6V7PGS3QOQOMNL3PAIRYUKEDXAGZOZ37MQGCHWPRNAZYJQ", "Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Action": "GET_REPAIR", "Offset": 0, "Size": 2560, "Remote Address": "91.107.232.65:34204", "error": "context canceled"}
2024-07-14T12:19:20+02:00       ERROR   piecestore      error sending hash and order limit      {"Process": "storagenode", "Piece ID": "BTLAXU4477RG6ODMMZKYRB5ZBLKSAV3V5HYCOS7CHNDZZZIKHZLA", "Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Action": "GET_REPAIR", "Offset": 0, "Size": 210432, "Remote Address": "78.47.255.171:58396", "error": "context canceled"}
2024-07-14T12:19:20+02:00       ERROR   piecestore      error sending hash and order limit      {"Process": "storagenode", "Piece ID": "UM24RQM6PMRQB6O5BGTF5V75IN6MICAXRUMGTXO23Y2XUKPFTJPA", "Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Action": "GET_REPAIR", "Offset": 0, "Size": 768, "Remote Address": "116.203.134.162:55794", "error": "context canceled"}
2024-07-14T12:19:20+02:00       ERROR   piecestore      error sending hash and order limit      {"Process": "storagenode", "Piece ID": "BH5O33ZECODGJJNLBINHNVI33B2UL6KOEBHMGSJ2KPWGKOOCWTUA", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "GET_REPAIR", "Offset": 0, "Size": 10240, "Remote Address": "5.161.209.180:45774", "error": "context canceled"}
2024-07-14T12:19:21+02:00       ERROR   piecestore      error sending hash and order limit      {"Process": "storagenode", "Piece ID": "IK4KIUEQ5JJG4R5MFZDWUDHUT74NIGQQXHTGUS6NDAZQ4ITQXIAA", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "GET_REPAIR", "Offset": 0, "Size": 2304, "Remote Address": "199.102.71.16:38616", "error": "context canceled"}
2024-07-14T12:19:21+02:00       ERROR   retain  retain pieces failed    {"Process": "storagenode", "cachePath": "config/retain", "error": "retain: filewalker: context canceled", "errorVerbose": "retain: filewalker: context canceled\n\tstorj.io/storj/storagenode/pieces.(*FileWalker).WalkSatellitePieces:74\n\tstorj.io/storj/storagenode/pieces.(*FileWalker).WalkSatellitePiecesToTrash:181\n\tstorj.io/storj/storagenode/pieces.(*Store).WalkSatellitePiecesToTrash:568\n\tstorj.io/storj/storagenode/retain.(*Service).retainPieces:373\n\tstorj.io/storj/storagenode/retain.(*Service).Run.func2:259\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:78"}
2024-07-14T12:19:21+02:00       ERROR   piecestore      error sending hash and order limit      {"Process": "storagenode", "Piece ID": "RBLD36QJYF4RNS7UG5NPKWCLD7UPZGSF6NGWSV4PDBXRA63HRUYQ", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "GET_REPAIR", "Offset": 0, "Size": 36096, "Remote Address": "5.161.92.21:44236", "error": "context canceled"}
2024-07-14T12:19:21+02:00       ERROR   piecestore      error sending hash and order limit      {"Process": "storagenode", "Piece ID": "ET2QGWIF5YDB3BIFSXEH2IZPRK34DM6TYLIB7C2XGGLR77PEGTOQ", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "GET_REPAIR", "Offset": 0, "Size": 181504, "Remote Address": "199.102.71.63:46860", "error": "context canceled"}
2024-07-14T12:19:21+02:00       ERROR   piecestore      error sending hash and order limit      {"Process": "storagenode", "Piece ID": "VPD72IIQZAMUKZUHG6GF5OCNCF6IBGPVCMH7UMYPUBLGOTDF5PFA", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "GET_REPAIR", "Offset": 0, "Size": 181504, "Remote Address": "199.102.71.26:49520", "error": "context canceled"}
2024-07-14T12:19:21+02:00       ERROR   piecestore      error sending hash and order limit      {"Process": "storagenode", "Piece ID": "46UWFPFU7L7AJT5ZWVRVKE4VTR52523WYIWRYTVFNFNLMZMK3OXQ", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "GET_REPAIR", "Offset": 0, "Size": 1792, "Remote Address": "5.161.235.180:37788", "error": "context canceled"}
2024-07-14T12:19:21+02:00       ERROR   piecestore      error sending hash and order limit      {"Process": "storagenode", "Piece ID": "C6GARGJONTKMFGQ66DAJYRFYYPIJ2EWZ2OHPWHJMUQLROL343FDQ", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "GET_REPAIR", "Offset": 0, "Size": 287488, "Remote Address": "199.102.71.60:54528", "error": "context canceled"}
2024-07-14T12:19:21+02:00       ERROR   piecestore      error sending hash and order limit      {"Process": "storagenode", "Piece ID": "OUXVP5VKF7KH33J6WVPASIL265DDAU6JNFOCQDOCY3JPS7A7MDAA", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "GET_REPAIR", "Offset": 0, "Size": 44800, "Remote Address": "199.102.71.70:38806", "error": "context canceled"}
2024-07-14T12:19:25+02:00       ERROR   orders.1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE       failed to settle orders for satellite     {"Process": "storagenode", "satellite ID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE", "error": "order: failed to start settlement: rpc: tcp connector failed: rpc: dial tcp: lookup saltlake.tardigrade.io: operation was canceled", "errorVerbose": "order: failed to start settlement: rpc: tcp connector failed: rpc: dial tcp: lookup saltlake.tardigrade.io: operation was canceled\n\tstorj.io/storj/storagenode/orders.(*Service).settleWindow:294\n\tstorj.io/storj/storagenode/orders.(*Service).SendOrders.func2:231\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:78"}
2024-07-14T12:19:25+02:00       ERROR   orders.12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs      failed to settle orders for satellite     {"Process": "storagenode", "satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "error": "order: failed to start settlement: rpc: tcp connector failed: rpc: dial tcp: lookup eu1.storj.io: operation was canceled", "errorVerbose": "order: failed to start settlement: rpc: tcp connector failed: rpc: dial tcp: lookup eu1.storj.io: operation was canceled\n\tstorj.io/storj/storagenode/orders.(*Service).settleWindow:294\n\tstorj.io/storj/storagenode/orders.(*Service).SendOrders.func2:231\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:78"}
2024-07-14T12:19:25+02:00       ERROR   orders.12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S      failed to settle orders for satellite     {"Process": "storagenode", "satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "error": "order: failed to start settlement: rpc: tcp connector failed: rpc: dial tcp: lookup us1.storj.io: operation was canceled", "errorVerbose": "order: failed to start settlement: rpc: tcp connector failed: rpc: dial tcp: lookup us1.storj.io: operation was canceled\n\tstorj.io/storj/storagenode/orders.(*Service).settleWindow:294\n\tstorj.io/storj/storagenode/orders.(*Service).SendOrders.func2:231\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:78"}
2024-07-14T12:19:25+02:00       ERROR   orders.121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6      failed to settle orders for satellite     {"Process": "storagenode", "satellite ID": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6", "error": "order: failed to start settlement: rpc: tcp connector failed: rpc: dial tcp: lookup ap1.storj.io: operation was canceled", "errorVerbose": "order: failed to start settlement: rpc: tcp connector failed: rpc: dial tcp: lookup ap1.storj.io: operation was canceled\n\tstorj.io/storj/storagenode/orders.(*Service).settleWindow:294\n\tstorj.io/storj/storagenode/orders.(*Service).SendOrders.func2:231\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:78"}
2024-07-14T12:19:26+02:00       ERROR   failure during run      {"Process": "storagenode", "error": "piecestore monitor: timed out after 1m0s while verifying writability of storage directory", "errorVerbose": "piecestore monitor: timed out after 1m0s while verifying writability of storage directory\n\tstorj.io/storj/storagenode/monitor.(*Service).Run.func2.1:175\n\tstorj.io/common/sync2.(*Cycle).Run:160\n\tstorj.io/storj/storagenode/monitor.(*Service).Run.func2:164\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:78"}
Error: piecestore monitor: timed out after 1m0s while verifying writability of storage directory
2024-07-14 12:19:26,236 INFO exited: storagenode (exit status 1; not expected)
2024-07-14 12:19:27,239 INFO spawned: 'storagenode' with pid 997
2024-07-14 12:19:27,245 WARN received SIGQUIT indicating exit request
2024-07-14 12:19:27,246 INFO waiting for storagenode, processes-exit-eventlistener, storagenode-updater to die
2024-07-14T12:19:27+02:00       INFO    Got a signal from the OS: "terminated"  {"Process": "storagenode-updater"}
2024-07-14 12:19:27,265 INFO stopped: storagenode-updater (exit status 0)
2024-07-14 12:19:27,267 INFO stopped: storagenode (terminated by SIGTERM)
2024-07-14 12:19:27,267 INFO stopped: processes-exit-eventlistener (terminated by SIGTERM)
2024-07-14 12:19:31,135 INFO Set uid to user 0 succeeded
2024-07-14 12:19:31,146 INFO RPC interface 'supervisor' initialized
2024-07-14 12:19:31,146 INFO supervisord started with pid 1
2024-07-14 12:19:32,148 INFO spawned: 'processes-exit-eventlistener' with pid 12
2024-07-14 12:19:32,151 INFO spawned: 'storagenode' with pid 13
2024-07-14 12:19:32,153 INFO spawned: 'storagenode-updater' with pid 14

That’s it. See Fatal Error on my Node.


Thank you. Got it :smiley: I searched for the word fatal.
I will move to the other topic :slight_smile:

Perhaps we changed the logging?
But I also do not see an Unrecoverable error in your excerpt either :thinking:

1 Like

For Saltlake, the bloom filters are being distributed regularly again. Unfortunately, this is not the case for the other three satellites. When will filters be distributed regularly for those three again? Or why are they not being sent regularly at the moment?
Unfortunately I have 48.45 TB of used data and 19.56 TB of unpaid data.

Running filewalkers and used data:
═══ Node 01 - Detailed information

β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€ NODE MAIN STATS β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€ FILEWALKER ────────────────┐
β”‚                                                        β”‚β”‚                                           β”‚
β”‚ Current Total:      5.15 $        Uptime:   7d 15h     β”‚β”‚       GARBAGE     TRASH       USED SPACE  β”‚
β”‚ Estimated Total:   11.55 $                             β”‚β”‚       COLLECTOR   CLEANUP     FILEWALKER  β”‚
β”‚                                                        β”‚β”‚                                           β”‚
β”‚ Disk Used:        11.53 TB                             β”‚β”‚   SL  3d 0h ago   0d 14h ago  2d 6h ago   β”‚
β”‚ Unpaid Data:       5.20 TB                             β”‚β”‚  AP1  unknown     0d 14h ago  7d 15h ago  β”‚
β”‚                                                        β”‚β”‚  EU1  unknown     0d 14h ago  1d 10h ago  β”‚
β”‚                                                        β”‚β”‚  US1  unknown     0d 5h ago   1d 13h ago  β”‚
β”‚                                                        β”‚β”‚                                           β”‚
β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜


═══ Node 02 - Detailed information

β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€ NODE MAIN STATS β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€ FILEWALKER ────────────────┐
β”‚                                                        β”‚β”‚                                           β”‚
β”‚ Current Total:      7.21 $        Uptime:   6d 17h     β”‚β”‚       GARBAGE     TRASH       USED SPACE  β”‚
β”‚ Estimated Total:   16.16 $                             β”‚β”‚       COLLECTOR   CLEANUP     FILEWALKER  β”‚
β”‚                                                        β”‚β”‚                                           β”‚
β”‚ Disk Used:        14.09 TB                             β”‚β”‚   SL  2d 5h ago   2d 17h ago  running     β”‚
β”‚ Unpaid Data:       5.97 TB                             β”‚β”‚  AP1  unknown     1d 17h ago  unknown     β”‚
β”‚                                                        β”‚β”‚  EU1  unknown     1d 17h ago  unknown     β”‚
β”‚    Report Deviation: 12.43%                            β”‚β”‚  US1  unknown     2d 17h ago  unknown     β”‚
β”‚                                                        β”‚β”‚                                           β”‚
β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜


═══ Node 03 - Detailed information

β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€ NODE MAIN STATS β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€ FILEWALKER ────────────────┐
β”‚                                                        β”‚β”‚                                           β”‚
β”‚ Current Total:      8.88 $        Uptime:   7d 15h     β”‚β”‚       GARBAGE     TRASH       USED SPACE  β”‚
β”‚ Estimated Total:   19.96 $                             β”‚β”‚       COLLECTOR   CLEANUP     FILEWALKER  β”‚
β”‚                                                        β”‚β”‚                                           β”‚
β”‚ Disk Used:        16.47 TB                             β”‚β”‚   SL  3d 8h ago   0d 6h ago   1d 6h ago   β”‚
β”‚ Unpaid Data:       6.43 TB                             β”‚β”‚  AP1  unknown     0d 6h ago   7d 3h ago   β”‚
β”‚                                                        β”‚β”‚  EU1  unknown     0d 6h ago   7d 3h ago   β”‚
β”‚    Report Deviation: 16.08%                            β”‚β”‚  US1  unknown     0d 6h ago   7d 4h ago   β”‚
β”‚                                                        β”‚β”‚                                           β”‚
β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜


═══ Node 04 - Detailed information

β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€ NODE MAIN STATS β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€ FILEWALKER ────────────────┐
β”‚                                                        β”‚β”‚                                           β”‚
β”‚ Current Total:      0.84 $        Uptime:   7d 15h     β”‚β”‚       GARBAGE     TRASH       USED SPACE  β”‚
β”‚ Estimated Total:    1.88 $                             β”‚β”‚       COLLECTOR   CLEANUP     FILEWALKER  β”‚
β”‚                                                        β”‚β”‚                                           β”‚
β”‚ Disk Used:         2.38 TB                             β”‚β”‚   SL  2d 0h ago   0d 15h ago  7d 15h ago  β”‚
β”‚ Unpaid Data:     515.78 GB                             β”‚β”‚  AP1  unknown     0d 15h ago  7d 15h ago  β”‚
β”‚                                                        β”‚β”‚  EU1  unknown     0d 15h ago  7d 15h ago  β”‚
β”‚    Report Deviation: 84.43%                            β”‚β”‚  US1  unknown     0d 15h ago  7d 15h ago  β”‚
β”‚                                                        β”‚β”‚                                           β”‚
β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜


═══ Node 05 - Detailed information

β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€ NODE MAIN STATS β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€ FILEWALKER ────────────────┐
β”‚                                                        β”‚β”‚                                           β”‚
β”‚ Current Total:      0.23 $        Uptime:   6d 15h     β”‚β”‚       GARBAGE     TRASH       USED SPACE  β”‚
β”‚ Estimated Total:    1.09 $                             β”‚β”‚       COLLECTOR   CLEANUP     FILEWALKER  β”‚
β”‚                                                        β”‚β”‚                                           β”‚
β”‚ Disk Used:         1.27 TB                             β”‚β”‚   SL  0d 8h ago   0d 15h ago  6d 15h ago  β”‚
β”‚ Unpaid Data:     338.22 GB                             β”‚β”‚  AP1  unknown     0d 15h ago  6d 15h ago  β”‚
β”‚                                                        β”‚β”‚  EU1  unknown     0d 15h ago  6d 15h ago  β”‚
β”‚    Report Deviation: 57.24%                            β”‚β”‚  US1  unknown     0d 15h ago  6d 15h ago  β”‚
β”‚                                                        β”‚β”‚                                           β”‚
β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜


═══ Node 06 - Detailed information

β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€ NODE MAIN STATS β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€ FILEWALKER ────────────────┐
β”‚                                                        β”‚β”‚                                           β”‚
β”‚ Current Total:      0.07 $        Uptime:   5d 21h     β”‚β”‚       GARBAGE     TRASH       USED SPACE  β”‚
β”‚ Estimated Total:    0.38 $                             β”‚β”‚       COLLECTOR   CLEANUP     FILEWALKER  β”‚
β”‚                                                        β”‚β”‚                                           β”‚
β”‚ Disk Used:       552.10 GB                             β”‚β”‚   SL  0d 0h ago   0d 21h ago  5d 21h ago  β”‚
β”‚ Unpaid Data:     228.40 GB                             β”‚β”‚  AP1  unknown     0d 21h ago  unknown     β”‚
β”‚                                                        β”‚β”‚  EU1  unknown     0d 21h ago  unknown     β”‚
β”‚    Report Deviation: 57.33%                            β”‚β”‚  US1  unknown     0d 21h ago  unknown     β”‚
β”‚                                                        β”‚β”‚                                           β”‚
β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜


═══ Node 07 - Detailed information

β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€ NODE MAIN STATS β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€ FILEWALKER ────────────────┐
β”‚                                                        β”‚β”‚                                           β”‚
β”‚ Current Total:      0.08 $        Uptime:   5d 17h     β”‚β”‚       GARBAGE     TRASH       USED SPACE  β”‚
β”‚ Estimated Total:    0.38 $                             β”‚β”‚       COLLECTOR   CLEANUP     FILEWALKER  β”‚
β”‚                                                        β”‚β”‚                                           β”‚
β”‚ Disk Used:       545.25 GB                             β”‚β”‚   SL  0d 4h ago   0d 17h ago  unknown     β”‚
β”‚ Unpaid Data:     228.40 GB                             β”‚β”‚  AP1  unknown     0d 17h ago  unknown     β”‚
β”‚                                                        β”‚β”‚  EU1  unknown     0d 17h ago  unknown     β”‚
β”‚    Report Deviation: 60.19%                            β”‚β”‚  US1  unknown     0d 17h ago  unknown     β”‚
β”‚                                                        β”‚β”‚                                           β”‚
β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜


═══ Node 08 - Detailed information

β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€ NODE MAIN STATS β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€ FILEWALKER ────────────────┐
β”‚                                                        β”‚β”‚                                           β”‚
β”‚ Current Total:      0.05 $        Uptime:   4d 21h     β”‚β”‚       GARBAGE     TRASH       USED SPACE  β”‚
β”‚ Estimated Total:    0.28 $                             β”‚β”‚       COLLECTOR   CLEANUP     FILEWALKER  β”‚
β”‚                                                        β”‚β”‚                                           β”‚
β”‚ Disk Used:       458.90 GB                             β”‚β”‚   SL  unknown     0d 21h ago  unknown     β”‚
β”‚ Unpaid Data:     237.44 GB                             β”‚β”‚  AP1  unknown     0d 21h ago  unknown     β”‚
β”‚                                                        β”‚β”‚  EU1  unknown     0d 21h ago  unknown     β”‚
β”‚                                                        β”‚β”‚  US1  unknown     0d 21h ago  unknown     β”‚
β”‚                                                        β”‚β”‚                                           β”‚
β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜


═══ Node 09 - Detailed information

β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€ NODE MAIN STATS β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€ FILEWALKER ────────────────┐
β”‚                                                        β”‚β”‚                                           β”‚
β”‚ Current Total:      0.09 $        Uptime:   6d 15h     β”‚β”‚       GARBAGE     TRASH       USED SPACE  β”‚
β”‚ Estimated Total:    0.43 $                             β”‚β”‚       COLLECTOR   CLEANUP     FILEWALKER  β”‚
β”‚                                                        β”‚β”‚                                           β”‚
β”‚ Disk Used:       627.96 GB                             β”‚β”‚   SL  0d 5h ago   0d 15h ago  unknown     β”‚
β”‚ Unpaid Data:     244.43 GB                             β”‚β”‚  AP1  unknown     0d 15h ago  unknown     β”‚
β”‚                                                        β”‚β”‚  EU1  unknown     0d 15h ago  unknown     β”‚
β”‚    Report Deviation: 59.60%                            β”‚β”‚  US1  unknown     0d 15h ago  unknown     β”‚
β”‚                                                        β”‚β”‚                                           β”‚
β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜


═══ Node 10 - Detailed information

β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€ NODE MAIN STATS β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€ FILEWALKER ────────────────┐
β”‚                                                        β”‚β”‚                                           β”‚
β”‚ Current Total:      0.09 $        Uptime:   3d 23h     β”‚β”‚       GARBAGE     TRASH       USED SPACE  β”‚
β”‚ Estimated Total:    0.43 $                             β”‚β”‚       COLLECTOR   CLEANUP     FILEWALKER  β”‚
β”‚                                                        β”‚β”‚                                           β”‚
β”‚ Disk Used:       629.21 GB                             β”‚β”‚   SL  0d 1h ago   0d 23h ago  3d 23h ago  β”‚
β”‚ Unpaid Data:     251.17 GB                             β”‚β”‚  AP1  unknown     0d 23h ago  3d 23h ago  β”‚
β”‚                                                        β”‚β”‚  EU1  unknown     0d 23h ago  3d 23h ago  β”‚
β”‚    Report Deviation: 57.36%                            β”‚β”‚  US1  unknown     0d 23h ago  3d 23h ago  β”‚
β”‚                                                        β”‚β”‚                                           β”‚
β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜


═════════════════════════════════════════ All Nodes - Summary ═════════════════════════════════════════

β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€ NODE MAIN STATS β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€ FILEWALKER ────────────────┐
β”‚                                                        β”‚β”‚                                           β”‚
β”‚ Current total:     22.69 $                             β”‚β”‚       GARBAGE     TRASH       USED SPACE  β”‚
β”‚ Estimated total:   52.54 $                             β”‚β”‚       COLLECTOR   CLEANUP     FILEWALKER  β”‚
β”‚                                                        β”‚β”‚                                           β”‚
β”‚ Disk used:        48.49 TB                             β”‚β”‚   SL  0 running   0 running   1 running   β”‚
β”‚ Unpaid Data:      19.60 TB                             β”‚β”‚  AP1  0 running   0 running   0 running   β”‚
β”‚                                                        β”‚β”‚  EU1  0 running   0 running   0 running   β”‚
β”‚                                                        β”‚β”‚  US1  0 running   0 running   0 running   β”‚
β”‚                                                        β”‚β”‚                                           β”‚
β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜

trash folders
/mnt/storj/node001_2021.10/storage/trash/
β”œβ”€β”€ pmw6tvzmf2jv6giyybmmvl4o2ahqlaldsaeha4yx74n5aaaaaaaa
β”‚   └── 2024-07-09
β”œβ”€β”€ qstuylguhrn2ozjv4h2c6xpxykd622gtgurhql2k7k75wqaaaaaa
β”œβ”€β”€ ukfu6bhbboxilvt7jrwlqk7y2tapb5d2r2tsmj2sjxvw5qaaaaaa
└── v4weeab67sbgvnbwd5z7tweqsqqun7qox2agpbxy44mqqaaaaaaa

6 directories
/mnt/storj/node002_2022.04/storage/trash/
β”œβ”€β”€ pmw6tvzmf2jv6giyybmmvl4o2ahqlaldsaeha4yx74n5aaaaaaaa
β”‚   β”œβ”€β”€ 2024-07-09
β”‚   └── 2024-07-15
β”œβ”€β”€ qstuylguhrn2ozjv4h2c6xpxykd622gtgurhql2k7k75wqaaaaaa
β”œβ”€β”€ ukfu6bhbboxilvt7jrwlqk7y2tapb5d2r2tsmj2sjxvw5qaaaaaa
β”‚   └── 2024-07-05
└── v4weeab67sbgvnbwd5z7tweqsqqun7qox2agpbxy44mqqaaaaaaa
    └── 2024-07-06

9 directories
/mnt/storj/node003_2023.12/storage/trash/
β”œβ”€β”€ pmw6tvzmf2jv6giyybmmvl4o2ahqlaldsaeha4yx74n5aaaaaaaa
β”‚   └── 2024-07-09
β”œβ”€β”€ qstuylguhrn2ozjv4h2c6xpxykd622gtgurhql2k7k75wqaaaaaa
β”œβ”€β”€ ukfu6bhbboxilvt7jrwlqk7y2tapb5d2r2tsmj2sjxvw5qaaaaaa
└── v4weeab67sbgvnbwd5z7tweqsqqun7qox2agpbxy44mqqaaaaaaa

6 directories
/mnt/storj/node004_2024.06/storage/trash/
β”œβ”€β”€ pmw6tvzmf2jv6giyybmmvl4o2ahqlaldsaeha4yx74n5aaaaaaaa
β”‚   β”œβ”€β”€ 2024-07-09
β”‚   β”œβ”€β”€ 2024-07-11
β”‚   └── 2024-07-13
└── v4weeab67sbgvnbwd5z7tweqsqqun7qox2agpbxy44mqqaaaaaaa

6 directories
/mnt/storj/node005_2024.07/storage/trash/
└── pmw6tvzmf2jv6giyybmmvl4o2ahqlaldsaeha4yx74n5aaaaaaaa
    └── 2024-07-15

3 directories
/mnt/storj/node006_2024.07/storage/trash/
└── pmw6tvzmf2jv6giyybmmvl4o2ahqlaldsaeha4yx74n5aaaaaaaa
    └── 2024-07-15

3 directories
/mnt/storj/node007_2024.07/storage/trash/

0 directories
/mnt/storj/node008_2024.07/storage/trash/

0 directories
/mnt/storj/node009_2024.07/storage/trash/
└── pmw6tvzmf2jv6giyybmmvl4o2ahqlaldsaeha4yx74n5aaaaaaaa
    └── 2024-07-15

3 directories
/mnt/storj/node010_2024.07/storage/trash/
└── pmw6tvzmf2jv6giyybmmvl4o2ahqlaldsaeha4yx74n5aaaaaaaa
    └── 2024-07-15

3 directories
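The per-day layout shown above (`trash/<satellite-id>/<YYYY-MM-DD>/`) makes it easy to see how much space each trash day is actually holding. A minimal sketch using `find` and `du`; the `/mnt/storj` base path is taken from the listings above, so adjust it to your own mounts:

```shell
# Report disk usage of each per-day trash folder across all nodes.
# Layout assumed: <BASE>/<node>/storage/trash/<satellite>/<YYYY-MM-DD>
BASE="${BASE:-/mnt/storj}"
if [ -d "$BASE" ]; then
  find "$BASE" -maxdepth 5 -type d \
    -path '*/storage/trash/*/????-??-??' -exec du -sh {} +
fi
```

Empty output just means no dated trash folders exist under `BASE` right now.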

Yeah, it seems we share double the capacity for the same price… this trash + filewalker problem is endless.

Then it will trash your node every 2 days; it needs to go through all your files every time.