When will average node space be the same as my currently used space?

Please check the image below and help me understand when the average will be the same as the used space. This node has been full for 10 days, but I haven’t noticed any increase in the average for the current month.

Will the average be the same as the used space on January 1, 2024, or am I missing some details?

Thank you!

Please check:

Which filesystem and cluster size are on the drive?
How big is the drive, and what is the model number?
In the blobs folder, are there 4 or 6 folders?
Fatal errors in the log?
Log too big to open?
Defragmentation?
Temp folder not empty when the node stopped?
In general, provide hardware, internet, and software info.
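The checklist above can be sketched as a small script. This is only a sketch: the `storage_dir` layout (`blobs`, `temp` subfolders) matches a typical Storj node storage directory, but the exact paths and the log location are assumptions you must adjust for your setup.

```python
import os

def check_node_storage(storage_dir, log_path):
    """Run a few of the checks from the checklist above; all paths are assumptions."""
    report = {}

    # blobs folder: a Storj node normally has one subfolder per satellite
    blobs = os.path.join(storage_dir, "blobs")
    report["blob_folders"] = len(os.listdir(blobs)) if os.path.isdir(blobs) else 0

    # temp folder should be empty while the node is stopped
    temp = os.path.join(storage_dir, "temp")
    report["temp_empty"] = not os.listdir(temp) if os.path.isdir(temp) else True

    # count FATAL lines without loading the whole (possibly huge) log into memory
    fatal = 0
    if os.path.isfile(log_path):
        with open(log_path, errors="replace") as f:
            fatal = sum(1 for line in f if "FATAL" in line)
    report["fatal_errors"] = fatal
    return report
```

Point it at your storage directory and log file, e.g. `check_node_storage("/media/root/node-hdd-1/storage", "/var/log/storagenode.log")` (both paths hypothetical).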

I’m using the ext4 filesystem. More details about the HDD are visible below:

Details about disk (tune2fs command)
tune2fs 1.45.5 (07-Jan-2020)
Filesystem volume name:   node-hdd-1
Last mounted on:          /media/root/node-hdd-1
Filesystem UUID:          b3a928af-640e-498b-998c-37f24ddc56d4
Filesystem magic number:  0xEF53
Filesystem revision #:    1 (dynamic)
Filesystem features:      has_journal ext_attr dir_index filetype needs_recovery extent 64bit flex_bg sparse_super large_file huge_file dir_nlink extra_isize metadata_csum
Filesystem flags:         signed_directory_hash 
Default mount options:    user_xattr acl
Filesystem state:         clean
Errors behavior:          Continue
Filesystem OS type:       Linux
Inode count:              274661376
Block count:              4394582016
Reserved block count:     219729100
Free blocks:              1327911409
Free inodes:              201513567
First block:              0
Block size:               4096
Fragment size:            4096
Group descriptor size:    64
Blocks per group:         32768
Fragments per group:      32768
Inodes per group:         2048
Inode blocks per group:   128
Flex block group size:    16
Filesystem created:       Thu Sep 14 16:29:18 2023
Last mount time:          Wed Dec  6 14:55:39 2023
Last write time:          Wed Dec  6 14:55:39 2023
Mount count:              28
Maximum mount count:      -1
Last checked:             Thu Sep 14 16:29:18 2023
Check interval:           0 (<none>)
Lifetime writes:          6045 GB
Reserved blocks uid:      0 (user root)
Reserved blocks gid:      0 (group root)
First inode:              11
Inode size:	          256
Required extra isize:     32
Desired extra isize:      32
Journal inode:            8
Default directory hash:   half_md4
Directory Hash Seed:      7fb02468-4583-4ca2-8344-fbf6ec691f8c
Journal backup:           inode blocks
Checksum type:            crc32c
Checksum:                 0xb149fc22
  • In the blobs folder, there are 4 subfolders.
  • I don’t see any fatal errors in the log.
  • Defragmentation has not been performed yet.
  • Disk model: TOSHIBA MG09ACA18TE.
  • CPU: AMD Ryzen 3600.
  • RAM: 32GB DDR4.
  • Internet: 5G modem with speeds of 300/100 Mbps.
  • OS: Ubuntu 20.04.2
  • Storj node version: v1.92.1
  • Suspension and audits score: 100%
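As a sanity check, the tune2fs numbers above can be turned into sizes with simple arithmetic (the only assumption is reading the values straight from the dump):

```python
# Values copied from the tune2fs output above
BLOCK_SIZE = 4096            # bytes per block
BLOCK_COUNT = 4_394_582_016  # "Block count"
FREE_BLOCKS = 1_327_911_409  # "Free blocks"

total_tb = BLOCK_COUNT * BLOCK_SIZE / 1e12  # ~18.0 TB, matching the 18 TB Toshiba MG09 drive
free_tb = FREE_BLOCKS * BLOCK_SIZE / 1e12   # ~5.4 TB still free at the filesystem level
print(f"total: {total_tb:.1f} TB, free: {free_tb:.1f} TB")
```

Note the filesystem still reports several TB free even though the node considers itself full, which suggests the node's allocated share is smaller than the whole disk.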

I’m using four identical HDDs. I see a daily average increase on the 3 other nodes, but those nodes are not completely filled up.

Is it possible that the argument below in my node-start command is causing this problem?

Thank you for your help.

Yes. You need to enable it, since your accounted disk usage differs from the actual disk usage.
Please also search your logs for errors related to gc-filewalker, lazyfilewalker, and retain.
As soon as your node finishes a filewalker run and retain, the latter will move the garbage to the trash. After 7 days it will be permanently removed.
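A quick way to pull out the relevant lines is a small filter like the one below. The subsystem names come from the advice above; the log line format itself is an assumption, since it varies between installs.

```python
import re

# Subsystems named above; severity keywords are an assumption about the log format
SUBSYSTEMS = re.compile(r"gc-filewalker|lazyfilewalker|retain", re.IGNORECASE)
SEVERITIES = ("ERROR", "FATAL")

def gc_related_errors(lines):
    """Return log lines about the filewalker/retain pipeline at error level."""
    return [
        line for line in lines
        if SUBSYSTEMS.search(line) and any(s in line for s in SEVERITIES)
    ]
```

Feed it an open log file, e.g. `gc_related_errors(open("/path/to/node.log"))` (path hypothetical); an empty result plus no completed retain runs would point at the filewalker never finishing.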


Perfect, I will try today :slightly_smiling_face:
Thank you.

I have multiple nodes with this issue, all of which are old nodes. Newly created nodes do not have this issue. I wonder if it could be due to incorrect filter parameters.

I moved your report here, because the previous one was about a trash discrepancy, not the difference between average used space and actual used space.
Your issue is 100% related to a non-working Garbage Collector/Retain.
Please search your logs for errors related to gc-filewalker, lazyfilewalker, and retain.
Until the filewalker finishes successfully, the retain process will not move the deleted data to the trash to match the usage reported to the satellites (the left graph).

Small differences when the node is full can be undeleted test data,

or a spammed temp folder.


Also search the log for FATAL errors that interrupt filewalks and GC
(if the node is configured to auto-restart).
