Why storagenodes use max CPU + take up tons of memory and how to fix it

my orders.db is only 755MB for my 13.5TB node…

what does orders.db do anyways?
i mean is there a good reason why @Vadim 's 5TB node is or was 2.5GB / 2GB vacuumed… an operation that i refuse to do :smiley: (good thing i’ve managed to keep my orders.db small then, maybe…)

Why is your orders.db for a 7TB node nearly the same size as mine, when my node is twice the size…

ofc not everything scales in a linear fashion… say, if orders track the number of customers using a storagenode, then there could be vast differences in database sizes depending on a wide range of factors…

just wondering if there is some rhyme or reason to why orders.db ends up with vastly different sizes…
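For anyone wanting to see what actually lives in there, here’s a quick read-only sketch that lists the tables and row counts. The path is an assumption, point it at the orders.db in your own storage dir (and stop the node first to avoid lock contention):

```python
# Peek inside orders.db: list every table and its row count.
import sqlite3

DB_PATH = "orders.db"  # assumption -- use the path in your storage dir

# Open read-only so we can't accidentally modify anything.
con = sqlite3.connect(f"file:{DB_PATH}?mode=ro", uri=True)
tables = [row[0] for row in con.execute(
    "SELECT name FROM sqlite_master WHERE type = 'table'")]
for table in tables:
    (count,) = con.execute(f'SELECT COUNT(*) FROM "{table}"').fetchone()
    print(f"{table}: {count} rows")
con.close()
```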


my old 6TB node has a 670MB orders.db (but I did vacuum it a few months ago).
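In case anyone wants to try the same, here’s a rough sketch of a vacuum run; it assumes the node is stopped, and the path and backup handling are illustrative only. Note that VACUUM rebuilds the whole file, so you need free disk space of at least the database’s size while it runs:

```python
# Compact orders.db with VACUUM and report how much space it reclaimed.
import os
import shutil
import sqlite3

DB_PATH = "orders.db"  # assumption -- use the path in your storage dir

shutil.copy2(DB_PATH, DB_PATH + ".bak")  # keep a backup before touching it
before = os.path.getsize(DB_PATH)

con = sqlite3.connect(DB_PATH)
con.execute("VACUUM")  # rebuilds the file, dropping free pages
con.close()

after = os.path.getsize(DB_PATH)
print(f"{before / 1e6:.0f} MB -> {after / 1e6:.0f} MB")
```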


The issue with SMR drives is that, over time, blocks stored by STORJ get deleted at random across the whole of the storage space STORJ uses. When a new block is placed in the resulting free space, all the disk area allocated to that particular SMR zone must be rewritten. For many disks this zone may be 256MB in size, far larger than the 2MB data blocks STORJ is trying to store.
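To put a number on that: with the figures above, the worst case is 128x write amplification, since landing one 2MB block in a zone can force the full 256MB zone to be rewritten. A trivial sketch of the arithmetic:

```python
# Worst-case SMR write amplification, using the sizes quoted above.
ZONE_SIZE_MB = 256  # typical SMR zone size
PIECE_SIZE_MB = 2   # typical block STORJ writes

amplification = ZONE_SIZE_MB / PIECE_SIZE_MB
print(f"up to {amplification:.0f}x the data physically rewritten "
      f"per {PIECE_SIZE_MB}MB block")  # -> up to 128x
```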

What makes things even worse is that STORJ also stores its database files on the same drive, so depending on the feature set of the disk in use, every write back to a database could cause an SMR zone rewrite as database pages and index structures get updated/modified.
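One thing that can soften the database side of this is SQLite’s WAL journal mode, which appends changes sequentially to a -wal file instead of rewriting pages in place, a friendlier pattern for SMR zones. Just a sketch (the node software may already set this itself; stop the node before touching the file):

```python
# Check orders.db's journal mode and switch to WAL if it isn't already.
# WAL appends changes sequentially instead of rewriting pages in place.
import sqlite3

DB_PATH = "orders.db"  # assumption -- use the path in your storage dir

con = sqlite3.connect(DB_PATH)
(mode,) = con.execute("PRAGMA journal_mode").fetchone()
print("current journal mode:", mode)
if mode != "wal":
    (mode,) = con.execute("PRAGMA journal_mode=WAL").fetchone()
    print("switched to:", mode)
con.close()
```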

The result is that you are very dependent on the drive’s ability to hide the limitations of the SMR structure and/or the file system’s ability to modify the way it handles writes to an SMR drive. Either way, a nearly full drive will start to show performance issues if it is receiving a constant stream of delete and new-block requests.
