Here’s an image of two nodes on the same subnet. There are 25 nodes on this subnet; 24 of them show the same behavior: they are sized between 500 and 650 GB and have all completely stalled in growth. The 25th node is sized at 5 TB and gets healthy ingress every day.
This is not really a problem as such, but I would like the 24 small nodes to fill up and then stop ingressing, instead of holding back at around 80%.
All nodes are dockerized, have been running for years, and are healthy. All nodes are running Hash Store and were upgraded to it at the exact same time. I don’t come to this location very often; the last time was back in the summer, when I set the nodes to do the Hash Store conversion. I must have left the 25th node at 5000 GB instead of 500 GB.
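For reference, on a standard docker setup the allocation is the `STORAGE` environment variable in the run command, so a leftover 5 TB setting would look something like this (container name, mount path, and the omitted identity/wallet variables are placeholders, not the actual config):

```shell
# Hypothetical run command, trimmed to the relevant part:
# the 25th node's allocation left at 5000GB instead of the
# intended 500GB used by the other 24 nodes.
docker run -d --name storagenode25 \
  -e STORAGE="5000GB" \
  --mount type=bind,source=/mnt/raid0/node25,destination=/app/config \
  storjlabs/storagenode:latest
```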
I don’t know if this is related to anything, but I often see nodes that hover around 70–90 GB free, no matter how large the disk is, how much trash there is, or how long they’ve been running.
I can’t explain it. That final space never fills. I can add 100GB to the node… it will fill that new space… and pause at 70-90GB Free again.
I agree, it sounds like something like that, but all of the nodes differ slightly in size, and it does not seem to be a fixed amount of space in GB but rather a percentage of total node size. Other than manually starting the Hash Store conversion, these nodes are bog-standard with no exciting configuration changes.
I see this happening to my node too. It’s fully vetted and has 500 GB of allocated space, but it stopped getting ingress at 360 GB occupied. It has 14% of space left.
I don’t use the dedicated disk option, and it’s fully migrated to hashstore. After migration, I cleaned the blobs and deleted all DBs (my standard procedure). No lazy filewalker. Startup piece scan active. Version 1.142.7. Docker on Ubuntu.
What would be the cause of this? I’m pretty sure it’s related to hashstore.
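For anyone wanting to compare setups: a minimal sketch of how those two settings are typically passed as extra storagenode arguments after the image name in a docker run (the name, allocation, and mounts here are placeholders, not this poster’s actual command):

```shell
# Sketch of a docker run with the two settings mentioned above:
# lazy filewalker disabled, piece scan on startup enabled.
docker run -d --name storagenode \
  -e STORAGE="500GB" \
  storjlabs/storagenode:latest \
  --pieces.enable-lazy-filewalker=false \
  --storage2.piece-scan-on-startup=true
```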
I have the same problem. On the node dashboard, I have 350 GB free, but it never fills up. On the Multinode dashboard, I only have 5–10 GB. The node is 1.142.7; the multinode is the latest version. No dedicated disk. 2.5 TB allocated.
I don’t use the multinode dashboard, but the underlying storage is the same RAID0 (BAD, I know) volume for all 25 nodes. It’s two 18 TB drives, so there’s plenty of space left on it, and each node’s individual subfolder matches what that node thinks it’s using.
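A quick way to cross-check that agreement is to sum each node’s subfolder on the volume and compare against what the dashboards report; a rough sketch, assuming one subfolder per node under a single mount point (the path is an assumption, adjust to the real one):

```shell
# Per-node on-disk usage in 1 GB blocks, largest first.
# NODES_ROOT is a placeholder; point it at the actual RAID0 mount.
NODES_ROOT="${NODES_ROOT:-/mnt/raid0}"
du -s --block-size=1G "$NODES_ROOT"/*/ | sort -rn
```

Note that `du` reports actual on-disk usage, which can differ slightly from the node’s own accounting (filesystem overhead, trash not yet collected), so small discrepancies are expected.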