Multiple nodes on same subnet; only one getting traffic

Here’s an image of two nodes on the same subnet. There are 25 nodes on this subnet. 24 of them show the same behavior: they are sized from 500-650GB and have all completely stalled in growing. The 25th node is sized at 5TB and gets healthy ingress every day.

This is not really a problem as such, but I would like the 24 small nodes to fill up completely and then stop ingressing, instead of holding back at around 80%.

All nodes are Dockerized, have been running for years, and are healthy. All nodes are running Hash Store and were upgraded to it at the exact same time. I don’t come to this location very often; the last time was back in the summer, when I set the nodes to do the Hash Store conversion - I must have left the 25th node on 5000GB instead of 500GB.

All nodes were created at roughly the same time.

Kind regards


I don’t know if this has anything to do with it, but I often see nodes that hover around 70-90GB free. No matter how large the disk is, no matter how much trash, no matter how long they’ve been running.

I can’t explain it. That final space never fills. I can add 100GB to the node… it will fill that new space… and pause at 70-90GB Free again.

Maybe it’s all in my head :winking_face_with_tongue:


It’s what I generally see at my other sites too, but then again, I have several nodes that just sit happily at 100% utilization.

I’d love to hear input from a developer :slight_smile:


It sounds like the reserved space from the dedicated disk feature.
Without that feature, the emergency threshold is 5GB, as far as I know.
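For illustration only (this is a sketch of the logic being discussed, not the actual storagenode code; the 5GB figure is the emergency threshold mentioned above), the gating decision amounts to a simple free-space check:

```python
# Illustrative sketch -- NOT the real storagenode implementation.
# Assumes a fixed emergency threshold (5 GB) below which a node
# stops accepting new pieces.

EMERGENCY_THRESHOLD = 5 * 10**9  # 5 GB in bytes

def accepts_ingress(allocated: int, used: int,
                    threshold: int = EMERGENCY_THRESHOLD) -> bool:
    """Return True if the node should still accept new pieces."""
    return (allocated - used) > threshold

# A 500 GB node stalled at 80% usage still has ~100 GB free,
# far above a 5 GB threshold -- so this check alone would not
# explain the stall being reported in this thread.
```

Note that a 5GB floor is a fixed amount, which would not explain nodes of different sizes all stopping at roughly the same percentage.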

I agree, it sounds like something like that, but all of the nodes differ slightly in size, and it does not seem to be a fixed amount of space in GB, but rather a percentage of total node size. Other than manually starting the Hash Store conversion, these nodes are bog standard with no exciting configuration changes.

I see this happening to my node too. It’s fully vetted and has 500GB of allocated space, but it stopped getting ingress at 360GB occupied. It has 14% of its space left.
I don’t use the dedicated disk option, and it’s fully migrated to hashstore. After the migration, I cleaned the blobs and deleted all the db-es (my standard procedure). No lazy filewalker; startup piece scan active. Version 1.142.7. Docker on Ubuntu.
What would be the cause of this? I’m pretty sure it’s related to hashstore.


What’s reported by the metrics or the multinode dashboard?

I have the same problem. On the node dashboard I have 350 GB free, but it never fills up. On the Multinode dashboard, I only have 5-10 GB. The node is on 1.142.7; the multinode is the latest version. No dedicated disk. Allocated: 2.5 TB.

(Screenshots: node dashboard, Multinode dashboard, OS)

Ingress is fluctuating, as if it sees the disk as full.

I don’t use the multinode dashboard, but the underlying storage is the same RAID0 (bad, I know) volume for all 25 nodes. It’s two 18TB drives, so there’s plenty of space left on it, and each node’s subfolder size corresponds to what that node itself thinks it’s using.
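The on-disk cross-check described above can be scripted. A minimal sketch (the `/mnt/storj` layout and one-subfolder-per-node assumption are hypothetical; adjust to your own paths):

```python
import os

def dir_size(path: str) -> int:
    """Total size in bytes of all regular files under path (skips symlinks)."""
    total = 0
    for root, _dirs, files in os.walk(path):
        for name in files:
            fp = os.path.join(root, name)
            if not os.path.islink(fp):
                total += os.path.getsize(fp)
    return total

# Hypothetical layout: one subfolder per node under /mnt/storj.
# for node in sorted(os.listdir("/mnt/storj")):
#     print(node, round(dir_size(os.path.join("/mnt/storj", node)) / 1e9, 1), "GB")
```

Comparing these numbers against each node’s dashboard would show whether the node’s own accounting matches what is actually on disk.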

I don’t use the multinode dashboard. Which metrics? How do I check them?

You need to set debug.addr: “0.0.0.0:7001”
or something like that in the Docker options on start; then go to node-ip:7001/
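The debug endpoint serves plain-text metric lines, so you can filter for space-related entries. A small parser sketch (the metric names in the sample are made up for illustration; check your own node’s actual output, e.g. with `curl -s http://NODE_IP:7001/metrics`):

```python
def find_metrics(text: str, keyword: str) -> list[str]:
    """Return metric lines whose text contains keyword (case-insensitive)."""
    return [line for line in text.splitlines()
            if keyword.lower() in line.lower()]

# Hypothetical sample of what the endpoint might return:
sample = "used_space 400000000000\nallocated_space 500000000000\nuptime 12345"
```

Filtering for something like "space" should surface the used/allocated figures the node is acting on, which you can compare against the dashboard numbers.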


I have that, but what should I look for in those metrics?