This only means that your storage is slow. The storagenode process consumes extra RAM only when the disk is not able to keep up.
Hmmm… do we have any kind of memory usage limit? This situation appeared only after the new garbage collector was pushed to production. The funniest thing is: 2 other nodes are hosted on the same disk with no memory utilization problems, so this is not connected to a storage issue )))
Another point: what about several running instances of one node, up to 6 processes per node? I can understand 5 (1 running node and 4 garbage collectors / bloom filters, one for each satellite), but 6?!
Check the HDD; that is very big memory consumption. 30-50 MB is OK under high load, this much means your HDD works very slowly. Maybe run a disk check (chkdsk)?
As you can see, the other nodes have a memory consumption of 4 GB per node.
I can now definitely explain why they are different:

# in-memory buffer for uploads
filestore.write-buffer-size: 40.0 MiB

I was testing a huge buffer size on that node, while the other one has:

# in-memory buffer for uploads
filestore.write-buffer-size: 4.0 MiB
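A rough back-of-the-envelope, on the assumption that this buffer is allocated per concurrent upload (my reading of that setting, not something I have verified in the code):

100 concurrent uploads × 40 MiB ≈ 4 GB of RAM
100 concurrent uploads × 4 MiB ≈ 400 MiB of RAM

which would explain why the node with the 40 MiB buffer eats so much more memory.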
But still, why 4-6 processes per node?
This is forbidden by the Supplier Terms & Conditions for exactly the reason that nodes will inevitably affect each other.
I can only state that you shouldn’t host more than one node on the same disk or the same RAID pool. It’s not only useless if they are in the same /24 subnet of public IPs (together they would get the same traffic as a single node), but also stressful for the disk, and, as you have proven, it consumes extra RAM and keeps the load high 24/7.
The node and 5 lazy filewalkers. In normal circumstances (1 node / 1 disk or 1 node / RAID pool) you shouldn’t see them all running at the same time.
With such a setup (multiple nodes per disk), each node will likely have all 6 instances running at the same time, because your poor and unfortunate disk is suffering from the actions of its master.
Stop all nodes but one on that disk, give it a breather, and check all the problems you have after a week (and you have likely gathered them all, as far as I understand: the usage discrepancy, the trash not updating, overused RAM, perhaps high CPU usage and of course 100% disk load 24/7, node crashes, perhaps locked or already corrupted databases… well done…).
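If you want to see what those extra processes actually are, on a Linux host something like this is enough (just a sketch; the exact child-process names can differ between versions and setups):

ps -ef | grep storagenode

Each lazy filewalker runs as a separate storagenode child process, which is where the extra entries in the process list come from.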
You may be absolutely right, but not in this case :))))
16 TB SAS disk connected to a 24Gb SAS controller, single node.
3 processes
disk activity:
OK, I agree, it’s an old and busy node.
What’s in that txt file?
Date of purchase and installation, to monitor drive failure and usage.
I have that too for all the parts that go into nodes… HDD, NAS fans, UPS. But don’t keep it on the storage drive. If the drive fails, there go your notes, too.
In my case it’s only data; the DBs, logs, and identity are on a separate NVMe RAID 1.
Yep, I had the same: a few nodes had ~100% CPU usage and my PC started lagging like hell… I turned them off and added --retain.concurrency 1 to the command, and now I hope it will fix it…
Fun fact: after turning them off I realized I had 500 GB of paging files… because when I shut down the nodes, it freed up about 500 GB on my root drive…
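For anyone else who wants to try that --retain.concurrency flag: here is roughly where it goes if you run the node under Docker (a sketch of the relevant part only, assuming the usual docker run setup; everything else in your command stays as it is):

docker run -d … storjlabs/storagenode:latest --retain.concurrency=1

The same setting should also work as retain.concurrency: 1 in config.yaml. It limits how many retain (garbage collection / bloom filter) jobs run at the same time, which is why it helps when several of them pile up.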
Is this a new thing, or just the old node T&C renamed?
Looks like an error in the title. The text starts with “THIS NODE OPERATOR TERMS AND CONDITIONS…” but the title is “Node supplier terms and conditions.” Someone got distracted mid-sentence?
We don’t supply nodes. We operate them.
So it’s either storage supplier or node operator, not node supplier or storage operator. Hmm. The latter actually works too.
This gets me fired up again.
I got used to SNO, the forum is full of SNO, and I like the “operator” attribute.
I dislike the “supplier” attribute; I won’t use the SNS term, and I refuse to be called a supplier.
My ears bleed just reading it.
Even Dale Comstock was called an “operator” by the CIA. If they had called him a “supplier”, he would have taken his gun and…
I’m starting to point fingers at the marketing team if this rebranding continues… first the logos were a miss, then they try to change the terms. Do they have any IT or node-operation background? And I’ll stop at that…
Storage Supplier, aka SS… I will say no more.
I think it was just renamed.
Last updated: October 26th 2023