It is also obvious that we have that already in place. You can start digging into it here: Guide to debug my storage node, uplink, s3 gateway, satellite
My question about the percentage was referring to the meaning of this variable in your output.
rate=0.435235
I was wondering whether that meant 43.5% was dropped, or something else. I think that caused some confusion; I may have misinterpreted it.
I’m not worried about this “issue” btw. I don’t think it’ll be a problem; I’m just curious about what I’m looking at.
Yes, rate usually translates into a percentage.
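So, reading it that way, 0.435235 would correspond to roughly 43.5%.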
That looks really interesting. I misunderstood your earlier post; it sounded a bit like there wasn’t enough information. Anyway…
Why use 1 GB if 100 KB is enough, though? That seems immensely overkill. Is the memory for the serial… thingamajig always allocated, or will it just leave the memory free if it doesn’t need it?
^this
Based on my current test I could set all my 6 nodes to 100 KB per node. For the next few months they would not drop a serial number. If I set the limit to 1 GB, all 6 nodes will consume exactly the same amount of memory. I have the memory in my system, so I might as well assign it even if it will not be used. The advantage of the 1 GB limit comes into play when a customer is uploading or downloading more than usual. With a 100 KB limit I would start dropping serial numbers. With a 1 GB limit I wouldn’t. As long as I have it available, I might as well assign it.
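For anyone who wants to try the same thing, here is only a sketch: I believe the option in question is storage2.max-used-serials-size (check the first post or your node’s config.yaml to confirm the exact name). In config.yaml it would look roughly like this:

    # in-memory limit for the used-serials store (the default is around 1 MB)
    storage2.max-used-serials-size: 1.00 GB

or, for a docker node, as an extra flag at the end of the docker run command:

    --storage2.max-used-serials-size=1GB

Restart the node afterwards so the new limit takes effect.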
Just got updated to 1.6.4 on my linux nodes. No problems so far.
That only leaves the question of why it’s at 1 MB by default then… seems kind of on the low end of the spectrum. 100–256 MB shouldn’t be unreasonable even on an RPi; even 50 MB would make so much more sense. I mean, who cares about 50 MB today, especially if it’s unallocated when not needed…
Of course, there’s also the question of what happens if the limit is set to 1 GB but it runs out at a much lower amount because the system doesn’t have memory to spare…
In my case the system shouldn’t be able to run out of memory. I have seen it peak at 95%, but most of that is the ARC and a few VMs, which the system should release memory from if it needs it for other stuff… might have something to do with the fact that I’m running 512K recordsizes…
How do I set the storage node to 1 GB for the serials DB memory?
I’ve got 48 GB and only plan on running a couple of storage nodes max… so I might as well set it so it has ample to work with.
Check the first post.
But you’re really overthinking this… Check the monkit stats or let Storjlabs collect the telemetry on this and just leave it be.
As for why you wouldn’t assign more: if you’re not conservative about RAM use when programming, you’re going to end up with bloated software. This is just one very specific feature; there are many more. Start with something reasonable and see if it’s a problem. If it is, you can always raise it, but it really looks like it won’t be anyway.
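If you do want to look at the monkit stats yourself rather than wait on telemetry, one way (a sketch, assuming your node has the debug endpoint pinned to a fixed port, as described in the debugging guide linked earlier) is:

    # assuming the node was started with --debug.addr=127.0.0.1:6000
    curl -s http://127.0.0.1:6000/mon/stats | grep -i serial

That should list the serial-related counters the node records.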
Seems more relevant to my node than the average node, since I don’t plan on expanding fleet-wise but capacity-wise… also, if it only takes the memory it needs, which littleskunk said, then I see no reason not to just let it take whatever the storage node would want…
But yeah, you are most likely right… as usual xD. I was also kind of thinking the same thing… but then again, if I go to the extreme end of the node-size spectrum, it might be wise to tweak some of the relevant settings. Of course, doing that comes with the risk of running into issues that other nodes don’t have, because my settings would be unique… which is also something to account for; it might make troubleshooting hell in the future…
12 posts were split to a new topic: I have a database is malformed problem… And it doesn’t even say which database is malformed
5 posts were split to a new topic: “Disk space used this month” is 0 despite Egress of 1.9GB and Ingress 14.76GB
I love the new dashboard in v1.6.4; each release brings great new improvements.
One question I have is about the “Total Earned” amount. I assume this is actually the total amount paid, since the Total Held Amount is not yet paid out but was also earned.
Has the Docker image been released yet?
Yes. Released and installed already.
19 posts were split to a new topic: Will it be safe to allocate more space to this node based on my disk utility info presented to me and here
Auto-updated to 1.6.4… no issues.
3 posts were split to a new topic: Is it just me or is the logs basically without errors in the new version…?