Changelog v1.6.4

So in your example, 43.5% were dropped, and the absolute number is 3440?

A random order was dropped 3440 times. I am not sure about the percentage. With a memory limit of 50 KB it is hard to give that percentage any meaning. We could improve the monkit data to get a better understanding of how many serial numbers have been processed in total. Is 3440 a high or a low number compared to that total? However, with 100 KB my node is not dropping any serial numbers at all, which means I would never see that additional monkit data anyway.
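To make that concrete, here is a minimal sketch (not the actual storagenode code; every name in it is made up) of the kind of extra monkit counters that would answer the question of how many serials were dropped out of how many processed:

```go
// A minimal sketch, NOT the actual storagenode code: all names here are
// made up. It only illustrates how extra monkit counters could show how
// many serials were processed versus dropped under the memory limit.
package usedserials

import (
	"errors"

	"github.com/spacemonkeygo/monkit/v3"
)

var (
	mon = monkit.Package()

	errSerialDropped = errors.New("used serials: memory limit reached, serial dropped")
)

// table is a stand-in for the in-memory used-serials store.
type table struct {
	memoryLimit int                 // configured limit in bytes (100 KB, 1 GB, ...)
	memoryUsed  int                 // grows only as serials actually arrive
	serials     map[string]struct{} // tracked serial numbers
}

// add records a serial number, or drops it once the memory limit is reached.
func (t *table) add(serial string) error {
	mon.Counter("used_serials_processed").Inc(1) // total serials seen
	if t.memoryUsed+len(serial) > t.memoryLimit {
		mon.Counter("used_serials_dropped").Inc(1) // dropped because of the limit
		return errSerialDropped
	}
	t.serials[serial] = struct{}{}
	t.memoryUsed += len(serial)
	return nil
}
```

With counters like these, both sides of the ratio would show up in the monkit output instead of only the drops.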

acting in the dark seems kinda pointless, proper tracking / logging seems like the obvious way to go, and having data to predict failures or future problems would also help avoid issues down the road…

ofc one wouldn't want to log everything i would assume… more like just the amount of dedicated memory and how much is actually used… detailed logging could hinder performance i suppose…

but at least being able to see some basic stats from time to time on the fundamental systems of the storagenode seems quite prudent, at least in debug mode.

We already have that in place. You can start digging into it here: Guide to debug my storage node, uplink, s3 gateway, satellite

My question about the percentage was about the meaning of this variable in your output:
rate=0.435235
I was wondering whether that means 43.5% were dropped, or something else entirely. I think that caused some confusion; I may have misinterpreted it.

I'm not worried about this "issue" btw. I don't think it'll be a problem, I'm just curious about what I'm looking at.
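For what it's worth, if rate really is the fraction of serials that were dropped (that is just my guess at its meaning), then 3440 drops at a rate of 0.435 would imply roughly 3440 / 0.435 ≈ 7,900 serials seen in that window, which would at least tie the two numbers together.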

Yes, the rate usually translates into a percentage.

that looks really interesting, i misunderstood your earlier post, it sounded a bit like there wasn't enough information, anyways…

why use 1gb if 100kb is enough though… seems immensely overkill. is the memory setting for the serial… thingamajig always allocated, or will it just leave the memory free if it doesn't need it?

^this

Based on my current test I could set all my 6 nodes to 100 KB per node. For the next few months they would not drop a single serial number. If I set the limit to 1 GB instead, all 6 nodes will still consume exactly the same amount of memory. I have the memory in my system, so I might as well assign it even if it will not be used. The advantage of the 1 GB limit comes into play when a customer is uploading or downloading more than usual. With a 100 KB limit I would start dropping serial numbers; with a 1 GB limit I wouldn't. As long as I have the memory available I might as well assign it.
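To put rough numbers on that (the per-entry size here is my own assumption, not something taken from the code): if each tracked serial costs on the order of 50 bytes, then 100 KB holds about 2,000 serials while 1 GB holds about 20 million. The higher limit changes nothing during normal traffic, but it can absorb a burst roughly ten thousand times larger before anything gets dropped.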

Just got updated to 1.6.4 on my linux nodes. No problems so far.

that only leaves the question of why it's at 1mb by default then… seems kinda on the low end of the spectrum… 100-256mb shouldn't be unreasonable even on a rpi, even 50mb would still make so much more sense… i mean who cares about 50mb today… especially if it's unallocated when not needed…

ofc there would also be the question of what happens if the limit is set to 1gb but it runs out at a much lower amount because the system doesn't have memory to spare…

in my case my system shouldn't be able to run out of memory, even tho i have seen it peak at 95%, but most of that is the ARC and a few vms… which the system should release memory from if it needs it for other stuff… might have something to do with me running 512k recordsizes…

how do i set the storagenode to 1gb for the serial db memory?
got 48gb and only plan on running a couple of storagenodes max… so might as well set it so it has ample room to work with.

Check the first post.

But you're really overthinking this… Check the monkit stats or let Storjlabs collect the telemetry on this and just leave it be.
As for why you wouldn't assign more: if you're not conservative about RAM use when programming, you're going to end up with bloated software. This is just one very specific feature; there are many more. Start with something reasonable and see if it's a problem. If it is, you can always raise it, but it really looks like it won't be anyway.
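(If you do decide to raise it, it is a one-line config change and the first post has the exact setting. From memory the option is something like `storage2.max-used-serials-size: 1.0 GiB` in config.yaml, or the matching `--storage2.max-used-serials-size` command-line flag, but double-check the name and the value format against the first post; I may be misremembering them.)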

seems more relevant to my node than the avg node, since i don't plan on expanding fleet-wise but capacity-wise… also if it only takes the memory it needs, which littleskunk said… then i see no reason not to just let it take whatever the storagenode would want…

but yeah you are most likely right… as usual xD i was also kinda thinking the same thing… but then again… if i might go into the extreme end of the node size spectrum, it might be wise to tweak some of the relevant settings… ofc doing that comes at the risk of running into issues that other nodes don't have, because my settings would be unique… which is also something to account for… might make troubleshooting hell in the future…

12 posts were split to a new topic: I have a database is malformed problem… And it doesn't even say which database is malformed

5 posts were split to a new topic: "Disk space used this month" is 0 despite Egress of 1.9GB and Ingress 14.76GB

5 posts were split to a new topic: I'm seeing some crazy intense high loads from this update

I love the new dashboard in v1.6.4; each release brings great new improvements.

One question I have is about the "Total Earned" amount. I assume this is actually the total amount paid, since the Total Held Amount was also earned but has not been paid out yet.

Has the docker image been released yet?

Yes. Released and installed already.

19 posts were split to a new topic: Will it be safe to allocate more space to this node based on my disk utility info presented to me and here

Auto-updated to 1.6.4… no issues.
