How do RPis deal with this much memory load?

RPis don't see this much memory usage.

This memory usage is Docker-related only.

High memory usage is usually down to the HDD being overloaded, at which point the system starts to buffer the excess data in memory…

So it's a matter of making your disk faster…
You might be able to add a second node on a second disk and actually lower your memory usage if you cannot optimize disk I/O / bandwidth in other ways.
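
If you want to check whether the disk really is the bottleneck before changing anything, something like this should do it on a Linux node (a rough sketch; assumes a Debian-based system like Raspbian, with the sysstat package providing iostat, and the 5-second interval is arbitrary):

sudo apt install sysstat   # iostat comes from the sysstat package
# extended per-device stats, refreshed every 5 seconds;
# %util pinned near 100% and rising await times mean the drive
# can't keep up, and the node will start buffering in RAM
iostat -dx 5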

Nominal memory usage for my node is like 50-300 MB and the average is in the 100 MB range.
300 MB was one time when we had like 5-6 MB/s ingress, I think.

And my node is 10+ TB.
Not sure how much that affects memory usage, but it sure doesn't help… since the databases and whatnot will become larger as a node grows.

From my raspi:

docker stats
CONTAINER ID        NAME                CPU %               MEM USAGE / LIMIT    MEM %               NET I/O             BLOCK I/O           PIDS
7cb299f04dc8        storagenode         0.00%               31.08MiB / 800MiB    3.89%               0B / 0B             0B / 0B             27
2def423854a5        netdata             15.25%              98.2MiB / 926.1MiB   10.60%              5.23MB / 7.48MB     0B / 0B             38
fa09bcc37eff        watchtower          0.00%               1.73MiB / 926.1MiB   0.19%               3.34MB / 2.04kB     0B / 0B             10

From the Windows Docker node:

CONTAINER ID        NAME                 CPU %               MEM USAGE / LIMIT     MEM %               NET I/O          BLOCK I/O           PIDS
cc91a9006ec6        storagenode          0.00%               31.71MiB / 7.786GiB   0.40%               27.9MB / 347MB   2.57MB / 0B         14
c18ef5c86026        watchtower           0.00%               4.832MiB / 7.786GiB   0.06%               8.97kB / 0B      13MB / 0B           11
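
Side note: the 800MiB limit on the Raspberry Pi's storagenode container above is presumably a Docker memory cap rather than the Pi's total RAM. If you wanted to apply the same kind of cap to an existing container, a minimal sketch (the container name and the 800m value just mirror the stats above) could be:

# cap the running container at 800 MiB so runaway buffering can't eat the whole Pi
docker update --memory 800m --memory-swap 800m storagenode

The same limit can also be set at container creation time by adding --memory=800m to your usual docker run command.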

I see the problem. You are missing netdata :smiley:


My netdata just died… for like the 7th time in under 5 months…
Starting to piss me off… getting pretty good at installing it, though… lol

I kinda get the sense that netdata is pretty poorly programmed.

How does netdata die? Let me take a guess: you can't leave well enough alone.


I just ran some awk scripts, I think… might have been an apt update that did it, though… I needed to install stuff to get awk to do what I wanted…

It seems quite sensitive to all kinds of things… last time it was because I installed and uninstalled Nvidia drivers; the time before that it was because the server was unstable and kept rebooting at random for a few days… like 40 times… that seemed to damage netdata and I ended up having to reinstall it…

It's been a recurring theme that netdata dies. Now I even set the dataset it's on to sync always… and still it died… I like its features… but it's not very durable… I have that effect on software… xD

I don't see why netdata needs to break when I purge Nvidia drivers, when the drivers were installed after netdata and I never even had an Nvidia graphics card in the server before… a really weird thing, imo…
The purge did warn me that it might break stuff, though… I just didn't believe it, lol.


Those are some quiet nodes; the storage node at both of my locations always seems to hover around 1-3%, unless I've got several egress requests going, and then it might pop to 7%.

Edit: context helps. The fiber node is a Xen VM on E5645s with LVM'd VHDs backed by a SAN over iSCSI (5x 2TiB, overhead and all); the coax node is an LXC container with Docker inside on an i5-4560 with a direct-attached 12TB ZFS mirror passed through.

Keep in mind that the more nodes there are on an IP /24, the less work for each node, and thus less disk latency, which leads to less memory usage… and of course less CPU usage per node…

So comparing node to node isn't always an equal measure.

Does any particular node know if it is or is not on the same network as any other?

You can check the map here… see if any are close by… though in some cases you might just get your national ISP's main datacenter address…
http://storjnet.info/

Another easy way is to compare ingress… ingress is divided very accurately between IP /24 subnets, so if you know what total ingress has been for a certain day, you can estimate: if yours is 1/2 of that, there are two nodes on the network… 1/3 and there are 3… and so on and so forth…

On most days you should be able to distinguish between 19 and 20, or even 39 and 40, nodes on a network by this rather crude method… because the ingress split is highly accurate if the node has no limitations or issues…

We have a bandwidth thread going where people post dashboard screenshots.
This is my node, which is usually a fairly accurate measure…

So your total ingress for the 13th should be about 36 GB… if you get 15-17 GB total ingress, then there is a second node on your IP /24 subnet…

Total ingress here means normal + repair ingress combined, or it will not be accurate.
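
To put that arithmetic in one line: divide the expected total ingress for the /24 by what your own node actually received. A tiny sketch with the example numbers from above (36 GB expected, 17 GB observed); swap in your own dashboard figures, and remember the observed number has to be normal + repair ingress:

# 36 / 17 ≈ 2.1, so this rounds to 2 nodes sharing the /24
awk 'BEGIN { expected=36; observed=17; printf "~%.0f node(s) sharing this /24\n", expected/observed }'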