RAM requirement

Hi All,

I am planning to invest in an RPi 4, but it is available with 1, 2, 4, or 8 GB of RAM.
Would more than 1GB of RAM yield better results? (e.g. faster data access, faster uploads, higher payout)
Would a larger HDD require more RAM on board? (e.g. for caching/indexing)

Thank you in advance

I have two RPi 4s, both 4GB RAM versions. I know others on here have run nodes on RPi 3Bs, which only have 1GB of RAM. So theoretically the 1GB version (which is actually discontinued now) could work.

I think the bigger question is whether you’re planning on running the node with the full Raspbian OS GUI (desktop, icons, and such) or with the CLI-only version. The GUI version will use quite a bit more of the system’s resources. On both of my nodes I do run the GUI version, and from time to time I notice that the available RAM tends to slowly decrease over time (days). It appears that most of the RAM is slowly eaten up by cache, which does tend to get cleared every so often.
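
That cache behavior is normal Linux, by the way; the kernel reclaims it on its own when something actually needs the memory. If you ever want to force it just to watch the numbers reset (purely for observation, not needed in normal operation), dropping the page cache manually is safe:

$ sync && echo 3 | sudo tee /proc/sys/vm/drop_caches
# sync flushes dirty pages first; "3" drops page cache, dentries, and inodes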

Also, do you think you’ll ever want to run more than one node on the same Pi? I’ve had folks tell me it would certainly be possible to run both of my nodes on the same RPi, although for personal reasons I like to keep them separated.

If you’ll use the CLI OS version, then the 2GB version would be great. If you want to use the GUI version of the OS, I’d recommend the 4GB version, but if cost isn’t a deciding factor, why not just go with the 8GB version?


Looks to me like each additional node will require around 100-200MB of RAM… the Storj documentation says 1GB, I think…

However, I have seen, and actually do see, my storagenode use more than 1GB of RAM fairly often… it’s been something I’ve been trying to fix… might be due to increased latency, and it takes a long, long time before the RAM number drops down again… most likely because the memory stays allocated to the storagenode, which doesn’t mean it’s actually being used…

The max RAM usage I’ve observed has been 2.5-3GB, and my storagenode has 14TB of data stored…
I think Storj’s 1GB per storagenode is a wise choice… so depending on how many drives you want to run off the RPi, I would recommend you have an equal number of GB of RAM, as a minimum.
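
If you want to measure this on your own setup rather than guess, and your nodes run in Docker, docker stats reports live per-container memory (the container names below are just examples; use whatever you named yours):

$ docker stats --no-stream storagenode storagenode2
# the MEM USAGE column is each node's actual resident memory right now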

Also keep in mind that you cannot upgrade the RAM later… so you’d better be sure you get what you need, as long as prices are about the same… and from what I’ve seen, the price doesn’t seem to vary too much depending on how much RAM it has…

One would usually use 1 HDD per node, so 1GB per HDD you expect to use would be the recommended amount…

IMO I would just floor it on the RAM… I can’t say I’ve ever had a computer where I thought, “I’m so happy I didn’t get more RAM”… though I have repeatedly been limited in what I could do because of RAM…

Running out of RAM slows things down immensely and causes all kinds of problems… while the added expense of extra RAM is, in this case, maybe $20.
And then you know it will be very viable in the future for other projects; even if it cannot compute much, it can still be useful…


I’m running a 2GB model running 2 nodes, only using 400MB of RAM. You could technically host a single node with 512MB of RAM, so 2 gigs is plenty of RAM for hosting nodes; anything more than that is just overkill.

[screenshot: memory usage]

Well, my storagenode sure uses its extra RAM for something…
I dunno why it does this… but it happens; not sure if it needs the extra RAM either…
I am migrating the node away from this pool, and there does seem to be a bit of an added activity spike at that point… it might actually have been when I updated to version 1.13.3…

Yeah, this was triggered by me booting the node after the update, which made it run the filewalker and do trash collection… which apparently used that RAM. These functions usually take about 80 minutes
from a cold start, and might become slower with less RAM…
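
If you’re curious what that trash collection actually cleaned up, you can check the trash folder size directly; the path below is just an example, point it at your own storage directory:

$ du -sh /mnt/storagenode/storage/trash
# run it before and after a restart to see how much the cleanup freed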

There are many things that could cause more RAM usage on your system.
But I’m running 2 different systems, both running 2 nodes, each using 400MB of RAM.
[screenshot: memory usage]
Mind you, my systems are both bare metal, not running in a VM; nothing special either.

This is Docker on bare metal; my average RAM utilization is below 100MB… on the storagenode.
I have one I’m testing in a container with Docker… it also seems to be around the 100MB mark… though it does seem to use a bit more memory.
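
On a RAM-starved Pi it might also be worth capping what the container can take, so one hungry node can’t squeeze the rest of the system. Docker’s standard flags handle that; the limits below are just an example, not an official Storj recommendation, and use whatever image tag you normally run:

$ docker run -d --name storagenode --memory=800m --memory-swap=1g [your usual mounts and flags] storjlabs/storagenode:latest
# --memory caps RAM; --memory-swap caps RAM plus swap combined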

Here’s a Storj node with 24GB of RAM.
Used: 495.3MB
System cache: 23262MB
Free: 222.8MB

This server only runs Storj.

root@storj:~# cat /proc/meminfo
MemTotal: 24555588 kB
MemFree: 206532 kB
MemAvailable: 23710072 kB
Buffers: 750616 kB
Cached: 21793864 kB
SwapCached: 8 kB
Active: 8871076 kB
Inactive: 13934924 kB
Active(anon): 125452 kB
Inactive(anon): 123528 kB
Active(file): 8745624 kB
Inactive(file): 13811396 kB
Unevictable: 0 kB
Mlocked: 0 kB
SwapTotal: 25035772 kB
SwapFree: 25034236 kB
Dirty: 2876 kB
Writeback: 0 kB
AnonPages: 261564 kB
Mapped: 139168 kB
Shmem: 17164 kB
Slab: 1423884 kB
SReclaimable: 1299604 kB
SUnreclaim: 124280 kB
KernelStack: 4000 kB
PageTables: 2560 kB
NFS_Unstable: 0 kB
Bounce: 0 kB
WritebackTmp: 0 kB
CommitLimit: 37313564 kB
Committed_AS: 1552648 kB
VmallocTotal: 34359738367 kB
VmallocUsed: 0 kB
VmallocChunk: 0 kB
Percpu: 2784 kB
HardwareCorrupted: 0 kB
AnonHugePages: 120832 kB
ShmemHugePages: 0 kB
ShmemPmdMapped: 0 kB
HugePages_Total: 0
HugePages_Free: 0
HugePages_Rsvd: 0
HugePages_Surp: 0
Hugepagesize: 2048 kB
Hugetlb: 0 kB
DirectMap4k: 147884 kB
DirectMap2M: 24889344 kB
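
For what it’s worth, the figure that actually matters there is MemTotal minus MemAvailable. A quick sketch of that arithmetic (just a one-liner, nothing official):

$ awk '/^MemTotal|^MemAvailable/ {a[$1]=$2} END {printf "really used: %.1f GB\n", (a["MemTotal:"]-a["MemAvailable:"])/1048576}' /proc/meminfo
# on the dump above: (24555588 - 23710072) kB, i.e. about 0.8 GB actually in use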

More out of curiosity, what OS is that? And from your output, I’m assuming it’s a CLI-only version?

That’s interesting; it doesn’t lock up the memory though… it just uses it as cache, which is good… at least then it’s of some use…

My server will usually throw 50% of its RAM at the ZFS ARC, which is also essentially a cache…
If anything requests memory, the ARC will drop data and let the memory be allocated for that purpose.

My storagenode may be doing something similar to yours… simply using memory because it’s there…
I’ve got 48GB, so a few GB of memory usage doesn’t really change much.
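
And if the ARC ever does get in the way, it can be capped; on Linux the knob is the zfs_arc_max module parameter. The value below (16 GiB) is just an example:

$ echo 17179869184 | sudo tee /sys/module/zfs/parameters/zfs_arc_max
# to make it stick across reboots, put this in /etc/modprobe.d/zfs.conf:
# options zfs zfs_arc_max=17179869184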


“buff/cache” counts memory used for data that’s on disk or should end up there soon, and as a result is potentially usable (the corresponding memory can be made available immediately, if it hasn’t been modified since it was read, or given enough time, if it has); “available” measures the amount of memory which can be allocated and used without causing more swapping.
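
In practice, that means you should judge headroom by the “available” column rather than “free”:

$ free -h
# "available" here is MemAvailable from /proc/meminfo, i.e. what new
# allocations can actually get; "free" alone is nearly meaningless on a
# box with any uptime, since the kernel caches as aggressively as it can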


It is Ubuntu Server, and correct, it has no GUI.

I was trying out Ubuntu Server a little while back… it didn’t want to let me run nested Docker…
So I spent 4-5 days trying to figure that one out… so it didn’t stick…
I’m of the firm belief that an OS should help me do stuff…
Granted, Ubuntu could be made to work… it was something stupid, like having to disable IPv6 or something, because it was getting in the way of something else…

But it still didn’t get any points from me…

Which version were you trying to run? I have no issues whatsoever and didn’t disable anything; it was 18.04.5 and ran pretty easily. But I don’t tinker much with the systems I have; I’m a firm believer in “if it isn’t broken, don’t fix it.” That is when things break.

I would have been running 20.04, or whatever it’s called, and I was running Ubuntu in a container and then Docker inside it, which made it go slightly crazy… it was Ubuntu Server, though…
I went back to Debian and it worked on the first go… so I didn’t want to start hunting down the problem.

Yeah, I can’t say 20.04 was a great start for you; it wasn’t good. 18.04 had a lot fewer issues. I will stay on 18.04 as long as I can before I’m forced into upgrading.

It’s with good reason that most enterprise releases often lag way behind… that way there’s a chance to get all the weird, annoying mistakes and oversights ironed out.

Yes, exactly. 18.04 had matured, then 19 and then 20 came out… none of which I would bother to try for at least a few years from now. 18.04 is still getting support. It has been very stable and I don’t want to change anything, but now every time I SSH into my machines it tells me to update to 20…
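
If the nag bothers you, Ubuntu has a standard switch for it; setting Prompt in /etc/update-manager/release-upgrades silences the release-upgrade prompt (regular security updates still come through):

# /etc/update-manager/release-upgrades
Prompt=never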

Over here, I run:

  • Debian Stable for servers.
  • Debian Testing for desktops.

This has worked great for me for 15 years. Running Debian Testing on all my desktops lets me see ahead of time what changes I will need when I re-baseline my servers on the next Stable release.

It’s a bit annoying, though, because some really important features appear in things like psql, postfix, and apache between releases… and enabling backports sometimes screws up dependencies. Migrating from PHP 5 to PHP 7 was a real pain for me.
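
For the backports dependency mess, the usual trick is to pull just the one package you need with apt’s target-release flag instead of letting backports win generally; a sketch, assuming buster is the current Stable:

# /etc/apt/sources.list.d/backports.list
deb http://deb.debian.org/debian buster-backports main

$ sudo apt update
$ sudo apt -t buster-backports install postfix
# only packages you explicitly request with -t come from backports;
# everything else stays pinned to Stable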