How many nodes can I create on one PC?

As I understand it, on Windows it's only one node per PC. But what about Linux?

Mainstream guidance is to use only one node per PC.

The general technical limit is one node per physical CPU core, given enough RAM. The limitation is mostly down to your storage IOPS / cache strategy.
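A quick way to sanity-check that IOPS ceiling is to benchmark the data drive directly. A sketch using fio (the mount path, file size, and job parameters below are assumptions to adapt, not values from this thread):

```shell
# Measure sustained 4K random-write IOPS on the node's data drive.
# /mnt/storagenode is a placeholder path; --size and --runtime are examples.
fio --name=node-randwrite \
    --directory=/mnt/storagenode \
    --rw=randwrite --bs=4k --size=1G \
    --iodepth=16 --direct=1 \
    --runtime=30 --time_based \
    --group_reporting
```

Compare the reported IOPS against what the drive actually sustains while the node is busy; a CMR HDD typically manages on the order of 100-200 random IOPS.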

I’ve been running 3 on an 8-core/16-thread CPU for years just fine, mainly to keep the storage limit per node smaller. It hits limits on startup, but is fine otherwise.

Your response makes no sense to me.

I’d like to know why your experience is so different from mine…

I find that an ancient quad-core can run 8 storage nodes with CPU usage at, well, basically nothing.

Are yours on separate /24 subnets? Mine are all on the same IP.

I’m in rural England; are you in a more connected area?

What else could it be?

He may be using a PC that he works on, and doesn’t want Storj to use too much of his CPU.

The limit in Windows is just a software thing. On Linux, you can run as many nodes as your hardware can handle; you just have to run them on different ports.

Guidelines aside, you can run more than one node per core. Your main bottlenecks, as already mentioned, are IOPS and memory. I’ve run up to 4 nodes per logical core, alongside Prometheus exporters for each node, Prometheus itself, Grafana, and other unrelated services, and even under heavy load it all performs fine. Each node has dedicated drives/pools for everything though, so IO isn’t an issue there. I rarely see the CPU over 15% from Storj nodes alone. Your results may vary.

I have since spread many of these things across multiple servers, but that was more of a best-practice approach than a necessity, as I’ve also expanded for purposes unrelated to Storj.
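For the Linux case, the separate-ports setup can be sketched like this (the hostname, paths, and the second node's ports are placeholders, and the usual WALLET/EMAIL/STORAGE settings are omitted for brevity; each node still needs its own identity and storage directory):

```shell
# Node 1 on the default port 28967, dashboard on 14002.
docker run -d --name storagenode1 \
  -p 28967:28967/tcp -p 28967:28967/udp -p 14002:14002 \
  -e ADDRESS="node.example.com:28967" \
  --mount type=bind,source=/mnt/disk1/identity,destination=/app/identity \
  --mount type=bind,source=/mnt/disk1/storage,destination=/app/config \
  storjlabs/storagenode:latest

# Node 2 maps a different host port (28968) to the same container port,
# with its own identity and its own drive.
docker run -d --name storagenode2 \
  -p 28968:28967/tcp -p 28968:28967/udp -p 14003:14002 \
  -e ADDRESS="node.example.com:28968" \
  --mount type=bind,source=/mnt/disk2/identity,destination=/app/identity \
  --mount type=bind,source=/mnt/disk2/storage,destination=/app/config \
  storjlabs/storagenode:latest
```

The key point is that only the host-side port changes; inside each container the node still listens on 28967.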


On Windows you may use Docker Desktop to run a second (and further) nodes, or @Vadim’s Win GUI Storj Node Toolbox.

The recommendation is one node per CPU core and per HDD. There are no specific memory requirements; normally one node uses up to 350MB of RAM. However, more RAM is better, at least for the disk cache.


I’ve seen much higher RAM usage. Right now I have a ~6TB node using 1.28GB, with a current uptime of 204 hours. They seem to creep up slowly over time. Disk usage is minimal. I’m using ZFS with plenty of RAM, so maybe it’s just using it because it can.

This could only mean that your disk subsystem is slow. Do you use RAID, BTRFS, or a single-disk zfs pool? Is your drive SMR?

See the usage of my three nodes, ~9TB total:

CONTAINER ID   NAME           CPU %     MEM USAGE / LIMIT     MEM %     NET I/O           BLOCK I/O   PIDS
f41a7362682e   storagenode2   0.59%     160.5MiB / 24.81GiB   0.63%     214GB / 138GB     0B / 0B     86
a73bbdf8a1ed   storagenode5   0.07%     75.89MiB / 24.81GiB   0.30%     2.16GB / 2.23GB   0B / 0B     32

Windows service:

Name                Mem (MiB)
----                ---------
storagenode                73
storagenode-updater        21

It’s a single-drive ZFS pool. The drives are 18TB Exos X18, and the nodes run in jails on TrueNAS Core. Memory usage doesn’t really change relative to bandwidth load on the node/drive; it just creeps that high over about a week. Right now the node has dropped back to ~800MB, but its bandwidth usage at the moment is ~1Mbps in and ~800Kbps out. There is absolutely no way that’s going to bottleneck even an SMR drive.

It’s a little concerning, though, that there’s so much talk about drive speed and performance issues when on average most nodes run under 10Mbps most of the time. The highest I’ve ever seen on a single node is 98Mbps, which is only 12.25MB/s, and even that is still rare currently. What happens when load picks up, if drives supposedly already can’t keep up?
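The unit conversion above checks out, and it also shows why IOPS rather than bandwidth is usually the first wall. A quick sketch (the 64 KiB average I/O size below is an assumption for illustration, not a measured figure):

```shell
# Convert 98 Mbps to MB/s, then estimate the IOPS needed to sustain it
# under an assumed 64 KiB average I/O size.
awk 'BEGIN {
  mbps = 98
  mbs  = mbps / 8                      # bits to bytes: 98 Mbps = 12.25 MB/s
  printf "%.2f MB/s\n", mbs
  avg_kib = 64                         # assumed average I/O size (not measured)
  printf "~%d IOPS\n", (mbs * 1024) / avg_kib
}'
# → 12.25 MB/s
# → ~196 IOPS
```

With many small pieces the average I/O size drops and the required IOPS climbs, which is how a drive can struggle long before its sequential bandwidth is saturated.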

I’ve never had a need to run more than 3 nodes. My per-node storage limit strategy is based on the best-value storage capacity for a single drive ($150-220/drive), which used to be 8TB and is now 14TB. This way, in an emergency, I can move back to a single disk if I need to rebuild my storage array. If I had the need for storage, I’d push it up to 8 nodes on this array, maybe more.

You are not seeing the actual bottleneck, which is not in bandwidth, but in IOPS.


Am I missing something?



Unfortunately, a single-disk zfs pool is always slower than a similar ext4 setup regarding IOPS, and your node will use more memory for buffers.
Perhaps it’s possible to tune it to perform better, but likely in exchange for memory usage or other additional hardware.
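If you do want to experiment, a few commonly tuned zfs dataset properties look like this (the dataset name is a placeholder, and these values are assumptions to try, not settings confirmed in this thread):

```shell
# tank/storagenode is a placeholder dataset name; adjust to your pool layout.
zfs set recordsize=128K tank/storagenode  # the default; very large values hurt small I/O
zfs set atime=off tank/storagenode        # skip a metadata write on every read
zfs set xattr=sa tank/storagenode         # keep extended attributes with the inode (fewer IOPS)
zfs set compression=lz4 tank/storagenode  # cheap on CPU, can reduce actual disk I/O
```

These trade memory and CPU for disk I/O in various ways, which matches the point above that zfs tuning tends to cost something elsewhere.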

I mean… they run fine otherwise. I’ve tried messing with some tuning, but I haven’t really seen much difference, other than from changing the record size, which was bad when set too high. And I don’t mind the memory usage; I have plenty. I’m just making the point that some setups can vary a bit.


And it’s very easy to build a setup that has high bandwidth but low IOPS. Not concerning, just a statement of fact.

This is my Ubuntu 22.04.2 LTS node with a dedicated 10TB WD Red drive on ext4; I typically see around 18GB of RAM in use!

That does not look right. This is what I am seeing on my Ubuntu 22.04.2 LTS:

CONTAINER ID   NAME                   CPU %     MEM USAGE / LIMIT     MEM %     NET I/O           BLOCK I/O         PIDS
a1b9f3cab053   prometheus             2.55%     45.7MiB / 7.385GiB    0.60%     5.13MB / 7.24MB   164MB / 9.48MB    15
77225e3aefec   storj-exporter         0.02%     15.11MiB / 7.385GiB   0.20%     14.1MB / 4.91MB   270kB / 156kB     2
e645c335b986   storj-watchtower       0.00%     5.93MiB / 7.385GiB    0.08%     277kB / 0B        442kB / 0B        14
f475ce382a9b   storj                  1.46%     628.4MiB / 7.385GiB   8.31%     19.3GB / 2.02GB   1.07GB / 26.3GB   42

Interesting, my docker stats doesn’t match Cockpit; either way, I am much higher than your numbers…

CONTAINER ID   NAME          CPU %     MEM USAGE / LIMIT     MEM %     NET I/O         BLOCK I/O        PIDS
43372ab79129   watchtower    0.00%     7.961MiB / 42.99GiB   0.02%     32.4MB / 0B     12MB / 0B        19
1394b4be9637   storagenode   1.79%     6.388GiB / 42.99GiB   14.86%    115GB / 213GB   76.8GB / 152GB   101

This suggests that your drive is slow. Is it SMR?
If so, that’s kind of expected.