The node is not fully utilizing the allocated disk space

I’ve been running the node for about half a year with 7 TB of disk space allocated to it. However, disk usage constantly hovers at around 3.6-3.7 TB and never increases.

Why isn’t my node running at full capacity?

Hello @cameloid124,
Welcome to the forum!

Perhaps your node has reached the equilibrium point, where the amount of uploaded data and the amount of deleted data are almost equal.
The load depends on the customers, not on hardware or software options, except for the edge cases described in Step 1. Understand Prerequisites - Storj Docs.
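
To give a rough picture of what “equilibrium” means numerically (a sketch only — the ingress and deletion figures below are hypothetical, not measured from any real node): if a node receives a roughly constant amount of new data per day and loses a roughly constant fraction of its stored data per day, the stored amount converges to ingress / delete_fraction, regardless of how much space is allocated.

#!/usr/bin/env bash
# Sketch: stored data converges to ingress / delete_fraction.
# Both numbers below are made up for illustration only.
INGRESS_GB_PER_DAY=30     # hypothetical daily ingress, GB
DELETE_FRACTION=0.008     # hypothetical share of stored data deleted per day

stored=0
for day in $(seq 1 365); do
  stored=$(echo "$stored + $INGRESS_GB_PER_DAY - $stored * $DELETE_FRACTION" | bc -l)
done

equilibrium=$(echo "$INGRESS_GB_PER_DAY / $DELETE_FRACTION" | bc -l)
printf 'After one year: %.0f GB stored (theoretical equilibrium: %.0f GB)\n' "$stored" "$equilibrium"

With these made-up numbers the node levels off around 3.75 TB no matter how much space is allocated, which is the kind of plateau described above.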

Pardon my ignorance, but isn’t the network interested in making the fullest use of all available resources for performance and redundancy?

Let me explain it in a simpler way: there aren’t enough customers and there are too many SNOs, so there isn’t enough data for each node.

That’s it.


My nodes’ usage increased this month, so I think yours should have too.
Do you have fast enough internet?
Is your node sharing a so-called /24 subnet of the Internet with other nodes?
Your trash is low, which suggests you are not receiving a good share of uploads; are you in a remote region?
Are you filling your disk with lots of small files, so that the node can only fit that much on the disk? Does the OS’s picture of disk usage match the node’s?
Has your node been full and only just cleared up space?

Could I suggest you start with the success rate script and check the OS disk space report first?
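
For example (a sketch only — this assumes a Docker-based node with the default container name storagenode and the storage mounted at /mnt/storj; adjust both for your setup):

# Run the community success-rate script against the node's logs.
./successrate.sh storagenode

# Compare the OS view of the storage volume with what the node dashboard shows.
df -h /mnt/storj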


This node in Singapore has a dedicated 10 Gbit/s uplink connection, a unique IP address, and a storage volume dedicated to the node exclusively (I do not use it for anything else).

Yes, it’s an important detail that my node was previously full. However, most of the data eventually became trash and was deleted, leaving a significant amount of free space that hasn’t changed since.

successrate.sh output is below, although I restarted the node recently, so it might not be very informative:

========== AUDIT ==============
Critically failed:     0
Critical Fail Rate:    0.000%
Recoverable failed:    0
Recoverable Fail Rate: 0.000%
Successful:            358
Success Rate:          100.000%
========== DOWNLOAD ===========
Failed:                237
Fail Rate:             3.961%
Canceled:              13
Cancel Rate:           0.217%
Successful:            5733
Success Rate:          95.822%
========== UPLOAD =============
Rejected:              0
Acceptance Rate:       100.000%
---------- accepted -----------
Failed:                100
Fail Rate:             0.559%
Canceled:              91
Cancel Rate:           0.508%
Successful:            17711
Success Rate:          98.933%
========== REPAIR DOWNLOAD ====
Failed:                0
Fail Rate:             0.000%
Canceled:              0
Cancel Rate:           0.000%
Successful:            94
Success Rate:          100.000%
========== REPAIR UPLOAD ======
Failed:                4
Fail Rate:             6.557%
Canceled:              1
Cancel Rate:           1.639%
Successful:            56
Success Rate:          91.803%
========== DELETE =============
Failed:                0
Fail Rate:             0.000%
Successful:            0
Success Rate:          0.000%

df:

Filesystem                         1K-blocks       Used  Available Use% Mounted on
tmpfs                                1562672       2240    1560432   1% /run
efivarfs                                 192         40        148  22% /sys/firmware/efi/efivars
/dev/mapper/ubuntu--vg-ubuntu--lv   59643812   27733332   28848312  50% /
tmpfs                                7813360         12    7813348   1% /dev/shm
tmpfs                                   5120          0       5120   0% /run/lock
tmpfs                                   1024          0       1024   0% /run/credentials/systemd-journald.service
tmpfs                                   1024          0       1024   0% /run/credentials/systemd-udev-load-credentials.service
tmpfs                                   1024          0       1024   0% /run/credentials/systemd-sysctl.service
tmpfs                                   1024          0       1024   0% /run/credentials/systemd-tmpfiles-setup-dev-early.service
tmpfs                                   1024          0       1024   0% /run/credentials/systemd-tmpfiles-setup-dev.service
tmpfs                                7813360        312    7813048   1% /tmp
/dev/sda2                            1992552     189156    1682156  11% /boot
/dev/sda1                            1098628       6272    1092356   1% /boot/efi
storj                             7648921344 3928868352 3720052992  52% /mnt/storj
tmpfs                                   1024          0       1024   0% /run/credentials/systemd-tmpfiles-setup.service
tmpfs                                   1024          0       1024   0% /run/credentials/systemd-networkd.service
tmpfs                                   1024          0       1024   0% /run/credentials/systemd-resolved.service
tmpfs                                   1024          0       1024   0% /run/credentials/getty@tty1.service
tmpfs                                1562672         12    1562660   1% /run/user/1000
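
(df reports sizes in 1 KiB blocks, so the storj line above works out to roughly 3.66 TiB used of 7.12 TiB total — about 4.0 TB of 7.8 TB in decimal units. A quick sketch of the conversion, assuming the same mount point:)

# Convert the "Used" column (1 KiB blocks) of the storj mount to TiB and TB.
df /mnt/storj | awk 'NR==2 { printf "Used: %.2f TiB (%.2f TB)\n", $3/1024^3, $3*1024/1e12 }'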

I think that may be causing some of the issue. Singapore isn’t remote, but it might be remote from most of the data, which is in the US and EU.

Hopefully you’ll see some gradual, if small, increase over the month. Like this:

[screenshot: dashboard showing a slow but steady used-space increase]

And the trash shows the “turnover”, so it’s a rough proxy for how much your node is “in the game”, so to speak.


Yes, it is. However, the usage depends on our customers. If they want to use the network and your node is fast enough for them, it will get uploads; otherwise, the data will land on the nodes that were faster than yours.
My nodes haven’t reached the equilibrium point yet; I have used ~7 TB of 9 TB and am still growing at ~10 GB/day, but I think it will stop around 7 TB for a while, at least for this month.


Last month saw the lowest monthly increase… in at least a year… but what we’re seeing is still all down to customer demand. And it feels like trash space will always sit at around 5% of real used space (as a long-term average, if Storj isn’t doing any special cleanups).

TL;DR: Your used-space growth looks normal.

What can happen is that deleting data from nodes can be a slow process, depending on a lot of factors. We had quite a bit of temporary test data uploaded a couple of months ago, with various expirations. You are probably seeing some of that data fall off your node while being replaced by new customer data, which makes it appear as though you have not gained new data. Eventually this old data will all be deleted and you should gain data again.

There is a theory among some SNOs that nodes can reach a high-water mark where uploads and deletes reach equilibrium. This is just speculation, as it all depends on what the customers are doing. Customers, especially large customers, tend to upload their data sets at specific times known only to them. This can be a little or a lot. We have no control over what they do, and we are typically not advised on what they are doing or when.


Hm, I cannot confirm that. My nodes emptied from 9 TB down to 4 TB and have since gained back about 3 TB.

It’s normal. I have 2-3 year old nodes, and going over 10 TB is quite difficult.
There are too many nodes in the network (Chia nomads searching for a new home).


Add to the mix that a node will slow down as it fills up. Not necessarily significantly, but they certainly don’t speed up. Which means they may grow more slowly as they fill.
