Maximum storage size as of July 2024

Hi, I want confirmation from admin(s) on the maximum storage size for a node. I ran a node for more than a year but it never exceeded 3.6 TB though my scores were almost always over 90%. It was a real waste since nearly 4 TB was never used. Thanks.

Hello @pentapower,
Welcome back!

There is no maximum size limit. Usage depends on the customers, not on hardware or software settings. There is a virtual limit, though: when the amount of uploaded data becomes almost equal to the amount of deleted data, the node stops growing. I do not know the current equilibrium point, but my nodes (9.28 TB in total) are full.
However, if your node loses races more often, it will likely be used less.
You may check the success rate for your node:

Please also make sure that the stat on your dashboard matches the usage on the disk.
See Disk usage discrepancy? and Avg disk space used dropped with 60-70% for possible issues.
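The equilibrium idea can be sketched as a toy model: if deletes are roughly proportional to the data you hold, stored data stops growing once daily ingress equals daily deletes. All numbers below are hypothetical, chosen only to illustrate the shape of the curve; real ingress and delete rates vary over time and by location.

```python
# Toy model of the "equilibrium point": the node stops growing when
# daily ingress roughly equals daily deletes. Hypothetical rates.

def simulate(ingress_tb_per_day, delete_fraction_per_day, days):
    """Deletes are modeled as a fixed fraction of stored data per day."""
    stored = 0.0
    for _ in range(days):
        stored += ingress_tb_per_day - delete_fraction_per_day * stored
    return stored

# Steady state: ingress = delete_fraction * stored,
# so stored -> ingress / delete_fraction regardless of disk size.
print(simulate(0.01, 0.003, 5000))  # approaches 0.01 / 0.003 ~ 3.33 TB
```

With these made-up rates the node plateaus around 3.33 TB even if the allocated disk is far larger, which is the behavior being described.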

There are people here running 50TB nodes, and others that run multiple nodes that together are 1+PB. So there’s a lot of data out there for you!

But yes, before the recent deluge of test data, it could take you 2-3 years to fill something like a 10TB HDD. If you haven't run a node in a while, it's an excellent time to restart, as there has been a firehose of TTL data coming out for the last week. Welcome back!

I would not recommend starting new nodes because of the test data. Real user data is constant at best, and those tests can end at any time.

Also I have never seen so many node problems. Read the forum carefully before starting new nodes.


Thank you all for your replies. I’ll check again and wait for some time.

Dumb question, but how much space did you have allocated? If 4 TB, then only really using 3.6 TB sounds believable: the node usually doesn't fill completely, especially with trash.

Or did you have something like 8 TB allocated? In that case, in the "old days" we'd expect your data to grow gradually, and then in the last couple of months it would have ballooned with all the test data blowing around (unless something was throttling it on your node).


I believe my node is throttled. It always stops at 3.33 TB while the allocated space is 6.61 TB, and it's happening right now. I'll wait until the end of this month; if nothing changes, I'll never host any nodes again!

We do not have such code in our codebase. But it's possible that this amount is an equilibrium point for your node (where the amount of uploaded data is almost equal to the amount of deleted data). It's not fixed; it varies over time and depends on the node's location and the customers' activity.
For example, all my nodes are always full (9.28TB in total).

This sounds very similar to another user… they were using NTFS, if I remember right?

Is that 3.3 TB as reported by the operating system?

Maybe the disk is actually full but the reported size is way off. (Restart the node and give it a long time to finish the used-space filewalker.)

I’ve gotten a single disk up to 10TB so far, and I have a few disks on this machine.

Isn't there a risk for the network in concentrating 1 PB in a single place, on a single machine?
And is there any risk for the operator of being banned?

Storj has certain node-selection criteria (like having nodes in separate /24 subnets store the pieces) to reduce the chance that too many pieces land on hardware that can fail at the same time. But ultimately nobody can tell if nodes are on independent systems or not. Two nodes using IPs that geolocate to opposite sides of the globe… could be in two docker containers right next to each other.
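The /24 grouping can be illustrated with Python's standard `ipaddress` module. This is only a sketch of the idea, not Storj's actual selection code, and the node IPs are hypothetical documentation addresses.

```python
import ipaddress
from collections import defaultdict

def subnet_24(ip: str) -> str:
    """Return the /24 network an IPv4 address belongs to."""
    return str(ipaddress.ip_network(f"{ip}/24", strict=False))

# Hypothetical node IPs: the first two share a /24, so a selector that
# places at most one piece per /24 treats them as a single location.
nodes = ["203.0.113.10", "203.0.113.99", "198.51.100.7"]
groups = defaultdict(list)
for ip in nodes:
    groups[subnet_24(ip)].append(ip)

print(dict(groups))  # two groups: 203.0.113.0/24 holds two nodes
```

This is why two nodes behind the same home connection share ingress, while geolocation alone proves nothing about physical independence.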

And there are exceptions: like SNOs participating in Select don’t have the /24 limit applied. But when you apply for Select they make sure you’re running more reliable configs.

Obviously unreliable nodes can still fail audits and become permanently disqualified. But if they are reliable… there's no reason large SNOs can't get good chunks of all the capacity-reservation data being sent now. The repair system has been working very well: I don't think any customer data has been lost, and they trust it enough that it should handle an entire country dropping off the Internet. One SNO going offline… even a large one… is no big deal.
