Hello to all!

What is the biggest size in TB for a node?

Can I use 32 or 64 TB for a single node?

i’m not aware of anyone going that far… but in theory yes… but it goes beyond the recommended size and thus you could run into unforeseen issues… the recommended max is 24tb… so if you wanted to do say 64tb you might be better off splitting it into at least two nodes… just to stay close to the recommendations… might save yourself some trouble in the long run…

but on the other hand… you might be fine running 1 x 64tb node

i just ain’t aware of anyone doing that… yet

will also take maybe years to get to 64tb…

Yeah, if you go that route you may want to do thin provisioning so you can overprovision until it’s needed…

There is no maximum.

My 10TB filled up in a year

I had seen a user asking for advice with an 8PB node (shown on the web dashboard)

I considered an 8PB hard drive but decided to go with 800PB instead. I just hope it’s not SMR.

Actually it’s just a bogus setting in my node config. I wonder if that user typed a few too many zeros when setting their storage size.

I am 50-50 on that considering he came from Burstcoin and I have seen insane setups for burst.

If it’s a real setup, I hope those drives have a good warranty because that’s going to take a while to fill. About 300 years?

Almost certainly never. The more data your nodes hold, the more deletes will hit your node as well. And at some point the deletes will become about as large as the incoming data and the used size will no longer grow. It’ll just fluctuate a little. My very rough first estimate is that this will happen at around 40TB.

Edit: I should probably elaborate on that a little. Based on recent months it seems ingress is about 2TB per month on average. About 5% of data is deleted on average per month. 5% of 40TB is 2TB, so at that point those will even out and incoming traffic will roughly match deletes.
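In code form, that break-even point is just the average monthly ingress divided by the monthly delete fraction. Both numbers here are the rough estimates from above, not measured constants:

```python
# Rough equilibrium estimate: growth stops when monthly deletes match ingress.
# deletes_per_month = delete_fraction * stored_tb, so at equilibrium:
#   delete_fraction * stored_tb = ingress_tb  =>  stored_tb = ingress_tb / delete_fraction
monthly_ingress_tb = 2.0        # assumed average ingress per month
monthly_delete_fraction = 0.05  # assumed ~5% of stored data deleted per month

equilibrium_tb = monthly_ingress_tb / monthly_delete_fraction
print(equilibrium_tb)  # 40.0
```

If either assumption changes, the equilibrium scales accordingly: double the ingress and the break-even point doubles to 80TB.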

That’s an interesting calculation. But shouldn’t supply and demand come into that somewhere? And network bandwidth?

It’s my understanding that the majority of current data is test data. So this could all be subject to change with customer data.

I’m also mining Burst, but 30TB makes < $1/month, so I will gradually move to Storj. I guess the big Burst rigs are finding the same as some other crypto miners: the rigs don’t pay for the electricity they consume.

@Beddhist

the assumption of 2tb ingress on avg is not a certainty…

but so long as it is 2tb then i think Bright is right… the deletion rate will most likely be something like 5%

you can just look at yourself… how much of your data do you delete …

5% deletion per month is a very reasonable and likely number from what i can see…

ingress will continually rise if storj becomes a success story… ofc if ingress doubles the 40tb max becomes 80tb… so really in theory the higher the ingress the higher capacity SNOs can reach… but apparently right now getting past 40tb is impossible… or highly unlikely without an increase in avg ingress…

this would even apply to having multiple nodes i suppose…

maybe this is why they recommend a max of 24tb… because that’s what their math says is actually possible to reach… from 30 to 40 tb would take forever…

because 2tb ingress per month… and 5% of 30tb is 1.5tb deleted each month… thus 0.5tb gain per month so the last 10 tb would take 20 months to fill…

@BrightSilence

that’s actually quite interesting…

so the gain at 5% deletions means 0.5 tb more lost per month for every 10tb added…

so first 10 tb takes 5 months…

20TB will add + 7 months

30TB will add + 10 months

40TB will add + 20 months

41tb will add + infinite months…
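treating the deletes as compounding month by month, instead of in flat 10tb steps, shifts those numbers a bit… a quick python sketch using the same assumed 2tb/month ingress and 5% deletes (just a rough model, not a prediction):

```python
def months_to_reach(target_tb, ingress_tb=2.0, delete_fraction=0.05, cap=1200):
    """Months until stored data first reaches target_tb, or None if never.

    Each month the node keeps (1 - delete_fraction) of what it stored
    and gains ingress_tb, so stored size follows:
        stored -> stored * (1 - delete_fraction) + ingress_tb
    which asymptotically approaches ingress_tb / delete_fraction (40 TB here)
    without ever reaching it.
    """
    stored = 0.0
    for month in range(1, cap + 1):
        stored = stored * (1 - delete_fraction) + ingress_tb
        if stored >= target_tb:
            return month
    return None

for target in (10, 20, 30, 35):
    print(target, months_to_reach(target))
# 10 6
# 20 14
# 30 28
# 35 41
```

so with compounding deletes the first 10tb takes about 6 months, 20tb about 14, 30tb about 28, and 35tb about 41 months… close to the step-by-step numbers above but a little slower at the low end.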

You’re absolutely right of course. That’s why I said it was a really rough first estimate. There are also aspects like perhaps backups are stored for 1 or 2 years. At which point deletes may go up significantly. It’s obviously not as simple as that calculation suggests. And there may also be data that never gets deleted. So you may see some level of growth always. But I can’t make predictions on data I don’t yet have. So this is the best I can do for now. However, the concept of getting more deletions when you store more data would hold in most scenarios and that probably means your node is unlikely to ever be completely full.

As for supply and demand. The experience so far (also based on V2 network) is that the supply side is rarely a problem. You can safely assume there will always be enough supply and your node will never really become one of the few with space remaining, no matter how big your node is.

I can do you one better. I built this effect into my version of the earnings estimator. So you can see what the first 10 years would look like there now. You actually never fully reach the 40TB as the net increase in data slows the more the deletes start to impact it. It takes almost 4 years to get to over 35TB. But it obviously would never be this neatly the same every month. It would at some point just fluctuate around that 40TB level if the averages remain the same. With some months going over it and some months diving below that. The 5% deletions wasn’t just a random estimate btw, it was calculated based on total ingress since the last network wipe, compared to the amount of data stored now and averaged by month. So it’s actually what I saw on my node since beta launch. But yes, things may change a lot in the future. I’ll adjust my estimator when it does significantly change.

certainly an interesting factor to take into account… tho 10 years seems a bit too long… most hdds wouldn’t survive that long… i think something like 4-5 years max would be more realistic… ofc if you have it show it over a sequence… never actually used the earnings estimator yet…

ofc beyond 3-4 years at current numbers a node will stabilize at a monthly earning, at least from stored data… my node will be 5 months old in 5 days… so i’m actually at 2.2 tb avg ingress per month… ofc the 1st month might be an unreliable number to include, and since i’m only 5 months in, the startup / vetting of the node might still have a huge impact on my initial averages.

i’m guessing the 5% is on your node… or is that on the entire network…

would be interesting to… hmmm, maybe we can find some real world data from other data storage systems… ofc that number might also be affected a lot depending on what storj / tardigrade ends up having as a primary use case…

it sure is very interesting and might be very useful for planning storagenodes…

My 12TB Synology filled up in 3 months

800PB = 800,000TB … it doesn’t seem likely to me.

you have excellent timing.

I got lucky, that's all. But all my other ones are full. Ordered 2 more HDDs to expand:

1 6TB Seagate Exos Enterprise for PT node

1 4TB Seagate for NL node