I think we have around 6000 nodes with free space? And many of them will fill (and not expand) while capacity-reservation data is going in? Sounds like an opportunity to me!
20 PB * 1.875 (if we keep the current RS numbers) + 10 PB * 2.25 (assuming we use the default RS numbers)
WOW that's almost a Double Big Mac.
20 PB * 1.875 * 1024 / 6000 = 6.4TB
$9.60 per node if no new nodes come online and no current node expands.
So… that can be closer to 60 PB on-disk. And if we start with 6000 nodes with space… and assume half fill… then maybe assume 4500 nodes share that 60 PB? That would be around 13 TB+ each… so about an extra $20/month.
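For what it's worth, this back-of-envelope math checks out in a few lines. Assumptions: $1.50 per TB-month storage payout (implied by the $9.60-for-6.4-TB figure earlier, not an official rate) and 1024 TB per PB:

```python
# Payout sketch using the thread's numbers; the $1.50/TB-month rate is an
# assumption implied by the $9.60-for-6.4-TB figure above.
PAYOUT_USD_PER_TB_MONTH = 1.50

on_disk_pb = 20 * 1.875 + 10 * 2.25      # expansion factors from above
print(on_disk_pb)                        # 60.0 PB on-disk

tb_per_node = on_disk_pb * 1024 / 4500   # if 4500 nodes share the data
print(round(tb_per_node, 1))             # 13.7 TB each
print(round(tb_per_node * PAYOUT_USD_PER_TB_MONTH, 2))  # 20.48 USD/month
```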
And all I have to do is watch my disks fill? I take that extra $20!
We discussed in the other thread that this math doesn't work. For me it looks more like I end up with 30-50 TB. I guess there are just too many slow nodes out there and I get the data they can't store. Don't ask me why. I take the payout and will not question it any further.
That is why I post here: tell me what the "correct math" is.
You think you get 10 times the amount of other nodes?
I don't even doubt that; my guess is that VPN nodes and slow nodes will do badly.
There's wisdom in that approach: I've used it many times!
The best way to find an answer on the Internet isn't to ask for one.
It's to post an incorrect answer. People may ignore your pleas for help, but they'll never turn down an opportunity to show you you're wrong.
psssst… that is my trick and why I pull numbers out of thin air
Set up a node, measure the growth rate for your setup, and then you can calculate how big your node will grow in 30 days. After 30 days the TTL kicks in and the node will stabilize / keep about the same size.
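That stabilization claim can be sketched as a toy model. Assumptions (mine, not from the thread): constant ingress and a uniform 30-day TTL on all uploaded data; the 0.8 TB/day rate is purely illustrative:

```python
# Toy model: with a uniform TTL, only data uploaded in the last ttl_days
# is still on disk, so size plateaus at ingress_rate * ttl_days.
def node_size_tb(ingress_tb_per_day: float, ttl_days: int, day: int) -> float:
    return ingress_tb_per_day * min(day, ttl_days)

rate = 0.8  # TB/day, illustrative measured growth rate
print(round(node_size_tb(rate, 30, 10), 1))  # 8.0  -> still growing
print(round(node_size_tb(rate, 30, 30), 1))  # 24.0 -> plateau reached
print(round(node_size_tb(rate, 30, 90), 1))  # 24.0 -> stays stable
```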
Interesting. So your estimate is that, since my ISP performance is consistent, the node will stay at the current level?
So there isn't really any data that is saved for longer?
It would also tell me that growth is over for me personally and I have reached my limit at 24 TB.
It is getting late over here. I don't want to answer the same questions over and over again. Instead I will just point out that you mentioned a number that is higher than your previous calculation. Thanks for proving my point.
These aren't per-month uploads; they are total storage targets that will be adjusted when our growth-rate targets or pipeline change.
Ironically that reminds me of another question I had about the new node requirements.
1.5 TB of transit per TB of storage node capacity; unlimited preferred
That IS per month, right?
- 1.5 TB of transit per TB of storage node capacity
Time unit is missing.
- Uptime (online and operational) of 99.3% per month, max total downtime of 5 hours monthly
Well, that will get rid of my nodes if they get disqualified after a night's worth of downtime. Tough luck, I can probably start gracefully exiting them.
They'll go into suspension before disqualification.
5 hours of downtime means a node won't even survive being down while you're at work during an 8-hour shift. Is every SNO now going to need remote access to fix issues during their coffee break?
No way will they do anything for nodes just dipping under 99.3%, or they'd DQ half their nodes every month. Unless that high bar just impacts node selection for ingress?
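As a sanity check, the 99.3% figure and the "max 5 hours" cap do line up, at least assuming a 30-day month:

```python
# 99.3% monthly uptime leaves 0.7% of the month as allowed downtime.
hours_per_month = 30 * 24                        # 720 h in a 30-day month
allowed_downtime_h = hours_per_month * (1 - 0.993)
print(round(allowed_downtime_h, 1))              # 5.0 -> matches "max 5 hours"
```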
How much synthetic data (in PB) do you plan to roll out? Since this data isn't backed by real customer money and is paid in STORJ tokens, depending on the amount it will dilute the STORJ token price (in my opinion), ending up in a race over who converts their tokens fastest to get the best price at the beginning of the month.
Can we change the :money_mouth_face: to :roxor_face:? Since it looks like he owns it.
Pretty sure this was always the listed requirement. I doubt they'll change the suspension level as it hasn't been a problem. But there's value in telling node operators the requirement is higher.