WOW that’s almost a Double Big Mac.
So… that can be closer to 60PB on-disk. And if we start with 6000 nodes with space… and assume half fill… then maybe assume 4500 nodes share that 60PB? That would be around 13TB+ each… so about an extra $20/month.
And all I have to do is watch my disks fill? I take that extra $20!
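Putting that back-of-envelope math into a few lines (a rough sketch; the $1.50/TB-month storage payout rate and the node counts are assumptions on my part, not official figures):

```python
# Rough sketch of the payout math above (all figures are assumptions).
total_data_tb = 60_000          # ~60 PB on-disk, expressed in TB
sharing_nodes = 4_500           # nodes assumed to share that data
payout_per_tb_month = 1.50      # assumed USD storage payout per TB-month

per_node_tb = total_data_tb / sharing_nodes            # ~13.3 TB per node
extra_per_month = per_node_tb * payout_per_tb_month    # ~$20 per month

print(f"~{per_node_tb:.1f} TB per node -> ~${extra_per_month:.0f}/month extra")
```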
We discussed in the other thread that this math doesn’t work. For myself it looks more like I end up with 30-50 TB. I guess there are just too many slow nodes out there and I get the data they can’t store. Don’t ask me why. I take the payout and will not question it any further.
There’s wisdom in that approach: I’ve used it many times!
The best way to find an answer on the Internet isn’t to ask for one.
It’s to post an incorrect answer. People may ignore your pleas for help, but they’ll never turn down an opportunity to show you you’re wrong.
Set up a node, measure the growth rate for your setup, and then you can calculate how big your node will grow in 30 days. After 30 days the TTL kicks in and the node will stabilize / keep about the same size.
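A minimal sketch of that estimate, assuming roughly constant daily ingress and a flat 30-day TTL (the daily ingress number is a placeholder; measure your own node’s growth):

```python
# Steady-state size estimate for a node receiving 30-day TTL data.
daily_ingress_tb = 0.2   # placeholder: use your own measured growth rate
ttl_days = 30

# Once the node is older than the TTL, each day's deletions roughly equal
# the ingress from ttl_days ago, so the size levels off around this value.
steady_state_tb = daily_ingress_tb * ttl_days
print(f"Node should stabilize around {steady_state_tb:.0f} TB")
```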
It is getting late over here. I don’t want to answer the same questions over and over again. Instead I will just point out that you mentioned a number that is higher than your previous calculation. Thanks for proving my point.
These aren’t per-month uploads; they are total storage targets that will be adjusted when our growth rate targets or pipeline change.
Ironically that reminds me of another question I had about the new node requirements.
1.5 TB of transit per TB of storage node capacity; unlimited preferred
That IS per month, right?
- 1.5 TB of transit per TB of storage node capacity
Time unit is missing.
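Assuming it really is per month, here’s a quick sketch of what that transit figure implies as an average line rate (the 10 TB node size is just an example of mine):

```python
# What "1.5 TB of transit per TB of capacity" per month implies as an average rate.
node_capacity_tb = 10                            # example node size (assumption)
transit_tb_per_month = 1.5 * node_capacity_tb    # 15 TB/month for this node
seconds_per_month = 30 * 24 * 3600

avg_mbit_s = transit_tb_per_month * 1e12 * 8 / seconds_per_month / 1e6
print(f"{transit_tb_per_month:.0f} TB/month ≈ {avg_mbit_s:.0f} Mbit/s average")
```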
- Uptime (online and operational) of 99.3% per month, max total downtime of 5 hours monthly
Well, that will get rid of my nodes if they get disqualified after a night’s worth of downtime. Tough luck; I can probably start gracefully exiting them.
They’ll go into suspension before disqualification.
5 hours of downtime means a node won’t even survive being down while you’re at work during an 8-hour shift. Is every SNO now going to need remote access to fix issues during their coffee break?
No way will they do anything for just dipping under 99.3%, or they’d DQ half their nodes every month. Unless that high bar just impacts node selection for ingress?
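For what it’s worth, the two figures in the requirement describe essentially the same limit; a quick check, assuming a 30-day month:

```python
# 99.3% uptime over a 30-day month leaves roughly 5 hours of downtime.
hours_per_month = 30 * 24                            # assuming a 30-day month
allowed_downtime_h = hours_per_month * (1 - 0.993)
print(f"{allowed_downtime_h:.1f} hours of downtime allowed per month")  # ~5.0
```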
How much synthetic data (in PB) do you plan to roll out? Since this data isn’t backed by real customer money but is still paid out in STORJ tokens, depending on the amount it will dilute the STORJ token price (in my opinion), ending up in a race over who converts their tokens fastest to get the best price at the beginning of the month.
Can we change the :money_mouth_face: to :roxor_face:? Since it looks like he owns it.
Pretty sure this was always the listed requirement. I doubt they’ll change the suspension level as it hasn’t been a problem. But there’s value in telling node operators the requirement is higher.
Max 5 hours downtime was a requirement even in 2021 when I started. Nothing new here.
Hiya @Bryanm - Exciting, glad things are going forward as intended
For the 100th time: storj is not an investment. It’s a utility token. You receive your payout, you immediately sell. There is no speculation about price going up and/or down. You sell irrespective of the current price.
Everyone can do what they want with it. Even if I got paid in USD I could choose whether to exchange them to EUR immediately or wait for a better exchange rate.
Look at it like this:
- “Stop banging your head against the wall or you’ll get hurt.”
- “You can’t tell me what to do! I’ll do what I want!”
- “Bang harder then.”
Please can we not derail this topic?