That’s amazing: congrats! I wish I saw more posts from SNOs who fill an HDD and actually sound happy about it, like you!
Some here are like:
“Yeah, I guess it’s full. Yeah, I guess I’m getting paid. But there’s TTL data in there. And I wish someone would send me a signed letter from the Prime Minister of Canada authenticating it’s all real customer data… and prepaid for 10 years using gold bars. But I guess it’s still OK. Maybe. Kinda.”
Thanks!
I’ve just started hosting a node. Very basic setup. But I’m already looking into monitoring it better. The graphs look simple, but tell me everything I need to know.
I will take a look into it!
There are probably other people like me who were running colocated hardware in a datacenter that charges for bandwidth usage; with the old usage pattern that was fine. But bandwidth usage has grown so much that I bought a new server with a faster CPU (Apollo 4200 with 28 disks in 2U) and am moving to a different datacenter with 10 Gbps unmetered bandwidth, for the same price as the old place with 1 Gbps and 40 TB of bandwidth.
So until I have finished the move, I am not adding any more space. The testing is a good way to show what the hardware needs to handle, so you can better plan what to buy and where to put it. For example, I used to have all disks in one huge ZFS array with an L2ARC, SLOG, and metadata special device. When the array came under heavy load (all nodes running the filewalker at once), the whole machine just fell over. So I moved to giving each node its own individual drive, and it works perfectly now.
It’s at moments like this that I realize I’m taking in all this ingress thanks to a “bug” in the software.
The Bloom filter (BF) was too small for big nodes over 10 TB, so my 15 TB nodes started to fill up quickly in December and January, to the point that it triggered a search for more drives.
And the timing was perfect: someone had just posted some drives at half price on the local eBay-like site because of cosmetic defects on the cases (dents, scratches, etc.), nothing that affects drive performance, and they were brand new in sealed bags.
I bought 8 x 22 TB drives. He even gave me a further discount on the already half-priced drives.
Fast forward: the filled drives started to lose data once garbage collection ran properly, and I thought rushing into the upgrades had been a mistake. But they were half price, so… oh well.
And now it has proved to be the best move ever, because I’m taking in everything with my 17 nodes running hot.
And all of this thanks to a miscalculated Bloom filter.
These fellas don’t specify endurance in the datasheet. You know why? Because it’s nonexistent. It will fail abruptly and spectacularly any time now.
You would have been better off with RAM disks, if you want to have working databases.
I use a similar drive as the boot drive for my NAS. I made extra sure that zero writes reach this “disk”, and I still completely distrust it. I have a backup copy.
Why? Because I don’t have space in the server for a boot drive. I could attach a SATA SSD on a USB-to-SATA cable, but that’s ugly, so I plugged this small one directly into a USB port on the motherboard. TrueNAS never writes to the boot drive, so it should last forever.
If it still fails to boot one day — I have a backup.
But I don’t worry at all. I know it’s crap. No surprises there.
I was out of options: my 10 TB potato node got hit with the timeout error.
It was either the OS SSD or this. If it fails, it fails. As long as it doesn’t physically catch fire, it will be fine.
Then I’ll send it to Samsung, and I’ll warn everybody here.
Fire extinguisher is around the corner, just in case.
After we have a signed contract, how quickly could you spin up new nodes? That is one of the main discussions internally: how quickly can the network react to the news of a big deal? Is a forum post detailing a signed contract even enough, or will SNOs wait until their nodes are actually full before they add more capacity? Obviously these are important questions for us on the capacity-planning side.
I think a forum post would help (as some do take this seriously)… but the network will react slowly: at best just making sure fast nodes don’t stay full very long. Most will still wait to fill up first… and I doubt many will add throughput (like a faster Internet connection, or control over more /24 IPs).
So available space would expand… but peak speeds would stay the same.
Personally, the issue is that I don’t know for sure what my “steady state” between TTL deletes and ingress will be.
But I, for one, would start adding more capacity as soon as I had confirmation of a deal.
I appreciate, however, that I’m unlikely to go above 200 TB, and that doesn’t even register as a blip on the network, so my opinion is mostly worthless.
EDIT: you’ll also need to rethink the whole vetting process if you want capacity to come online quickly.
Well, I think the answer is pretty simple. SNOs willing to upgrade will likely wait until the data fills their drives. If you want to change that, don’t just announce “big deal signed”, but tell node operators what to expect. Like “We expect x amount of data to be uploaded in y time. Based on the current number of subnets with free space that is about z amount of data per subnet on average. After that time period we expect TTL to rotate that data, so net growth will return to pre-new customer levels.”
It doesn’t have to be exact, but some ballpark figures would definitely make me feel more confident to add capacity upfront. I can usually add capacity within a few days.
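To make the “ballpark figures” idea concrete, here is a minimal sketch of the math a node operator could do with such an announcement. All the numbers below are invented for illustration; they are not real Storj figures:

```python
# Hypothetical announcement figures -- invented for illustration only.
expected_data_tb = 5000      # "x": data the deal is expected to upload, in TB
period_days = 60             # "y": time window over which it arrives
subnets_with_space = 20000   # current /24 subnets with free capacity

# Ingress is split roughly evenly across /24 subnets with free space,
# so each subnet can expect about this share of the new data:
per_subnet_tb = expected_data_tb / subnets_with_space
per_subnet_daily_gb = per_subnet_tb * 1000 / period_days

print(f"~{per_subnet_tb:.2f} TB per subnet over {period_days} days")
print(f"~{per_subnet_daily_gb:.1f} GB per subnet per day")
```

Even rough numbers like these would tell an operator whether their current free space covers the expected burst, or whether it’s worth ordering drives before the data starts flowing.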
It may be worth spreading the word in other places as well as new SNOs help with both capacity and performance, while existing SNOs are likely to only help with capacity.
If Storj announces a usage bump that’s likely to make new SNOs some money… then YouTube content creators will jump on it for the views. Probably anyone who covered Chia before will pitch it as an alternative. They may even get a VoskCoin video out of it.