These fellas don’t specify endurance in the datasheet. You know why? Because it’s nonexistent. It will fail abruptly and spectacularly any time now.
You would have been better off with RAM disks if you want working databases.
I use a similar drive as the boot drive for my NAS. I made extra sure that zero writes reach this “disk”, and still I completely distrust it. I have a backup copy.
Because I don’t have space in the server for the boot drive. I could attach a SATA SSD on a USB-to-SATA cable, but it’s ugly. So I plugged this small one directly into a USB port on the motherboard. TrueNAS never writes to the boot drive, so it should last forever.
If it still fails to boot one day — I have a backup.
But I don’t worry at all. I know it’s crap. No surprises there.
I was out of options; my 10TB potato node got hit with the timeout error.
It was either the OS SSD or this. If it fails, it fails. As long as it does not start burning physically, it will be fine.
Then I’ll send it to Samsung, and will warn everybody here.
Fire extinguisher is around the corner, just in case.
After we have a signed contract, how quickly could you spin up new nodes? That is one of the main discussions internally: how quickly can the network react to the news of a big deal? Is a forum post detailing a signed contract even enough, or will SNOs wait until their nodes are actually full before they add more capacity? Obviously these are important questions for us on the capacity-planning side.
I think a forum post would help (as some do take this seriously)… but the network will react slowly: at best just making sure fast nodes don’t stay full very long. Most will still wait to fill first… and I doubt many will add throughput (like a faster Internet connection: or control over more /24 IPs).
So available space would expand… but peak speeds would stay the same.
Personally, the issue is that I don’t know for sure what my “steady state” between TTL deletes and ingress will be.
But I, for one, would start adding more capacity as soon as I had confirmation of a deal.
I appreciate, however, that I am unlikely to go above 200TB, and that doesn’t even register as a blip on the network, so my opinion is mostly worthless.
EDIT: you’ll also need to rethink the whole vetting process if you want capacity to come online quickly.
Well, I think the answer is pretty simple. SNOs willing to upgrade will likely wait until the data fills their drives. If you want to change that, don’t just announce “big deal signed”, but tell node operators what to expect. Like “We expect x amount of data to be uploaded in y time. Based on the current number of subnets with free space that is about z amount of data per subnet on average. After that time period we expect TTL to rotate that data, so net growth will return to pre-new customer levels.”
It doesn’t have to be exact, but some ballpark figures would definitely make me feel more confident to add capacity upfront. I can usually add capacity within a few days.
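The kind of ballpark announcement described above could be sketched like this. Every number here is a made-up placeholder for illustration, not a real Storj figure, and the 1.875 expansion factor is simply the one quoted later in this thread:

```python
# Hypothetical "what to expect" math for a deal announcement.
# x = expected customer data, y is implicit in the period chosen,
# z = average growth per /24 subnet with free space.

expected_upload_tb = 5000        # "x": assumed customer data over the period (TB)
expansion = 1.875                # redundancy expansion factor quoted in this thread
subnets_with_free_space = 8000   # assumed count of distinct /24s with room

# "z": average raw growth per subnet, since ingress is spread per /24, not per node
per_subnet_tb = expected_upload_tb * expansion / subnets_with_free_space
print(f"~{per_subnet_tb:.2f} TB per subnet on average")
```

Rough numbers like these would at least let an operator compare the expected per-subnet share against their own free space before deciding to buy drives.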
It may be worth spreading the word in other places as well as new SNOs help with both capacity and performance, while existing SNOs are likely to only help with capacity.
If Storj announces a usage bump that’s likely to make new SNOs some money… then YouTube content creators will jump on it for the views. Probably anyone that covered Chia before will pitch it as an alternative. May even get a VoskCoin video out of it.
Isn’t that what the current load is doing? It is way more precise than any number we could give you. And to make your estimation as accurate as possible, I can tell you that we have uploaded maybe 75% of our target. Hard to tell at the moment because of the trash folder, but still good enough for your calculations.
That certainly helps. But only for SNOs who had free space during the full testing period. Which happen to be the nodes that likely don’t need to expand right away. Although the overwriting thing kind of throws a wrench into the numbers as well. There’s nothing wrong with mentioning that that is the best way to gauge it, but I still suggest adding estimates.
I forgot to mention that what would also help a lot is if GC cleaned stuff up more reliably. I still have nodes with 25-30% of data being uncollected garbage. I have much more capacity than is actually in use right now. Should I count that towards the test data, as it’s caused by overwritten segments, or not?
We can’t. I can give you some total numbers. Let’s say our target was to upload 10 PB. Does that mean 10 PB × 1.875 / 20K nodes? That is about a TB per node, so why has my node grown way more than 1 TB? You see, this math doesn’t work. I can’t estimate how big your node will grow.
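One plausible reason the naive average above breaks down is that ingress is distributed per /24 subnet rather than per node, so a node alone on its subnet receives the whole subnet share. A quick sketch, with the subnet count being an assumed illustrative figure:

```python
# Naive per-node average vs. per-subnet average.
# target and expansion are the figures from the post; the subnet
# count is a hypothetical assumption for illustration.

target_pb = 10     # hypothetical 10 PB upload target
expansion = 1.875  # expansion factor from the post
nodes = 20_000     # node count from the post
subnets = 8_000    # ASSUMED number of distinct /24 subnets with free space

avg_per_node_tb = target_pb * 1000 * expansion / nodes      # the "about a TB" figure
avg_per_subnet_tb = target_pb * 1000 * expansion / subnets  # what a lone node on its /24 could see

print(avg_per_node_tb, avg_per_subnet_tb)
```

Under these assumptions a node that shares its /24 with nobody grows at the per-subnet rate, well past the naive per-node average, which is consistent with the point that no single estimate fits every node.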
The guess I’m going with is I have about 2 more weeks of growth like the last 2 weeks… then all the TTL data will start to wrap… and I’ll level out. After that I’m back to natural slow growth.
If you’re looking at the graph most of us check: it’s counting slightly different things so we’re closer to 2/3rds full (assuming real customer data uses about 2.2x the free space when uploaded). Perhaps when capacity-reservation is complete we’ll be near 3/4?