If the node has some data, has been operating for a while, and is getting through the holdback period, it would be a shame to “dismiss it” in the sense of quitting or a Graceful Exit.
You could always just limit the ingress on the more or less inactive nodes.
I had a small node on a drive that I later decided I needed to retire. I set the node size to the recommended minimum of 500 GB and rsync’d all the data over to another drive that I was keeping. Now… I’m technically running two nodes on one drive, which is against the rules, but one of them is this semi-dormant node which I’m keeping around in case I need to expand to another drive.
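For anyone wanting to do the same, here’s a minimal sketch of that migration, assuming a Docker-based node named `storagenode` and hypothetical mount points (adjust to your setup). The usual approach is a bulk rsync pass while the node is still running, then stopping the node and doing a final `--delete` pass so the copy exactly matches the source:

```python
import subprocess

# Hypothetical mount points -- replace with your own paths.
# Trailing slashes matter to rsync: copy the *contents* of OLD into NEW.
OLD = "/mnt/old-drive/storagenode/"
NEW = "/mnt/new-drive/storagenode/"

# First pass while the node is still running (moves the bulk of the data).
subprocess.run(["rsync", "-aP", OLD, NEW], check=True)

# Stop the node gracefully, then do a final pass with --delete so files
# that changed or were removed during the first pass are reconciled.
subprocess.run(["docker", "stop", "-t", "300", "storagenode"], check=True)
subprocess.run(["rsync", "-aP", "--delete", OLD, NEW], check=True)
```

After that you point the node’s storage path at the new location and start it again; the key idea is just that the final sync happens with the node stopped.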
Unfortunately it requires extra engineering time, which is not free.
You could take the existing stats from your own nodes and estimate it from those, or use the public stats available on the Community StorjStats dashboard or at the source: https://stats.storjshare.io. It may help you estimate better.
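If you want to script that estimate, here’s a rough sketch that pulls the public stats. I haven’t pinned down the exact response schema, so treat the URL path and field layout as assumptions to verify against what the endpoint actually returns:

```python
import json
import urllib.request

# The public stats source mentioned above. The exact response schema is
# not guaranteed here -- inspect what comes back before relying on it.
with urllib.request.urlopen("https://stats.storjshare.io") as resp:
    stats = json.load(resp)

# Dump the structure so you can locate the storage/egress figures you
# need to scale your own node's numbers up to a network-wide estimate.
print(json.dumps(stats, indent=2)[:2000])
```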
Basically, it’s hard for Storjlings to do this for your nodes without a deep analysis and a lot of team resources, so it’s a no-go, I’m sorry.
It never crossed my mind that Storj should/could calculate, for each individual node, the amount of data that would be deleted.
That said, I also missed the info “1 PB on EU1 to be removed”.
But the “paid/unpaid data” graphic, only shown recently, could and should have been shared in January.
No. I’m surprised they showed it to us in the recent update: as a private company (with so many S3 competitors), letting those competitors take a peek at unpaid usage doesn’t help you.
They probably hoped a graph like that would result in less crying in the forum. And yet… some just cry that they didn’t get it sooner…
If it made sense, Storj would not need you; they would run the nodes themselves.
It would cost them astronomically more than it costs us, even if we run dedicated nodes. They would need to have their own distributed datacenters or, even worse, buy storage from others.
We do know that payments for select network operators are smaller per unit than for the community network.
I would contest that it would be “astronomically” more costly.
Firstly, supporting a community-based network also has costs. You need engineering time to build the storage node software to much higher standards than you’d need with full control over deployment: making sure the node can run on potato hardware like RPis, on embedded environments like NAS boxes (from many vendors!), and on Windows, each with its own quirks. You need a user interface simple enough that node operators don’t need much training. You need to set RS parameters high enough to deal with unpredictable node churn and operators who cheat by spreading shared storage across multiple /24 IP blocks (the sketch below puts numbers on that). You need to use or maintain a worldwide micropayment network. And you need a support forum, and to deal with node operators who occasionally threaten to strike.
All of this work needs to be funded on top of community network payments.
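To put a number on the RS point: the expansion factor is just n/k, and churn is what forces n up. The figures below are only in the ballpark of Storj’s published defaults (roughly a 2.7x expansion), not exact production settings:

```python
# Rough illustration of why RS parameters drive cost: with k pieces
# needed to reconstruct a segment and n pieces actually stored, the
# network holds n/k times the customer's data. Values are approximate,
# in the ballpark of Storj's published defaults.
k = 29   # minimum pieces required to rebuild a segment
n = 80   # pieces kept on distinct nodes (success threshold)

expansion = n / k
print(f"expansion factor: {expansion:.2f}x")  # ~2.76x

# More reliable, controlled nodes would mean less churn, letting you
# shrink n -- and the saved raw capacity drops straight to the bottom line.
```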
Secondly, there are cloud vendors that do offer non-redundant storage at prices cheap enough that you’d turn a profit even on the current community network payments. I am myself familiar with a local German provider (which has recently become a direct competitor, so I won’t name them here, but you can look at my post history for some profitability statements) that currently offers decently reliable bare-metal machines at close to 1 USD/TB/month. To take advantage of these offers, Storj would need to set up bare-metal monitoring infrastructure, automate node deployment, and establish network rules that would ensure utilization.
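A back-of-the-envelope version of that profitability claim, with every figure an assumption (current community payout rates are roughly $1.50/TB-month stored plus $2/TB of egress; the utilization and egress ratios are guesses, not measurements):

```python
# All figures are approximate assumptions, not quotes.
hardware_cost = 1.00   # USD per TB-month of rented bare-metal capacity
storage_payout = 1.50  # USD per TB-month of data actually held
egress_payout = 2.00   # USD per TB of egress served

utilization = 0.8      # fraction of rented capacity actually filled
egress_ratio = 0.05    # TB of egress per TB stored per month (guess)

revenue = utilization * (storage_payout + egress_ratio * egress_payout)
margin = revenue - hardware_cost
print(f"margin: ${margin:.2f} per rented TB-month")  # ~$0.28, positive
```

The margin is thin and sensitive to utilization, which is exactly why the network rules ensuring utilization matter in this scenario.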
My belief here is that explicitly dealing with dozens of such providers while taking control of the OS-level setup would be comparable work to maintaining the current community network, given that a lot of the setup work could be shared while significantly reducing software-level deployment differences. I maintained a similar setup at my previous workplace, with deployments spanning… I recall it was eight different cloud and bare-metal environments, with still plenty of time left for other tasks.
The resulting network would likely have different operational parameters, costs, and risk profile, sure—better for some use cases, worse for others. But so far I have not seen any indications that it wouldn’t work.
Does it offer the same level of redundancy and speed?
Maybe it does not cost astronomically more, but why price it at double, if the Select paradigm is less expensive? Maybe to not piss off node operators?
Anyway, I agree it is possible that the entire Storj saga will end with the company having developed an extremely flexible way to scale bare-metal storage and dynamically source it from multiple vendors. The entire SNO/distributed/crypto complication might then be phased out.
I believe Storj earns a bit more margin from Select: the commercial operators accept a slightly lower rate (for not having the /24 rule applied, so they fill faster), while customers pay a slight premium for SOC 2. If so, then it seems like everyone gets what they want.
I can’t see Storj replacing SNOs: part of their secret sauce is the diversity of nodes, which offers the performance that comes with massive parallelism… for customers anywhere in the world. They can’t duplicate the diversity of the current 28,000+ nodes by running their own gear in a handful of datacenters. If they just run things in a couple of places… they’re just another iDrive.
I don’t see why you wouldn’t be able to match these parameters if needed. Storj might actually have better leverage to tune these parameters themselves if they were in control of the storage, as opposed to attracting node operators, where they basically have to accept whatever comes.
Depends on the customer’s requirements. Sometimes, for some niche usage patterns, it’s faster than even Storj Global with a geofence. If you remove the geofence, I wouldn’t be so sure that it’s possible to outperform 28k nodes across the world.
What, why? This might be a language-barrier thing, but surely it’s in Storj’s interest to get paid for a product they’re not actually delivering. That’s getting paid for not doing anything.
Over time, having customers who are not getting what they pay for is a net negative for both parties. But if a customer is willing to pay more, and in turn gets the assurance that their storage allocation truly is theirs, then it’s all good.
Yeah I wouldn’t assume “guaranteed usage” == “paid for the entire space from the first day”. You can write one bit to a drive, and say it’s “used”. Even just not having the /24 rule is a pretty attractive bonus by itself.