By the way, payouts only for used space, without payments for egress, will likely incentivize building huge setups (with /24 circumvention, of course) that offer bare availability only to pass audits, with something like several kbps of egress.
This will not improve egress speed from the network; in that case we would be forced to increase the erasure coding redundancy, so the expansion factor would also increase, and we would most likely end up with a negative balance in the end.
“By the way payouts only for used space without payments for egress will likely incentive to build huge setup (of course with /24 circumvent) with bare availability only to pass audits, something like several kbps for egress.”
It is probably even dangerous, because users will likely not understand the effect of fewer pieces and only see the money they can save. But they will loudly complain on social media when they lose a file as a consequence of their own decisions and blame Storj. This will give Storj a bad name.
They suggest spreading pieces not across 80 nodes but across all 22k nodes, no fewer.
He suggested reducing redundancy and letting the customer decide how much safety they want:
Oh I see. I mean the initial idea, not this one, sorry. I wouldn’t allow lowering the expansion factor unless it’s really safe, and that should be modeled correctly.
Reducing redundancy by optimizing the Reed-Solomon settings is fine.
However, it must never drop below the numbers required to avoid losing a file. I would say a single lost file and Storj is dead.
And if a bottom limit is required, you can apply that limit to all customers, and there is no need to allow clients any setting below it.
So Storj would have to set an absolute bottom limit.
But of course, if a customer wants more for whatever reason, why not allow that and charge the customer nicely for it?
If it were combined with regional options, so that a customer could specify specific geographic regions where they need additional pieces for better availability, or generally better worldwide spreading, I think that would be a cool feature.
That would be a NEAT feature, AND what’s more, it solves the scaling bottleneck for popular files. 1GB or 10GB or even 100GB in 22000 pieces is relatively CHEAP, and the availability it offers is simply UNMATCHED, take that Big Cloud!
A new BIG gun in marketing arsenal for STORJ!
1GB would be 0.001024TB * $1308 = $1.34 per mo to store a 1GB file on 22000 nodes.
10GB would be 0.01024TB * $1308 = $13.39 per mo to store a 10GB file on 22000 nodes.
100GB would be 0.1024TB * $1308 = $133.94 per mo to store a 100GB file on 22000 nodes.
That’s with the current $1.50/TB/mo storage rate for SNOs.
But imagine this combined with a new price for egress: $2.50 (instead of the current $7),
with a model where SNOs would be paid $2.50/TB/mo for storage;
then it would be like this:
1GB would be 0.001024TB * $2181 = $2.23 per mo to store a 1GB file on 22000 nodes.
10GB would be 0.01024TB * $2181 = $22.33 per mo to store a 10GB file on 22000 nodes.
100GB would be 0.1024TB * $2181 = $223.33 per mo to store a 100GB file on 22000 nodes.
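A short script can reproduce both rate scenarios above (a sketch; the $1308 and $2181 aggregate per-TB figures and the 0.001024 GB-to-TB conversion are taken from the posts in this thread, not derived or official numbers):

```python
# A quick check of the arithmetic above (the per-TB rates $1308 and $2181
# are the aggregate figures quoted in this thread, not official numbers).
def monthly_cost(file_gb, rate_per_tb):
    """Monthly cost of storing file_gb across 22,000 nodes at rate_per_tb $/TB/mo."""
    tb_stored = file_gb * 0.001024  # GB -> TB conversion used in the posts above
    return tb_stored * rate_per_tb

for size_gb in (1, 10, 100):
    print(f"{size_gb:>3} GB: ${monthly_cost(size_gb, 1308):.2f}/mo at the $1.50 SNO rate, "
          f"${monthly_cost(size_gb, 2181):.2f}/mo at the $2.50 rate")
```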
Still a small price,
BUT egress for customers would be $2.50, not $7!
SO it's more likely they would want to USE the STORJ service!
(Because you upload files to make use of them, and the egress price is a kind of wall preventing every customer from using it, because it hurts the pocket!)
BUT, if egress for customers drops from $7 to $2.50, and the overall nominal cost to customers stays the same ($8.50 + $2.50 = $11), Your profit stays nominally the same ($2 from every 1TB of egress),
that drop can trigger a minimum of 2 times more egress,
and You, STORJ inc., suddenly earn a minimum of 2 times more than currently!
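This claim can be sanity-checked with a toy calculation (assumptions from this thread: a $2/TB Storj margin in both schemes, e.g. $7 customer / $5 SNO today vs a hypothetical $2.50 customer / $0.50 SNO; the 2x usage increase is the post's hypothesis, not a measured fact):

```python
# Toy model: Storj's egress profit = volume * (customer price - SNO payout).
def egress_profit(volume_tb, customer_price, sno_payout):
    return volume_tb * (customer_price - sno_payout)

baseline = egress_profit(100, 7.0, 5.0)   # today's scheme, 100 TB of egress
proposed = egress_profit(200, 2.5, 0.5)   # same $2/TB margin, hypothesized 2x volume
print(baseline, proposed)                 # profit scales with the volume
```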
Maybe ask customers:
would they rather pay $4 + $7 or $8.50 + $2.50?
Me personally? I would like to pay more for storing a video in order to pay 2.8 times LESS for its distribution.
Because that's what it is for; that's why I'm uploading a video to a host:
for it to be downloaded en masse and watched!
So I will use the host that charges less for distribution!
And combined with UNMATCHED STORJ advantages like zero-knowledge encryption and worldwide distribution across all nodes, it really becomes a NO-BRAINER to choose STORJ over anything else!
For everyone who wants their video to be watched smoothly, no matter the place in the world!
I’m telling You!
No other held amount than always the last 2 months, even after 15 months.
And You didn’t reply to me on how much it currently costs if a 20TB node quits without a “graceful exit”.
Well, couldn’t it be done by still encoding a file into 80 pieces, but at the default level uploading only 60 to nodes? It still needs only 29 pieces to reconstruct the file, so You don’t have to mess with the Reed-Solomon settings, and STORJ would finally stop LOSING money on storage and start profiting 25% on every 1TB stored.
Well, probably? I'm telling You, it's a wholesome solution.
Additionally You have to make sure, for example, that 8 nodes in the same computer in the same apartment (/24 circumvention) don't get more than 1 piece of a given file (out of 80 pieces total).
And how would You know which nodes are in one /24 subnet,
if they all hide behind VPNs?
You would need to drop the incentive to do so.
Why are people gaming the system and running many nodes in one location against the ToS?
Because the payment for, say, a full 20TB node is currently $30, plus some ridiculously small money from egress at $5/TB (maybe 1TB out of 20TB egresses in a month), so it's $35/mo for a 20TB HDD in constant use. If it's not some Ultrastar, it will likely die in 2-5 years under such load. Who wants to condemn their equipment to certain death for $35 a month, minus the cost of rising electricity prices, minus inflation, minus the cost of watching over it so it works 24/7/365 non-stop?
The cheapest good-quality 20TB HDD is around $300 used. That's 15-24 months just to cover the cost, and if You want this SNO to stay an operator, then he has to have money to replace the HDD in case of failure; otherwise he loses the TIME and EFFORT of this whole STORJ operation if after 2 years he ends up with NO 20TB HDD and NO money for even the same used one. So what's the point of being a SNO?
And that's assuming the node is full of 20TB of data from the start, and we know that filling the HDD from 0 will take 2 to 3+ years! So there's a minimum of around 2 years of wearing out the HDD for STORJ before the node even starts to earn optimally, and then something happens to the HDD; what now!?
Wouldn't he be better off just keeping that HDD for himself, occasionally used?
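The payback arithmetic in the posts above can be made explicit (a sketch; the $300 drive price, the $35/mo full-node income, and the power figure are the numbers quoted in this thread, not universal values):

```python
# Sketch: months until a drive bought for STORJ pays for itself.
def payback_months(drive_cost, monthly_income, monthly_power_cost=0.0):
    net = monthly_income - monthly_power_cost
    if net <= 0:
        return float("inf")  # the node never pays back
    return drive_cost / net

# $300 used 20 TB drive earning $35/mo (full from day one, power ignored):
print(f"{payback_months(300, 35):.1f} months")
# Same drive with a ~$16/mo share of electricity for a dedicated PC:
print(f"{payback_months(300, 35, 16):.1f} months")
```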
So You see, 1 node, even with a big 20TB HDD, isn't worth keeping a computer on 24/7/365; it's just not effective at current SNO rates. A 90-watt computer running 24/7/365 is about $197/year alone if 1 kWh costs $0.25 total (that's the current price in Poland, and $0.50/kWh beyond a 2000 kWh limit that resets yearly; such a computer uses 788 kWh/year).

Whatever, make it a Raspberry Pi at, I don't know, maybe 15 watts total with 1 HDD: that's 131.33 kWh/year, $32.83/year for power. But Raspberry Pis don't grow on trees for free. I never had one, but I have normal PCs and they eat like 60-100 watts idle. And a large percentage of people have no reason to run a PC 24/7/365 other than STORJ.

I run a BIG, 170-watt PC with 14 HDDs, only for STORJ. If those 14 drives fill, I will be sitting on around 150TB of data. That's $225 a month. Do You think that's enough to pay a man to secure 150TB of Your data? Yes or no, leave a comment. (How much will it cost if 150TB goes without graceful exit? Just asking for a friend.)

Don't look at me, I'm telling it as it is: people BUY used HDDs for STORJ only, so don't expect otherwise. You pay, so the incentive appears. It's either this, a PC dedicated to STORJ 24/7/365, or no PC and no node. People who already have a use case for a PC running 24/7/365 at home are too small a percentage, and they are already busy with what they are doing to look at some spare change for the effort of starting with STORJ and learning how to operate it. They won't let even 1 or a few private HDDs they have in the NAS be totaled for pennies.

And I earned mine long ago; I'm already paid off for a few years up front, even though this PC cost me a total of $2500. I bought it with STORJ money; it's full of enterprise-grade HDDs I earned from the first months of surges. So I'm forever grateful, and I swore to myself that my PC would serve STORJ faithfully till the end, mine or STORJ's. Anyway:
So people have to have a few HDDs in the computer for it to make sense on electricity alone.
If someone has 2 HDDs, then the problem with the /24 limit occurs. What to do?
The 2nd node has to have a different IP now, or both nodes will fill with data 2 times slower!
If You have 3 HDDs, then 3 IPs, or data will come 3 times slower, etc.
So without gaming the /24 rule, there is no hope of a return that even covers the cost within 2 years.
So the incentive to break the ToS /24 rule is the too-slow data inflow.
A 20TB node has to fill in 1-2 years, even with 5 or 10 nodes in the same /24 subnet.
That's the bare minimum. You don't see a huge exodus yet, because it's been just over 1 month since the new rates took effect. But for the long term, I showed You the costs above. Not many will want to join the network as a node at such rates, and those who do join will realize it after some time, resulting in ungraceful exits (again, how much does that cost?). And if the USD drops in value globally, it will be even worse.
Either You make data flow in fast, or You raise the SNO storage payout from $1.50 to $2.50 given the current data inflow, at a bare minimum.
And this, combined with a $2.50/TB/mo storage rate vs $1.50 currently, will make node operation sustainable.
If 20TB fills in 1-2 years or faster, then paying the additional cost of a VPN will be unnecessary.
There is no other option: You have to remove the REASON people game the system.
And nodes will drop their VPN subscriptions, revealing their true IPs, and then it will be visible to STORJ.
So with just the current implementation, two pieces of the same segment will not fall into the same /24 subnet, if I remember correctly.
Fortunately, You CAN raise it, AND simultaneously make the STORJ service more attractive for customers!
At the same time securing STORJ future to ever expand,
by securing nodes work.
And making sure Storj inc. earns both from storage and from egress!
So paying SNOs $2.50 for storage sorts out:
- the /24 limit problem,
- the held-amount-for-too-long problem,
- the sudden quitting problem,
- the small-node profitability problem,
- the node centralization problem,
- the too-low customer usage problem,
- the slow data inflow problem (faster HDD filling),
- the STORJ income problem.
And You no longer have to think about making 60 pieces instead of 80 the default.
The storage price would be $8.50/TB/mo for customers with the current default 80/29 redundancy. And STORJ inc. would profit 15-25% from every TB stored per month, instead of losing money like now.
“with bare availability only to pass audits, something like several kbps for egress.”
When making a Storj node, one agrees to the given requirements:
- One processor core
- Minimum 550GB of available disk space
- Minimum of 2TB of available bandwidth a month
- Minimum upstream bandwidth of 5 Mbps
- Minimum download bandwidth of 25 Mbps
- Keep your node online 24/7
5 Mbps gives a max of roughly 1.6TB of uploaded data in a month.
If those are the requirements, then the node also has to be audited for this 5 Mbps parameter, every day.
If that's what's needed to secure a profitable STORJ future for everyone, then isn't it WORTH doing?
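As a rough check of that 5 Mbps figure (decimal units, a 30-day month, and the simplifying assumption that the link runs flat out the whole time):

```python
# Maximum monthly transfer for a sustained link speed, in TB.
def monthly_tb(mbps, days=30):
    bytes_per_sec = mbps * 1e6 / 8              # megabits/s -> bytes/s
    return bytes_per_sec * 86400 * days / 1e12  # bytes over the month -> TB

print(f"{monthly_tb(5):.2f} TB/month at a sustained 5 Mbps")
```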
It could be zero. It depends on how many healthy pieces the segments have; the repair job is triggered when the number of healthy pieces falls below the configured threshold (56 at the moment). Your node contains only one piece out of 80 for a segment.
Otherwise it will cost as described in the post above.
Maybe. But this should be well modeled. The current model is 80/29. Lowering redundancy without proper investigation is too risky, especially if you account for nodes trying to bypass the /24 limit.
Right now 60 is too close to the minimum health threshold of 56 pieces, so we would likely be forced to repair even new data much earlier than with 80 pieces, where repair may never happen at all (the customer may delete their data well before the segment reaches the repair threshold).
I do not know. But I would like to read ideas on that topic.
There are usually at least two solutions: make it economically not viable, or apply a technical solution. Lowering prices may not be a good way to solve it, but a VPN is not free, so less profit with more headache should reduce the number of such nodes.
The technical solution could be
To share unused space and bandwidth on hardware that is already online and paid for, where literally any income is pure profit and a nice discount on existing bills. The whole idea is not to make any investments in the first place. So if you decided to invest, it's your responsibility.
Unfortunately it will not prevent breaking the /24 subnet limit; likely the reverse: you are paid more if you store more. And since egress is not paid, there is no incentive to have good upstream bandwidth; you would technically need only good downstream bandwidth to allow customers to fill your drives.
Combined with the customers' desire to use free egress, it will end in conflict: they would not be able to use this free egress, because the backing nodes would throttle it, so they could use our cloud only as a cold backup. Low revenue without paid egress, with a high risk of losing data if we also implement your suggested lower redundancy.
Removing the /24 subnet limit would make the situation even worse: now you would have several pieces of the same segment in the same physical location. So if your hardware is off for any reason, the whole segment is in danger. Thus the probability of losing files, and then customers, becomes much higher.
No, it doesn't; more likely it will be exploited even more often. We saw this in V2, so it will likely repeat.
Maybe I didn't get it: how do you want to prevent placing more than one piece of the same segment in the same physical location?
If we used the current node selector (one node per /24 subnet), then this is exactly the limit we have now.
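The /24 rule under discussion can be sketched as a simple selection filter (an illustration only, not the actual satellite selector; the node list and IDs below are made up):

```python
import ipaddress
import random

def select_nodes(candidates, count):
    """Pick at most one node per /24 subnet, then sample `count` of them.
    Illustration of the /24 limit discussed above, not the real selector."""
    by_subnet = {}
    for node_id, ip in candidates:
        subnet = ipaddress.ip_network(f"{ip}/24", strict=False)
        by_subnet.setdefault(subnet, []).append((node_id, ip))
    one_per_subnet = [random.choice(group) for group in by_subnet.values()]
    return random.sample(one_per_subnet, min(count, len(one_per_subnet)))

# "A" and "B" share 203.0.113.0/24, so at most one of them is ever selected.
nodes = [("A", "203.0.113.10"), ("B", "203.0.113.99"), ("C", "198.51.100.7")]
print(select_nodes(nodes, 2))
```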
Sure, You can't have it working without answering the question at the end of the conclusion positively. I'm disappointed You replied to segments of my writing where things are explained further on in my text and come together. Your answer does not take the whole post into account. Your answer is a mishmash of options I mentioned, but out of context. Some things are meant to work only together with others; some could work without the others mentioned. Please don't mix them without context. I stand only by what I wrote in the post before this, in the particular order I wrote it there. So I won't repeat too much here, because it's all there. I will just add this to clarify:
Yes, first, let's say 1 apartment has 8 nodes in 1 computer. Each of the 8 nodes has its own VPN or proxy, resulting in a different IP: same apartment, same internet connection. If the SNO takes off the VPNs, all nodes fall back to the 1 original home IP, right? That's it.
You have to eliminate the reason why the SNO put those VPNs there in the first place. He takes the VPN off and falls under the /24 rule, BUT it's not a problem anymore! That's the point. STORJ knows not to put more than 1 piece out of 80 on those 8 nodes, and the node owner is fine with it. What has to be done for the node owner to be fine with it is in my previous post. Just understand why the SNO does it in the first place, and eliminate the need for it. Let me explain further, please:
There is no such thing as a majority of SNOs who run, and I quote:
Sure it is nice, BUT I highly doubt that this is the backbone of STORJ.
Because this is against the logic of incentivization: if You offer a payout for running a STORJ node, more people will create a node from nothing than will use an already-running 24/7/365 computer, because most people interested in earning pennies don't have a computer working 24/7/365 (it COSTS electricity), and they are clearly in need of money in the first place if they are interested in hosting a STORJ node, like I was. Because who has a server at home running 24/7/365? Maybe later they add something to that PC on the side, like a home server for something, but that comes along with the STORJ node.
I'm just saying You can't rely on building the network out of people who already have a server in the house, because who had one before STORJ? We had miners with graphics cards, but not with HDDs. And now You won't run a computer 24/7/365 for STORJ with 1 or 2 HDDs and have it make economic sense: You will lose TIME, EFFORT and MONEY. You have to think things out, prepare, and invest in used HDDs at most, because most people don't have spare HDDs to be destroyed for pennies, keeping in mind that they become losses in 2-5 years, so You have to earn at least enough for the next ones. Just like I explained in the post before this: You have to calculate how to do it so You end up in profit and don't just destroy Your private HDD in exchange for pennies.

And 3, 2 or 1 year ago STORJ wasn't such pennies. I deeply regret I wasn't able to put 150TB in place from the beginning; I had to wait to sell some STORJ at $2.24 somewhere in Nov 2021, and only this year had the opportunity and time to buy used HDDs cheap. I don't deny that I saw people who had servers and added a STORJ node later, sure, but not a majority, based on my understanding. You can do a mail poll; You can ask Yourself whether SNOs started keeping a PC on 24/7/365 because You called for action and they wanted to earn, or otherwise. Just don't expect honest answers; they will assume You are cooking something up against them for bypassing the /24 rule.
Also very quick:
If You would just like to change the default 80/29 to 60/20 so STORJ stops losing money from now on and earns on storage (currently something like 20,000 USD to be earned monthly if implemented), then the threshold of 56 also needs to be adjusted accordingly, probably to something like 42 (30% like now, or more like 45). But that would be just a short-term move. Much better is to reform the whole system, maybe more or less like I mentioned.
Also, it isn't set in stone. It CAN be tweaked.
That's why I'm publishing it here: to be discussed.
It doesn't have to be $0 for egress if STORJ inc. has a profit of $2.50/TB.
You can show SNOs Your generosity and share some later.
But to have what's vital for me as a SNO and for the network's survival, $2.50 for storage,
I'd rather forgo the egress and repair payments if necessary.
It is either more or less this, or nodes decline to death at the current pay rate.
You don't really think You can expand, or even keep, the network paying nodes $1.50/TB/mo for storage, and $5/TB for upload (egress) when that upload is practically nonexistent?
While at the same time asking customers $7/TB for egress, where wholesale industry prices are close to $1-2?
(hetzner cloud up to 20TB free, later €1.19 per TB.)
Sure, it's apples to oranges, but customers compare price to price and that's it.
All I'm asking is: don't wait for a crisis, start preparing a reform now.
Sorry, but there's no need to quote the whole post, otherwise it would not be readable. You can always click the arrow button to see the whole post.
You raised a lot of points involving investments into hardware, which I cannot suggest, support, or promote; it's also against our own recommendation to use what you have now.
For earning purposes there will be even more incentive to try to break the /24 subnet limit (which makes sure only 1 of the 80 pieces is placed on those 8 nodes), because if they stop, they will receive 8 times less data; even if it were paid almost 2 times more, it would still be 4 times less than before.
I believe that if we increase the price for storage and remove egress payments, this non-altruistic actor will not stop using a VPN, because now they would get almost 2x more than before.
This should be properly modeled; if this proportion worked, it could be implemented.
I still think that without a proper incentive not to throttle egress, that is what will happen. If you are not paid for egress, a byzantine actor will likely throttle it by selecting a cheaper ISP plan (since you are suggesting making investments, everything should be considered).
Yes, that's true, BUT think what effect it would have on the network; it won't all happen instantaneously. First, if You are able to lower the egress cost for customers from $7/TB to $2.50: I foresee that usage of STORJ will grow, and data inflow can increase God only knows how much. I think there is no other option than to arrange these prices better for the needs of SNOs, customers, and STORJ inc. simultaneously, and find out!
Only then, depending on how much that data inflow increases, can the process of SNOs dropping VPNs start. But it's a process; during it there will always be SOME with VPNs, still gaming the /24 rule! BUT if the increase in data flow is big enough that a 20TB HDD can fill in a reasonably short time, then VPNs will be just an unnecessary cost!
I’m talking here, like crazy fast!
Like in 6-12 months, or even 3 months; then who needs VPNs to bypass the /24 rule? Besides the cost, VPNs are a constant pain: they can disconnect by themselves, or the app will freeze. They impose transfer limits too; it may turn out to be faster to fill the HDD without a VPN!
They also cause latency delays even on downloads, and I don't remember: is there a race for downloads (ingress) among nodes too?
We can get rid of any byzantine actor willing to cut egress or downgrade their internet plan:
- You set the requirements for upload and download for a given node size.
- You implement measures to check whether a node is keeping the agreement.
- You can suspend and disqualify a node if it is not.
That covers egress. And the internet plan: an SNO might be better off with a higher internet plan in order to fill the HDD faster and get paid better. We're talking about a scenario where STORJ grows such that 20TB fills in 3-6 months per SNO on average, so an SNO could freely add HDDs to existing nodes without needing to bypass the /24 rule, because filling would not be any faster with a VPN on each HDD. I'm talking about the benefit in filling HDDs from having a better download speed without a VPN surpassing the benefit of having nodes on VPNs, getting the same pieces on each node but slower in the end. Also, VPNs could not withstand so much traffic for the price: recently Mullvad VPN closed port forwarding, officially because of abuse in the content people hosted, but in practice? They had very good upload speeds and cost only $5/mo per 5 devices with unlimited bandwidth; I think the traffic got to them too. VPNs usually have low upload speeds, so now an SNO, even without intentionally lowering egress, risks suspension or DQ if You implement a daily egress speed audit.
I shouldn't say that, because who knows, maybe You will implement it fast and I'll be in trouble too soon… I hope nobody reads that!
I think I touched the heart of the matter… And a proverb comes to mind, whose gist is something like: “People tend to do the sensible thing when all else fails.” My concern is that You will eventually do some things You aren't thinking of doing now, but it could be a little late, when the node count keeps falling and customers aren't coming. I think I foresee that. I don't want to take Your time now; I'll maybe write back here if the counter shows far fewer nodes. Thank You for Your responses, which allowed me to clarify more details.
That's the problematic part. The traffic flows directly between customers and nodes without any middleman (except when they use Gateway MT). So you cannot actually check the speed. It also depends on the distance (and number of hops) between the node and the customer. The speed between your node and a customer in the next apartment will likely be higher than between your node and a customer on the other side of the Earth.
The speed between the node and auditors doesn’t matter, unless we implement a distributed network of independent auditors (see Distribute audits across storagenodes). The distributed network of auditors could be used to get some believable statistics, which could be difficult to trick. However it’s difficult to implement.
Can't You just make a customer account, name it “auditor”, and update the satellite options so they can select that one client account “auditor” and enable it to upload to every node there is?
If only a customer can measure, become one: one who uploads a file to all nodes once, and just downloads it every day, measuring the transfer speed. For example a 5MB file, or whatever the smallest size is that still measures the speed correctly? At the same time, You would make it possible for the satellites to enable my idea of storing a piece on all nodes.
Sure, but they could already do that with Gateway MT, which would already have a lot of actual traffic to monitor. The issue is not that they can't create a single measuring entity; the issue is that it would only measure from a single source location instead of from all around the world. They could of course create a flock of globally distributed “customers” or “auditors” to be a bit more precise, but that's quite a lot of overhead to manage. And a single source location just doesn't provide enough information.
It could be done safely by increasing the total number of pieces in such a way that the reliability remains the same. I did some calculations a long time ago, but I don’t remember exactly. It was something like going from 29/80 to 60/120 that would give similar reliability. However, this has many different side effects as well. While it could speed up transfers for larger files, it will also lead to more connections and could increase overhead for smaller files. There could be an argument to also increase segment size to prevent pieces on nodes from getting even smaller than they already are, but that again has broad ranging impacts and limits the amount of parallel segment transfers larger files could use.
All of this said, I don't think this should be a customer setting. If these changes are made, they should be made responsibly and by Storj Labs after doing the appropriate analysis. As for increasing the redundancy, this shouldn't be necessary. Storj has never lost a file so far, and other aspects than node redundancy become the bigger risk anyway. (Storj Labs going out of business, satellite issues, coding mistakes causing catastrophic failure.) It doesn't make sense to have the customer decide to increase only one aspect of redundancy, without them knowing the exact impact of that, while also leaving all those other aspects unchanged. In my opinion this would give a false sense of higher reliability and it would just be ripping the customers off and filling the pockets of node operators and Storj Labs. (On second thought… I'm a node operator, let's do it! )
For example, a customer might think that doubling the number of pieces doubles the reliability (29/80 → 29/160), but in reality this is hundreds or thousands of times more reliable (didn't feel like doing the math). The top post already shows this misconception by suggesting an outrageous amount of 1000 pieces for a segment, or even storing pieces on all nodes. This would also require a lot more CPU to do the erasure coding, by the way. The upload would be extremely resource heavy and take a long time. At some point the question is: do you want to pay the normal amount and lose no files, or do you want to waste resources and money by paying over 10 times as much and still… lose no files?
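The "hundreds or thousands of times more reliable" intuition can be illustrated with a toy binomial model (a big simplification: it assumes every piece is lost independently with probability p before any repair runs; the p = 0.5 used below is deliberately absurd just to make the numbers visible):

```python
from math import comb

def segment_loss_prob(k, n, p):
    """Probability that fewer than k of n pieces survive if each piece
    is independently lost with probability p (no repair modeled)."""
    survive = 1 - p
    return sum(comb(n, s) * survive**s * p**(n - s) for s in range(k))

# Doubling the piece count at the same k does not merely halve the loss
# probability -- it shrinks it by many orders of magnitude:
print(segment_loss_prob(29, 80, 0.5))    # 29-of-80
print(segment_loss_prob(29, 160, 0.5))   # 29-of-160, astronomically smaller
```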
Storj can’t afford the reputation hit of losing files. So since that’s already a necessity, let them balance it to make sure files are stored reliably and don’t offer customers a meaningless reliability upgrade just to squeeze more money out of them.
That’s a different story though. This could indeed really help and would be a much better reason to offer options. But I think this would really work best when the dynamic scaling suggested in the whitepaper is implemented. Peak demand is volatile and storing many pieces all the time is very wasteful. Besides, since this is inherently a high egress scenario, egress income could cover the cost of expansion (which would basically be the same process as repair, except to create new pieces instead of replacing lost ones) and the cost of additional storage temporarily.
That would do the opposite. You would go from an expansion factor of 80/29~=2.76 to 60/20=3. That would be more expensive and might even be less reliable. Maybe you meant 60/29, but that would definitely be less reliable, especially if you lower the repair threshold as well. Definitely a no go.
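The expansion-factor arithmetic above, as a quick check (the 60/29 variant is the interpretation suggested in this reply, not something the original poster confirmed):

```python
# Expansion factor = stored bytes per logical byte under erasure coding.
def expansion_factor(total_pieces, required_pieces):
    return total_pieces / required_pieces

print(f"80/29: {expansion_factor(80, 29):.2f}")  # current default
print(f"60/20: {expansion_factor(60, 20):.2f}")  # the proposal: MORE expensive
print(f"60/29: {expansion_factor(60, 29):.2f}")  # cheaper, but less reliable
```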
Yes, I believe 110 piece transfers are started and the slowest 30 are cut off.
Oh, hi Bright!
Hah, I like it!
Let's have some fun!
Even if I just want to make sure a file will be archived for as long as the STORJ network operates?
Example: say I have old documents. I believe the network will live on, like Bitcoin does, potentially forever, so I scan them and upload them to STORJ as my archive of choice. I want the maximum possible assurance that they will be there as long as the network exists. I won't be downloading them often, but I'm willing to pay more for good storage.
I'm a private person, but say I'm a government or an institute. A fair use case for the settings?
a 100MB file would be like 74GB to upload to all 22000 nodes!
Or imagine a developer of a leading program with a new version,
or better, a game premiere with a 70GB download installer.
A 70GB file would be 51.8TB to upload! Ha ha haaaaaa…
My 300 Mbps home connection would complete that task in…
(37.5MB/s = 2250MB/min = 131.83GB/hour;
51.8TB = 53043GB;
so 53043GB / 131.83GB/hour ≈ 402 hours)
so ~17 days!
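The same estimate as a script (using this thread's own assumptions: 22,000 pieces, 29 required, a sustained 37.5 MB/s upload, and the binary GiB conversion used above):

```python
# Time to upload a file erasure-expanded onto every node in the network.
def upload_days(file_gb, total_pieces=22000, required=29, mb_per_sec=37.5):
    expanded_gb = file_gb * total_pieces / required   # erasure-coding expansion
    gb_per_hour = mb_per_sec * 3600 / 1024            # ~131.8 GiB/hour at 37.5 MB/s
    return expanded_gb / gb_per_hour / 24

print(f"{upload_days(70):.1f} days for a 70 GB installer")
```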
Hahaa, but that option is definitely for professionals, not me, a home user.
But say a few-KB wallet.dat: that I can upload easily.
I read that section 6.1 Alexey gave a link to, and I don't like this part:
“If a file’s demand starts to grow more than current resources can serve, the Satellite has an opportunity to temporarily pause accesses if necessary, increase the redundancy of the file over more storage nodes, and then continue
How would that look?
Would it be reliable?
Imagine a video is a sensation and suddenly a million people want to access it, and it's stored in 160 or 240 pieces; what then?
Will there be time for any pausing of access and increasing of redundancy (and to how much? How do you know how many more nodes to increase it to)?
Therefore I think, if You are publishing a website or have a new software release,
it is wiser to give the file more redundancy up front.
You still don't know how much, but at least You can give more by default!
STORJ customers are no derps. This service is positioned as enterprise class.
I guess those people know what they are doing, and why, when they do it.
Maybe You should ask them how interested they are in such an option.
I know I would expect it to work by default, so I don't have to think about any manual settings, CDN-wise. But if STORJ doesn't have that? There should be some easy way for a customer to enable a file to scale to thousands or millions of people, either after spotting demand or up front. What do You think?
Because without this, You can't just invite more customers and more traffic to the network.
It is crucial for STORJ's growth and survival to do so, to invite people to host videos for a mass audience, but the files have to scale somehow! Or those customers will be disappointed and angry.
Yeah, and look, I just think we need to better arrange these puzzle pieces of price and benefit for the current needs of all: SNOs, customers, and STORJ inc. at the same time.
I laid out my reasoning for changes.
No. 1: instead of $7/TB egress, making it $2.50/TB could greatly increase customer usage,
resulting in an X-times increase for video sharing (which is ~80% of the internet's traffic)
and other multimedia.
Changes resulting from my proposal:
STORJ Inc.:
- +$1.5/TB egress profit (from $1 to $2.5 for every TB of traffic)
- +25% to +15% profit on every TB stored (from -$0.13 to +$1 or +$1.6 for every TB stored)
- 2 times more paid out for storage, BUT 2.8 times LESS cost for traffic
Customers:
- nominally no change (from $4/TB storage and $7/TB egress to $7.9-8.5/TB storage and $2.5/TB egress)
SNOs:
- +$1/TB for storage (from $1.5/TB to $2.5/TB)
- BUT egress from $6/TB to $0/TB (or $X/TB if STORJ Inc. wants to share its $2.5/TB pool)*
*The problem is that egress is practically nonexistent now.
I blame the current customer rate of $7/TB, when classic cloud offers as little as €1.19/TB (with the first 20 TB free), like hetzner.com.
In this situation SNOs are condemned to a slow death, with no egress "wind" and too low payment overall. To repair this situation, SNOs could get a flat rate for node operation in total.
And maybe some additional reward for any egress, like $0.5/TB, or even $1.5/TB if STORJ Inc. is willing to keep its existing egress profit rate at $1/TB. The hope is that lowering the customers’ price for traffic would result in increased interest and traffic, which in time would even surpass SNOs’ current earnings from egress at $6/TB, despite the new rate.
Additionally, SNOs would from the start end up with a higher overall payout, even at $0/TB for egress, than they get currently with $6/TB egress.
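To make that last claim concrete, here is a small sketch comparing a SNO’s monthly payout per the rates named above (current $1.5/TB storage and $6/TB egress, versus proposed $2.5/TB storage and $0/TB egress). The node size and the "cold storage" egress volume are hypothetical illustrations, not measured figures:

```python
def sno_payout(stored_tb: float, egress_tb: float,
               storage_rate: float, egress_rate: float) -> float:
    """Monthly payout: flat per-TB-stored plus per-TB-egress."""
    return stored_tb * storage_rate + egress_tb * egress_rate

stored = 10.0   # TB held by a node (hypothetical)
egress = 0.2    # TB/month actually served -- assumed low, cold-storage pattern

current  = sno_payout(stored, egress, storage_rate=1.5, egress_rate=6.0)
proposed = sno_payout(stored, egress, storage_rate=2.5, egress_rate=0.0)
print(current, proposed)  # 16.2 vs 25.0: with little egress, storage pay dominates
```

The point the sketch illustrates: when real egress is near zero, a higher storage rate with no egress pay can still beat the nominal $6/TB egress rate.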
And isn’t that what we face RIGHT now?
(Besides, no one said free egress for customers.)
Egress for SNOs is nominally $6/TB, but in practice there is almost no egress to be paid for.
The last few weeks the ingress has been good, because nodes are leaving (it was 26k, now 21.7k).
I’m switching my nodes’ VPNs to WireGuard right now, so they can win more races for ingress pieces.
My VPN in WireGuard mode has less upload, but fantastic download speed and latency.
So I will have faster ingress and low egress. Exactly what you feared.
I hope to fill my HDDs faster while offering lower egress availability at the same time.
Alexey! The situation You are describing is happening right NOW!
The very low egress in the network traffic says it all: they are using it as cold storage now.
I’m calling for implementing, ASAP, an audit of nodes’ upstream bandwidth, to make possible those changes that are crucial and vital for STORJ’s survival!
Next, I’m calling for implementing, ASAP, the piece scaling based on Whitepaper | 6.1 Hot files and content delivery (page 63),
IN ORDER to make the much-needed changes possible, more or less as I laid them out.
Alexey, can You somehow make the text of this post visible to the readers of the "Update Proposal for Storage Node Operators - Open for Comments" thread in the Announcements?
You may post a link to your post there, or I can move your post there.
Yes, even then. It’s a balance: file reliability as a result of redundancy covers just that single risk factor. Let’s say that factor ensures 11 nines of reliability. Perhaps a coding mistake leading to data loss is 13 nines of reliability. Would it then make sense to go for a redundancy reliability of 20 nines? Not even a little, because that is no longer the issue of concern. I guess I could understand a small settings range where it would make sense, but 1000 or even all nodes? Hell no. At that point you’ve made that aspect of risk less likely to happen than a world-ending apocalyptic event.
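The diminishing-returns point can be illustrated with a deliberately crude model: assume each piece fails independently with probability p before repair kicks in, so a (k, n) file is lost only if more than n − k pieces fail. All numbers below are hypothetical illustrations, not Storj’s actual durability model:

```python
from math import comb, log10

def p_file_loss(k: int, n: int, p: float) -> float:
    """P(more than n - k of n pieces fail), failures independent at rate p."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i)
               for i in range(n - k + 1, n + 1))

def nines(prob: float) -> float:
    """Reliability expressed as 'number of nines'."""
    return -log10(prob) if prob > 0 else float("inf")

p = 0.05  # hypothetical per-piece failure probability within a repair window
print(nines(p_file_loss(29, 80, p)))   # already a huge number of nines
print(nines(p_file_loss(29, 110, p)))  # more pieces: even more nines...
# ...but once redundancy risk is far below every other risk factor
# (bugs, satellite loss, etc.), the extra nines buy nothing in practice.
```

Under any such model the redundancy-driven loss probability quickly drops far below other risks, which is the argument against letting customers crank n up to 1000 or "all nodes".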
Not in the numbers discussed. See my response above.
First of all, it’s not just about the file size. Erasure coding uses CPU and RAM as well. Second, who has only 100MB of data to store? What if you wanted to store 100GB or 100TB. Even that is super small time for larger businesses.
Yeah, that would be a bad way to do it for sure. I would be okay with priority access for the job that creates more pieces, but blocking it is not a good way to do it. That’s a simple tweak though. The core concept is still solid.
I’m not opposed to an option to set higher availability from the start, but I would be opposed to marketing it as a higher reliability option. If it’s sold as a way to be able to serve large demand, I’m perfectly fine with it, but I still think even that should be dynamic in some way. So you don’t end up with customers setting a file to high demand and leaving all those pieces around while that demand is long gone.
I think it should not be put on the customer to make a determination of the number of pieces. Storj may not have it now, but it can be built. Why not provide the customer with an option to set files to scale pieces on demand? Just a single switch, instead of forcing the customer to manage all the Reed Solomon settings. They don’t need that hassle.
Btw. I’ve stayed away from the pricing discussion here for a reason. I think all the suggestions have been made and there is not much left to say about it. I hope Storj Labs is considering the suggestions and we’ll see where they land. I’m more interested in the technical side.
If only this scaling could act fast, in real time, and have a sense of falling demand, after which it would slowly reduce the surplus of pieces, saving storage cost.
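A policy like that could be sketched roughly as follows. Everything here is a hypothetical illustration, not an existing Storj mechanism: scale up immediately when demand spikes, then decay the surplus slowly once demand falls, so pieces don’t linger forever but the system also doesn’t thrash:

```python
def next_piece_count(current_n: int, base_n: int, max_n: int,
                     needed_n: int, decay_step: int = 8) -> int:
    """Scale up instantly on a demand spike; shed surplus pieces
    gradually once demand falls, to save storage cost."""
    if needed_n > current_n:
        return min(max_n, needed_n)             # spike: add pieces right away
    return max(base_n, current_n - decay_step)  # cool-down: decay gently

n = 80                                  # normal piece count for the file
n = next_piece_count(n, 80, 1000, 240)  # video goes viral: jump to 240
print(n)                                # 240
for _ in range(3):                      # demand gone: 240 -> 232 -> 224 -> 216
    n = next_piece_count(n, 80, 1000, 80)
print(n)                                # 216, drifting back toward 80
```

The asymmetry (fast up, slow down) is the key design choice: a demand spike must be met immediately, while trimming surplus can safely take days.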