So we’re talking about being vetted on all but one satellite. I wouldn’t consider that a problem. You’ll get traffic from all the others when free space becomes available.
StorjLabs would eventually drop a customer’s data in this case, after a long grace period during which access to their data would be locked, I guess.
But when a customer’s entire stored data gets deleted, I don’t believe it would free up that much space on your node, because their data is probably spread among hundreds if not thousands of nodes all over the world. So it wouldn’t leave your node with a sudden gap of 2 TB of free space. A few GB, maybe.
That’s my 2cts anyways
But not the full traffic. I think that being able to reserve some space (say 10 GB total) only for satellites with <100 audits would be useful for smaller nodes. A new satellite pops up, the otherwise-full node uses the reserved space to store some data for it, so it can get the 100 audits faster. If a customer deletes some data and there is free space available on the node, the node would get data from all satellites, not just the older ones. A rough sketch of what I mean follows below.
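Something like this, as a Python sketch. To be clear, none of these names (`RESERVED_VETTING_SPACE`, `accept_upload`, the `Node` fields) exist in the real storagenode code; this only illustrates the proposed accept/reject decision for incoming pieces:

```python
from dataclasses import dataclass, field

# Hypothetical sketch of the "reserved vetting space" idea above.
# None of these names exist in the real storagenode code.

RESERVED_VETTING_SPACE = 10 * 1024**3   # the suggested 10 GB reserve
VETTING_AUDIT_THRESHOLD = 100           # audits needed to finish vetting

@dataclass
class Node:
    allocated_space: int
    used_space: int
    audits: dict = field(default_factory=dict)  # satellite -> audit count

def accept_upload(node, satellite, piece_size):
    free = node.allocated_space - node.used_space
    if free - piece_size >= RESERVED_VETTING_SPACE:
        # The reserve stays untouched: accept uploads from any satellite.
        return True
    # Only the reserved slice is left: accept pieces exclusively from
    # satellites the node is not yet vetted on.
    if node.audits.get(satellite, 0) < VETTING_AUDIT_THRESHOLD:
        return free >= piece_size
    return False

# An otherwise full 2 TB node: only unvetted satellites may dip into the reserve.
node = Node(allocated_space=2 * 1024**4,
            used_space=2 * 1024**4 - RESERVED_VETTING_SPACE,
            audits={"us-central-1": 120})
print(accept_upload(node, "us-central-1", 2 * 1024**2))    # False: already vetted
print(accept_upload(node, "europe-north-1", 2 * 1024**2))  # True: still vetting
```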
That really does not matter much. At some point it will get deleted. At what rate, we don’t know, and the SNO has no control over it anyway. When data is gone, it is gone.
So my idea is that it’s better to be prepared than sorry. If you are vetted on all satellites at all times, you can only gain from that.
I have five nodes - four of which are full.
All five are vetted on the new satellite:
europe-north-1 2020-04-18 Paid 0.0002 USD 0.0005 USD 0.0007 USD 0.0009 USD
Status:OK (Audit score:1000) Held 0.0007 USD 0.0005 USD 0.0002 USD 0.0000 USD
So I’d say yes, full nodes get vetted for new satellites.
Old satellites are the “problem”:
stefan-benten 2020-03-22 Paid 0.0061 USD 0.0122 USD 0.0183 USD 0.0244 USD
Vetting:19% (Audit score:1000) Held 0.0183 USD 0.0122 USD 0.0061 USD 0.0000 USD
Thanks for stating this. I think the discussion has evolved a bit, as it became apparent that a node keeps getting vetted despite being full, but only if it has data from the corresponding satellite. If a satellite cannot put data on a node because the node is full, then no vetting can take place for that satellite.
So, in continuation of my initial question, I think there should be a way to somehow prevent vetting from being blocked by capacity, maybe like my suggestion here: Test Data on the Storj Network and Surge Payouts for Storage Node Operators
No. The new satellite was just an example. If Storj added more satellites, as they intend to, the node would not get vetted on those either. A SNO can never tell when, or how many, satellites will suddenly show up. The smaller node is not yet vetted on any satellite. From the logs I can see there have been audits, but as the small node is full, none from the DDS satellite.
Now I can see exactly what I had foreseen: data got deleted, and while I can see ingress on the nodes that have already been vetted, there is none to replace the deleted data on the node that has not been vetted yet.
Also, sudden disqualification or suspension can happen. As foreseen as well, it has just happened to some.
As a SNO can never tell when data will be deleted or how much, the proper way to be prepared for such a situation is to be vetted on as many satellites as possible. Therefore, vetting should not be artificially halted when a node is full.
I stick to my suggestion to reserve some vetting space if a node does not (yet) have data, or has only little data, from every available satellite. With such an implementation, vetting would continue even when a node is full, which would be beneficial to SNOs.
I guess we just disagree. I don’t think it’s that much of a problem if vetting takes a few extra weeks on some satellites. You should be thinking about this long term anyway. I’m curious though, now that deletes are happening, what percentage of your shared space is now available? Is it filling up again from satellites that have already vetted your node?
The only concern I could see is that you’re not building up any held-back amount on those satellites. Good for you, since you will get a larger percentage paid out when the satellite finally does give your node some business. But bad for when repairs get triggered and those costs aren’t covered. None of that should be the SNO’s concern though.
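For context, this is roughly how the held-back tiers play out over a node’s first months on a satellite; a minimal sketch based on the published 75/50/25/0% schedule, with an invented 10 USD monthly earnings figure:

```python
# Minimal sketch of the published held-back tiers: months 1-3 hold 75%
# of earnings, months 4-6 hold 50%, months 7-9 hold 25%, and from month
# 10 on nothing is held (half of the accumulated held amount is returned
# around month 15).

def held_percent(month):
    if month <= 3:
        return 0.75
    if month <= 6:
        return 0.50
    if month <= 9:
        return 0.25
    return 0.0

monthly_earnings = 10.00  # invented USD earned on one satellite per month
for month in (1, 4, 7, 10):
    held = monthly_earnings * held_percent(month)
    print(f"month {month:>2}: paid {monthly_earnings - held:5.2f} USD, "
          f"held {held:5.2f} USD")
```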
I have had a full node for a long time. Despite today’s data deletion (now 31.7 GB free), vetting by the europe-north satellite still does not start …
We currently believe the satellite is not fully operational yet.
Maybe @stefanbenten can shed some light on this.
It is fully operational, just currently not very busy. But as a SNO, you should not expect to get a fixed amount of traffic from each satellite. Each satellite can and will operate totally differently. This is not a synthetic network where every part behaves the same.
In terms of the features you mentioned before, I would highly appreciate them being put onto the Ideas Portal. Forum posts are typically not the best way of highlighting those wishes.
That’s because there is still very little happening on that satellite. My node has only seen about 150 MB of ingress so far this month, despite being vetted on that satellite. You’re not missing out on anything there. In fact, my node is vetted on all satellites, yet the stored amount is going down atm. As soon as there is more action on that satellite, your node will get vetted quickly as well.
I thought that portal was going to be phased out in favor of the new feature requests area on the forum? DCS feature requests - voting - Storj Community Forum (official)
I might have miscommunicated; my main point was that just throwing such ideas into a thread is not the best way to be heard.
Do you have a little bit more activity from that satellite as well since yesterday?
The vetting process for new satellites is far from optimal.
In April, all my nodes except one got vetted within a few hours after the new satellite was deployed (= 100 audits per node in a few hours). The remaining node was full and, of course, not getting any data or audits.
After the May data wipe, it started to receive data from the new satellite, but since this satellite manages many TBs of data across the network, it is very difficult to get enough audits to complete the vetting process.
Of course, it is expected behaviour, but it seems odd to me.
Time-based limits are used for graceful exit and the withholding model. Why can’t they be used for the vetting process too?
In order for a node to get vetted on a particular Satellite, it must store data from that specific Satellite.
The process by which a Satellite verifies that the data stored on the node is still there is called an Audit.
The Audit ensures that the data stored on the node has not been tampered with, and that it’s accessible on the Storj network.
If your node has received data from all of the Satellites available on the Storj network before it got full, then it will eventually get vetted on all of those Satellites. (The more data it stores from a particular Satellite, the higher the chance that the node will be selected for an Audit from that Satellite.)
If your node is already full and a new Satellite is added to the Storj network, it won’t receive data from that Satellite, and it won’t get vetted on that new Satellite, unless some of the older files from other Satellites are deleted, freeing space to make room for the data from the new Satellite.
Alternatively, if your node is full, but you haven’t allocated the full space of your hard disk yet, increasing the allocated space that Storj is allowed to use is also a way to make room for the data from the new Satellite.
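To put a rough number on that parenthetical: if audits pick stored segments uniformly at random, the expected audit rate scales with the fraction of the Satellite’s segments your node holds a piece of. A back-of-the-envelope sketch, with all figures invented:

```python
# Back-of-the-envelope only: assumes the Satellite picks one stored
# segment uniformly at random per audit. All figures are invented.

audits_per_day = 5_000        # hypothetical segment audits issued daily
total_segments = 10_000_000   # hypothetical segments the Satellite tracks

# A node stores at most one piece per segment, so its chance of being
# touched by a random segment audit is (segments it holds a piece of)
# divided by (total segments).
for node_segments in (1_000, 100_000, 1_000_000):
    share = node_segments / total_segments
    expected = audits_per_day * share   # expected audits per day
    days_to_vet = 100 / expected        # 100 audits finish vetting
    print(f"{node_segments:>9} segments -> ~{expected:5.1f} audits/day, "
          f"~{days_to_vet:.1f} days to 100 audits")
```

So a node holding ten times as many pieces from a Satellite can expect to finish its 100 audits roughly ten times sooner.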
Thanks for explaining the vetting process designed by Storj.
What is the motivation for having the difficulty/time of getting vetted scale with the amount of data managed from a Satellite?
There’s no motivation for that; it’s just the effect of auditing segments at random: if you hold a smaller percentage of the total, the chance that your node holds the segment being audited is smaller as well.
Now, I believe there is already an additional audit process that audits unvetted nodes first. But since auditing a segment involves auditing all 80 nodes holding pieces for that segment, even this still audits nodes with more data more often.
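A tiny simulation of that effect (purely illustrative, not the actual satellite audit logic): pick segments uniformly at random and count how often each of two nodes is touched; the node holding ten times the segments gets audited about ten times as often:

```python
import random

# Purely illustrative: two nodes hold pieces for random subsets of
# segments; auditing segments uniformly at random touches the larger
# node proportionally more often.

random.seed(1)
SEGMENTS = 100_000
small = set(random.sample(range(SEGMENTS), 1_000))    # holds 1% of segments
big   = set(random.sample(range(SEGMENTS), 10_000))   # holds 10% of segments

hits = {"small": 0, "big": 0}
for _ in range(50_000):                               # 50k random segment audits
    seg = random.randrange(SEGMENTS)
    if seg in small:
        hits["small"] += 1
    if seg in big:
        hits["big"] += 1

print(hits)  # roughly {'small': 500, 'big': 5000}
```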
I will check how this works. Thanks.
Found the doc for this.