btw how long does the vetting process take for 2 TB? just approx
Vetting is extremely fast atm. The node I started yesterday is already vetted on the US1 satellite.
It should take at least a month.
This doesn't sound right; shared with the team.
They seem to be aware already:
BTW, we noticed months ago that vetting seemed to complete too fast, and even made some suggestions about what could be implemented:
As you can see, many Community suggestions were implemented this way or another, so please keep doing it.
oh well… mine was vetted on us1.storj.io yesterday after bedtime, and uptime so far is 18 hours; if we subtract the time after midnight, that is ~8 hours,
agreed that this is very fast, and yes, maybe too fast
one metric I would like to see added is quality of the connection; uptime is one thing, speed is another, and good old latency etc…
@Alexey I forgot where it was mentioned, but why do we have a "trash" portion of the used disk space? Why not just delete it straight away?
To protect the network from the accidental deletion due to a bug. If we detect a bug, we can restore pieces and no customers data is lost.
Because lost data = a lost customer = lost reputation = the network is lost.
There is a 7-day delay before it is permanently gone. We have already used it to avoid data loss.
and if the customer F's up, then they can also get the file back
Thanks for the explanation, it makes perfect sense now; before, it was just an annoyance to me
They cannot. This is not a user-facing feature (yet); it's only for network safety at the moment.
So if the user deletes a file, it goes out instantly and not into trash?
Trash is unpaid, btw.
If the user deleted something, it is gone for the user.
Maybe they can recover it via a support ticket.
The system holds it in trash for 7 days, for its own safety.
I know, I get that trash is kept in case Storj deletes things by mistake,
but if the user does it,
then why keep it in trash?
That implies that if the user deletes it, it should not end up in trash. Unless they make it a feature, in which case it should of course be in trash; but then Storj could also charge to keep it in trash, and, a huge maybe, that could trickle down to the nodes.
This is how deletion is implemented. Earlier it was direct deletion, which was very slow for the customer, so we decided to just mark pieces for deletion and let a later garbage collection chore handle them.
And we implemented the trash feature because garbage collection uses Bloom filters to collect pieces. This is a probabilistic model, so if we introduce a bug, the system could generate a zero Bloom filter, which would remove all data, not only deleted data. The network must have this protection. As I said earlier, we have already used this feature several times and it helped the network survive, and not only in the situation with an incorrect Bloom filter. So it was the right move anyway.
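The Bloom-filter danger described above can be made concrete with a toy example. This is a minimal sketch, not Storj's implementation; the filter size, hash count, and all names are assumptions for illustration. A Bloom filter describing the *live* set can give false positives (keeping garbage a bit longer) but never false negatives, except when a bug produces an all-zero filter, which claims nothing is live, so GC would trash every piece. The trash holding period is the safety net for exactly that case.

```go
package main

import (
	"fmt"
	"hash/fnv"
)

// bloomFilter is a tiny fixed-size Bloom filter (illustrative only).
type bloomFilter struct {
	bits [1024]bool
}

// hashes derives three bit positions for a piece ID.
func (b *bloomFilter) hashes(pieceID string) [3]uint32 {
	var out [3]uint32
	for i := 0; i < 3; i++ {
		h := fnv.New32a()
		fmt.Fprintf(h, "%d:%s", i, pieceID)
		out[i] = h.Sum32() % uint32(len(b.bits))
	}
	return out
}

// Add marks a piece as live.
func (b *bloomFilter) Add(pieceID string) {
	for _, idx := range b.hashes(pieceID) {
		b.bits[idx] = true
	}
}

// MayContain reports whether a piece might be live.
// False positives are possible; false negatives are not.
func (b *bloomFilter) MayContain(pieceID string) bool {
	for _, idx := range b.hashes(pieceID) {
		if !b.bits[idx] {
			return false
		}
	}
	return true
}

// collectGarbage moves every piece the filter does not claim as live
// into trash rather than deleting it outright.
func collectGarbage(filter *bloomFilter, stored []string) (kept, trashed []string) {
	for _, id := range stored {
		if filter.MayContain(id) {
			kept = append(kept, id)
		} else {
			trashed = append(trashed, id)
		}
	}
	return kept, trashed
}

func main() {
	live := &bloomFilter{}
	live.Add("piece-A")
	live.Add("piece-B")
	kept, trashed := collectGarbage(live, []string{"piece-A", "piece-B", "piece-C"})
	fmt.Println("kept:", kept, "trashed:", trashed)

	// A zero (empty) filter claims nothing is live, so GC would move
	// every piece to trash -- hence the trash retention safety net.
	empty := &bloomFilter{}
	_, trashed = collectGarbage(empty, []string{"piece-A", "piece-B", "piece-C"})
	fmt.Println("with zero filter, trashed:", len(trashed))
}
```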
As a side effect, we have the ability to manually restore data accidentally deleted by the user (limited to 7 days maximum, usually less). But since it's not fully automated yet, we do not promote this feature. Right now it's only for customers with a contract. And there is still no guarantee that it can be restored, due to operators who decided to delete the trash (those will likely be disqualified due to failed audits, but that will not help the customer). With a hashstore backend it would be very hard to delete it manually, though.
There are hidden settings for hashstore:
ExpiresDays uint64 `help:"number of days to keep trash records around" default:"7" hidden:"true"`
DeleteTrashImmediately bool `help:"if set, deletes all trash immediately instead of after the ttl" default:"false" hidden:"true"`
The result will be the same, I guess: disqualification when we need to restore something. There is a difference between using an option and deleting existing pieces manually before this option is used.
I am also not sure that this option actually works; we had a similar option for piecestore, but it never worked.
BTW… what happens when it says overused?
node ID: 14VL5gehjW4KB314meez4UKRQVh8v79Migd8VTp4jqX2s7efLA
You may check it on the multinode dashboard; if it shows less than or equal to 5 GB of free space, the node is full, so there is no new ingress.
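The free-space rule above can be sketched as a one-line check. This is an illustrative sketch only: the exact threshold constant, its GB-vs-GiB interpretation, and the function name are assumptions, not Storj's actual code.

```go
package main

import "fmt"

// minFreeBytes mirrors the ~5 GB free-space threshold mentioned above
// (assumed to be decimal gigabytes; illustrative only).
const minFreeBytes = 5_000_000_000

// acceptsIngress reports whether a node with the given free space
// would still receive new data.
func acceptsIngress(freeBytes int64) bool {
	return freeBytes > minFreeBytes
}

func main() {
	fmt.Println(acceptsIngress(200_000_000_000)) // plenty of free space: prints true
	fmt.Println(acceptsIngress(4_000_000_000))   // below the threshold, node is full: prints false
}
```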