The next software upgrade includes a change that will by default send piece deletions to the trash rather than immediately removing them from disk. This is a temporary precaution to avoid failing audits for pieces that were deleted between satellite database backups if we have to restore from a previous backup. Each node still has the option to turn this configuration off, but we highly recommend you keep it on. It is very unlikely that we will have to restore from a satellite database backup (we have never had to restore from backup thus far), but in the event we do, this will be the only way for you to avoid audit failures and potential disqualification.
Again, we understand this is not an ideal solution but we will be implementing a better solution in the coming months.
You may check out the change here
Just a question - If a database backup was restored would the piece deletions be moved out of trash and back into the original storage location?
Then you have to pay (for trash storage) for your bad ideas.
Why do you want to be paid twice for storage ?
Why not store the list of recent deletes in tardigrade? Then they can be re-applied after the backup is restored
Yes that is our short term solution while we work on a better solution.
That is what the better solution would do. Storage nodes just keep the delete message, and if the satellite requests an audit for a piece that was deleted, the storage node can prove it with a signed delete message. The problem right now is that we haven't had the time to implement it.
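A rough sketch of how that "signed delete" idea could work (this is not the real Storj protocol, just an illustration of the concept: the satellite signs each delete order, the node keeps the signed message, and later uses it to answer an audit for that piece). An HMAC stands in here for the satellite's real public-key signature, and all names are hypothetical:

```python
import hashlib
import hmac

# Stand-in for the satellite's signing key (a real system would use
# public-key signatures so nodes cannot forge delete orders).
SATELLITE_KEY = b"satellite-secret"

def sign_delete(piece_id: str) -> bytes:
    """Satellite side: produce a signed delete order for a piece."""
    return hmac.new(SATELLITE_KEY, b"delete:" + piece_id.encode(),
                    hashlib.sha256).digest()

class StorageNode:
    def __init__(self):
        self.pieces = {}         # piece_id -> data
        self.delete_proofs = {}  # piece_id -> signed delete message

    def handle_delete(self, piece_id: str, signature: bytes):
        # Keep the signed message instead of just forgetting the piece.
        self.pieces.pop(piece_id, None)
        self.delete_proofs[piece_id] = signature

    def handle_audit(self, piece_id: str):
        if piece_id in self.pieces:
            return ("data", self.pieces[piece_id])
        if piece_id in self.delete_proofs:
            return ("deleted", self.delete_proofs[piece_id])
        return ("missing", None)  # a genuine audit failure

def satellite_verify(piece_id: str, proof: bytes) -> bool:
    """Satellite side: check the delete proof really came from us."""
    return hmac.compare_digest(proof, sign_delete(piece_id))

node = StorageNode()
node.pieces["piece-1"] = b"\x00" * 16
node.handle_delete("piece-1", sign_delete("piece-1"))

# After a database restore, the satellite may audit the deleted piece;
# the node answers with the signed delete message instead of failing.
status, proof = node.handle_audit("piece-1")
assert status == "deleted" and satellite_verify("piece-1", proof)
```

With this in place the trash would no longer be needed as an audit safety net, since the node can prove any deletion was authorized.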
Why twice? Deleted = not paid. Trash = free storage.
Don’t all databases support online/hot backups these days: do you need to bring the satellites down for them? (if the problem is that backups aren’t timely: isn’t the solution just to perform them more often?)
I have no problem with having a bit more trash hanging around: I think most of us still have tons of free space
Hot standby replicas are not backups, they are redundancy. There is a big difference.
Redundancy is for uptime – your hardware is malfunctioning or someone broke the primary config, so we switch to the replica.
Backups are for disaster recovery.
If a human accidentally runs a too-broad update/delete operation on the database, replication is going to happily ship the update to the standby and it won’t help you recover your mistake. Other forms of data loss could also be replicated to the standby replica.
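A toy illustration of the point above: replication faithfully applies every statement to the standby, including the accidental one, while an independent point-in-time backup does not (all names here are made up for the example):

```python
# Primary database and its hot standby replica.
primary = {"pieces": ["a", "b", "c"]}
replica = {"pieces": list(primary["pieces"])}

# Nightly backup: an independent copy frozen at this point in time.
backup = {"pieces": list(primary["pieces"])}

def apply(op):
    op(primary)
    op(replica)  # replication ships the same change to the standby

# A human runs a too-broad delete by mistake...
apply(lambda db: db["pieces"].clear())

assert primary["pieces"] == []  # data gone on the primary
assert replica["pieces"] == []  # ...and the standby replicated the mistake
assert backup["pieces"] == ["a", "b", "c"]  # only the backup can recover it
```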
So I guess I’m seeing the benefits of this new feature
As my node only has about 1.2 TB, that's like 20%-ish gone to Trash in the last 12 hours ha
Also, this still counts as used disk space, but isn't being paid for?
#Edit: only updated my docker to v1.30.xx at the weekend; the node was offline for about an hour while I updated Linux and rebooted - sure it's not linked, but now waiting to see if my audit drops from 100% >< maybe test data being removed?
I currently have 21 TB stored but trash is only 102 GB. Maybe some of your pieces already got repaired during your one hour of downtime and are now being deleted on your node?
yeah that’s what I’m thinking, but it’s not usually that aggressive, although 90-95% of my ingress at the moment is repair, so who knows… I guess satellites really busy rebalancing and I’m just unlucky, that the time my node was down, I had lots of audits… I wonder if there is a way for me to see what satellite all the trash is from… hmm let me go and investigate.
#edit lol another 60+ GB in 1 hour, at this rate I will be back to zero by tomorrow - I seriously hope my Linux update hasn't broken something, it was just a standard apk upgrade
#edit - logs look fine, just lots of
INFO piecedeleter delete piece sent to trash
Percentage of logs in the past 24 hrs for piecedeleter (some sats missing as the percentages have been rounded):
US1 - 67%
EU1 - 30%
AP1 - 3%
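For anyone else wondering which satellite the trash is coming from, a per-satellite count like the one above can be pulled from the logs with something like the sketch below (e.g. `docker logs storagenode 2>&1 | python3 trash_by_sat.py`). It assumes the storagenode's zap-style log lines end in a JSON payload containing a "Satellite ID" field; adjust the matching if your log format differs:

```python
import collections
import json
import re
import sys

def count_trash_by_satellite(lines):
    """Count 'sent to trash' delete lines per satellite ID."""
    counts = collections.Counter()
    for line in lines:
        if "delete piece sent to trash" not in line:
            continue
        # Grab the trailing JSON object with the structured log fields.
        match = re.search(r"\{.*\}\s*$", line)
        if not match:
            continue
        try:
            fields = json.loads(match.group(0))
        except json.JSONDecodeError:
            continue
        counts[fields.get("Satellite ID", "unknown")] += 1
    return counts

if __name__ == "__main__":
    for sat, n in count_trash_by_satellite(sys.stdin).most_common():
        print(f"{sat}: {n}")
```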
Hmm my trash is constant at 102 GB…
Did you check your logs?
yeah just looking now, something not right with my node, going to shut down so I can take a look - will hopefully limit any more data exodus ha
#Edit - can’t see anything obvious, just did a lucky reboot and will see if that fixes it…
This is really curious, these lines are not garbage collection, but normal deletes, unless they changed the wording for garbage collection to be the same (sure hope not). But I’m not seeing any of these. Barely any deletes at all lately.
I was actually thinking to myself the other day that since the change I haven't seen an increase in the amount of data in the trash. Pretty much the same levels as before, although I don't track it, so this is just an anecdotal observation: 1 GB / 3.55 TB and 500 MB / 1.74 TB.
Yeah there have been very few deletes recently.