Failed deletions? Files use space forever?

I just happened to have vacuumed my DBs a few days ago… I think for the first time since the DBs were split up in the first place. The size reduction was about 25%.
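For anyone who wants to try it, the process is roughly this — a sketch only; the node has to be stopped first, and the container name and paths are just placeholders:

# stop the node so nothing writes to the databases during the vacuum
docker stop -t 300 storagenode

# vacuum every .db file in the storage location (adjust the path to your setup)
for db in /mnt/storj/storage/*.db; do
    sqlite3 "$db" "VACUUM;"
done

docker start storagenode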

I guess we will have to settle for you then… xD
@mangoo I would also tend to agree with bright, I doubt…
hmmm I had a similar issue to this: my log showed a failed audit, and on closer inspection I found multiple downloads that failed due to “database is locked” errors, which seems to be a problem in v1.3.3

because I found more people who recently posted about the same issue…

can’t say if this is the same thing just showing up in a different way, but it could be…
maybe not… it doesn’t have the usedserialdb error in the log line…

Those are very different numbers than what you posted just prior. It looks more like you vacuumed the database files twice in fairly quick succession and posted the results from the second run, which I would not expect to be significant at all.

Yes and no.

It’s related to the properties of each delete/insert/update query. Delete queries create garbage; insert queries might reduce the total amount of garbage depending on whether they are able to reuse garbage records. Update queries might create or reduce garbage.

The longer a node is running since the last vacuum, the more queries will have been run, so the trend over time will point upwards. It’s not exactly linear, but linear is not a bad approximation.
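If you want an actual number rather than a trend guess, SQLite can report how many free (garbage) pages a database is carrying — something like this, where the path is only an example:

# free (reusable) pages vs. total pages and the page size;
# freelist_count * page_size is roughly what a VACUUM would reclaim
sqlite3 /mnt/storj/storage/bandwidth.db "PRAGMA freelist_count; PRAGMA page_count; PRAGMA page_size;"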

I don’t know that one’s specific hardware really plays into this at all; it’s not like SQLite uses different algorithms on SSDs vs HDDs, right?

Sure…

But the wall clock time has nothing whatsoever to do with how many queries the DB is getting.

I should also note that even I got the DB locked errors… at the very beginning of May… just before I vacuumed the DBs. Just one “ping” … errr… error only.

The difference between two wall clock times can indicate approximate usage, especially as the difference grows. If we’re talking about queries per minute, no… that’s too small of a window to make any sort of educated guess.

If we’re talking about months, then yes… I can fairly reasonably say that a node that hasn’t had its database vacuumed for 3 months is going to have quite a bit more garbage than one that was vacuumed a week ago.

Anyway, this discussion started due to a misunderstanding:

I read this as 288KB recovered after taking 6 hours to complete a vacuum, not 6 hours between vacuums. My point was that taking your node offline for 6 hours to recover 288KB is pointless, but this is not what you meant. And thus your follow-up makes sense in that context. I was replying to a statement that was never made because I misread your post.

I believe we are (mostly) in agreement after all.

That never happens on the Internet.

:slight_smile:

if it was required I’m sure they would have added it by now… automatic database vacuuming
lovely, now the network activity has basically bottomed out. Let’s hope that’s just somebody in a meeting at work having pressed the “pause tests” button, and that someone checks that nothing critical is broken xD

Exactly. My logs are getting huge. 475 MB after just a couple of days. I am wondering if adding something like
--log-driver local --log-opt mode=non-blocking --log-opt max-size=100m --log-opt max-file=4
to the docker command would be fine.
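i.e. something along these lines — just a sketch, with the existing ports/mounts/env flags left out as a placeholder and the image tag being whatever you already run:

docker run -d --restart unless-stopped --stop-timeout 300 \
    --log-driver local \
    --log-opt mode=non-blocking \
    --log-opt max-size=100m \
    --log-opt max-file=4 \
    ...existing -p / -e / --mount flags unchanged... \
    --name storagenode storjlabs/storagenode:beta

# and afterwards verify that the logging config actually took effect
docker inspect --format '{{ json .HostConfig.LogConfig }}' storagenode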

There are lots of options for the log problem, at least on Linux. You can use Docker with those options you mentioned, or redirect the log to a file on your HDD and use logrotate. You’ll find how to do that on Google.
No need for STORJ to implement anything for that.
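If you go the redirect-to-a-file route, a minimal logrotate setup would look something like this (the log path and limits are just examples; copytruncate because the node keeps the file open, and I’m assuming you pointed log.output in config.yaml at that file):

# keeps 4 rotations of up to 100 MB each, compressed
sudo tee /etc/logrotate.d/storagenode <<'EOF'
/mnt/storj/node.log {
    size 100M
    rotate 4
    compress
    missingok
    notifempty
    copytruncate
}
EOF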

On Windows I don’t know, but I read somewhere that there is some sort of log rotation there too.

Well, they would need to make sure it won’t break functionality; I don’t know what might rely on the log information (like the dashboard or something).
And at the least they’d need to mention it in the setup instructions. This is supposed to be a set-up-and-forget approach. For that, log control is really required, as Docker’s default is to not care about logs at all.

Nothing is relying on the logs. Only if you have errors might you want to check the logs yourself to find the problem. But the software doesn’t need the logs. (In fact, any software that relies on its own logs is a failure imho.)

Yeah, mentioning something about logrotate in the setup instructions might be an improvement. But I haven’t looked at those in a long time; no idea what they’ve added/changed.

this problem is old, but in v1.3.3 it’s visible in the log.

been noticing these this morning, doubt it’s useful for anything. Maybe we need a place to post weird logs that just sound so wrong… but seem so right…

anyways, seemed relevant to dump it in here…

2020-05-07T08:43:18.716Z        DEBUG   retain  About to delete piece id        {"Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Piece ID": "RUNKXWNICY33GS4TX7EG4GA2LOBPOHN6WOP323YBYU55OIFKYWEQ", "Status": "enabled"}
2020-05-07T08:43:19.286Z        DEBUG   retain  About to delete piece id        {"Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Piece ID": "34EMKRUJWTBF7Q2KINZ2KUMHH6NBX6MFTS3KCMU52PCRDEOWB7FA", "Status": "enabled"}
2020-05-07T08:43:19.404Z        DEBUG   retain  About to delete piece id        {"Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Piece ID": "43DEBFJLROSWUUTNP3R2L4EC55TISSYIJWGKMCZ5M5A4WOK4TU7Q", "Status": "enabled"}
2020-05-07T08:43:19.507Z        DEBUG   retain  About to delete piece id        {"Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Piece ID": "43NGZNXMFMUSWQE5VOT7Z4GNJGGX6IED7JWIDZWK3RZKKXWKGHCQ", "Status": "enabled"}

It’s just the garbage collection process. Why do you have your log level set to DEBUG?
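If that wasn’t on purpose, the knob should be log.level in the config (or the matching run flag) — something like this, assuming the usual config.yaml layout:

# in config.yaml
log.level: info

# or appended as a flag to the run command
--log.level=info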

what are logs for, other than keeping track of what’s going on…
I mean, I can always filter the DEBUG stuff out later, but I can’t turn it back on retroactively if I didn’t log it…
so if I have a critical problem I want solved, I might have more trouble tracing it back to whatever change caused it… if one can pin down a specific time when something changed, that’s a clue to help figure out what went wrong and, if nothing else,

maybe a way to prevent it in the future.
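and sorting the DEBUG lines out again later really is just a grep, e.g. (the log path is only an example):

# drop the DEBUG noise when reading back through a log dump
grep -vw "DEBUG" /mnt/storj/node.log | less

# or keep only the interesting levels
grep -Ew "WARN|ERROR|FATAL" /mnt/storj/node.log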

hmm that’s interesting, how long does the node actually keep the “deleted” garbage… seems kinda quick that it’s already showing up… so only 48 hours?

Can you elaborate on that?

just wondering what the delay is between when a deletion request is received / logged and when the node actually deletes the piece… if you know

There is a timeout for deletion and if pieces aren’t deleted in that time, GC takes care of it.

I will assume GC means ground control or command… clever, it makes sense to have something like that so people can look into the “file structure” / gears of the machine, see what’s happening, and deal with whatever the programming didn’t account for.

No Houston, it’s… :drum:

Garbage collection
