Can the files present in the temp and trash folders under storage be deleted?
Some of them are really old, about 2-3 years.
This was already discussed: How do I empty the trash?
OK, this is related to the trash folder, but what about the temp folder?
As you can see, some files are from November:
-rw-------. 1 root root 4194304 Nov 11 07:32 blob-998695253.partial
-rw-------. 1 root root 4194304 Nov 10 11:52 blob-998871347.partial
-rw------- 1 root root 12544 Apr 2 00:06 blob-999067344.partial
-rw-------. 1 root root 4194304 Nov 5 2022 blob-999069716.partial
-rw-------. 1 root root 4194304 Nov 5 2022 blob-999175765.partial
-rw------- 1 root root 4194304 Mar 28 23:49 blob-99919773.partial
-rw-------. 1 root root 4194304 Nov 4 2022 blob-999208086.partial
-rw------- 1 root root 4194304 Apr 2 00:07 blob-999338264.partial
-rw-------. 1 root root 4194304 Nov 10 11:55 blob-999343786.partial
-rw-------. 1 root root 4194304 Nov 11 09:20 blob-999405743.partial
-rw-------. 1 root root 4194304 Nov 10 12:00 blob-999503634.partial
-rw------- 1 root root 4194304 Apr 2 00:07 blob-999762641.partial
-rw------- 1 root root 4194304 Apr 2 07:05 blob-999925502.partial
The temp folder was discussed too, recently. In short, you can stop the node, delete everything in the temp folder, then start the node again.
Stopping the node releases the files it is actively using, so whatever remains afterwards are lost files/pieces. For extra safety, you could delete only files in the temp folder that are older than 3 days.
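The age-based cleanup above can be sketched as a small shell helper. The path is an assumption (adjust it to your node's storage location), and the `*.partial` filter and 3-day threshold come from the advice above; run the deleting variant only while the node is stopped.

```shell
#!/bin/sh
# NODE_TEMP is an assumed path -- point it at your node's storage/temp folder.
NODE_TEMP="${NODE_TEMP:-/mnt/storagenode/storage/temp}"

# Dry run: list *.partial files not modified in the last 3 days, without deleting.
preview_temp() {
    find "$1" -type f -name '*.partial' -mtime +3 -print
}

# Actual cleanup: same filter, but removes the matched files.
# Only run this while the storagenode is stopped.
cleanup_temp() {
    find "$1" -type f -name '*.partial' -mtime +3 -print -delete
}
```

Running `preview_temp "$NODE_TEMP"` first lets you sanity-check the list before anything is removed.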
Don’t touch the trash folder. You could get disqualified.
I would add that the cost/reward of this activity is effectively zero: there is no reason to spend time analyzing storagenode internals.
If there was a bug that caused terabytes of data to leak — it would have been fixed and mess cleaned up.
Otherwise, those partial pieces represent a negligible amount of space. It’s a 100% waste of human time to research, post, discuss, and delete them.
Node works? Let it be.
I kinda agree but also disagree with your statement.
For smaller amounts it is fine to leave them there, but if they add up to 10GB+ it’s worth deleting them, as that frees up space for new ingress and thus potential egress traffic.
But deleting the trash is not a good option, as it is used by Storj to bring data back if they screwed something up.
Storj pays $1.50/TB, so you would miss out on 1.5 cents for 10GB of wasted space. And that’s only if your node is full.
If you don’t investigate, then you don’t know how much space you are wasting, or whether your node is working as it should:
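Such a check is cheap to do. A minimal audit sketch, assuming the same hypothetical node path as elsewhere in this thread, reports the temp folder's total size and how many partial blobs it holds:

```shell
#!/bin/sh
# NODE_TEMP is an assumed path -- adjust to your node's layout.
NODE_TEMP="${NODE_TEMP:-/mnt/storagenode/storage/temp}"

# Print total size of the temp folder, then the number of *.partial files in it.
report_temp() {
    du -sh "$1"                                    # total space held by leftovers
    find "$1" -type f -name '*.partial' | wc -l    # count of partial blobs
}
```

Calling `report_temp "$NODE_TEMP"` takes seconds and tells you whether there is anything worth cleaning at all.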
A few things here:
As I said above, my job as a storage node operator ends at providing compute, storage, and network connectivity to the storagenode. I have notifications configured to monitor those, in case anything goes down for more than a few hours. So, my side is working as it should.
Anything the node is doing internally is not my concern; that is what the Storj development team is responsible for. If the storagenode leaks enough storage space to affect the Storj service, enough to prioritize fixing it over other tasks, the development team will address it. I trust the Storj development team to ensure the node works as it should.
And lastly, remember, it’s a commercial interaction, not a social one. If it were a volunteer project, I would not mind donating extra time to debugging the node. But since we’re paid to host a node, this replaces social motivation with commercial motivation: and now, if, let’s say, 1TB of space is leaked, that’s $1.50/month. I can definitely come up with a better use for my time than digging into the internals to recover that buck and a half.
So, from a purely commercial perspective, once I’ve made resources available to the node, my job is done. If the node stops working efficiently, the compensation would make even less financial sense, and I’ll shut my nodes down. Today it barely pays for the electricity for the drives. So it’s up to the dev team to prioritize work, and so far they do excellent work on that front.
From the social perspective, I like the project and want to contribute. If there were a setting “disable compensation, I want to donate resources, don’t send me payments”, I would happily check it, become more involved in the project, and feel better about doing so. I might even run performance analysis on my FreeBSD system and other experiments, precisely because it would be a pure volunteer contribution — in a social context, not a commercial one.
TL;DR: $1.50/month is not adequate compensation for lifting a finger.
Financial compensation is not my point; how you see it is entirely up to you. What I am saying is that if you don’t look, you won’t see. For me it was worth checking: I found almost 500GB of useless junk in that folder and got rid of it. Otherwise it might have accumulated even more, and now that I am moving nodes, I have about 500GB less data to move. It also shows that, under certain circumstances, the software obviously does not work as expected, and that the junk remaining in that folder should be deleted from time to time.
I have to agree. However, the feedback is important too - we could introduce a bug and not catch it. So the Community is helping a lot! Thank you!
Please, keep posting about your findings.
Please take a look at
Do you have a UPS? Does your storage node crash for some reason (low RAM?)? Do you restart the storagenode often?
Files left behind because the node was brutally murdered do not count as “does not work as expected”.
500GB? With a G? I don’t think it’s possible unless you reboot your node daily…
I’ve just checked my 10-month old node:
% du -hd 0 /mnt/pool1/storagenode/temp
223M    /mnt/pool1/storagenode/temp
Looking at the list of files, they track the power outages, when the UPS daemon force-ended the storagenode process as part of the jail shutdown.
On topic: the Storj team could probably include a periodic cleaner task in the container to get rid of known-useless files (old temp files, abandoned partial uploads, etc.), since not all hosts shut down gracefully or run on appropriate hardware in the first place, and this sort of trash will keep accumulating.
(I’ll add that to my FreeBSD setup script for the community’s benefit shortly.) But again – priorities. I’d rather they spend time on something more useful than hunting down and deleting inconsequential amounts of trash.
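Until such a built-in cleaner exists, a host-side cron entry can approximate it. This is only a sketch under assumptions: the path is hypothetical, the schedule is arbitrary, and the age filter mirrors the 3-day safety margin suggested earlier in the thread.

```shell
# Hypothetical crontab fragment (crontab -e): every Sunday at 04:00, remove
# *.partial files in the node's temp folder that are older than 3 days.
# Path and schedule are assumptions -- adjust to your setup; safest to run
# during a maintenance window while the node is stopped.
0 4 * * 0  find /mnt/storagenode/storage/temp -type f -name '*.partial' -mtime +3 -delete
```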
In addition, I’m sure there is some amount of trash in the main blobs folder, left when the node was shut down after a blob had already been moved but not yet reported as successful.
I’m actually not sure whether the storagenode supports a graceful stop – stop accepting new uploads, finish the current ones, and clean up after itself before quitting. On some OSes, temporary files can be opened with auto-deletion on close – perhaps that mechanism should be used here as well.
It seems you have not read the link I have already posted:
So yes, that was 446 GB of data, and I have no idea how it happened. I did not have issues with that node, and I did not restart it often. That’s why I am saying I am glad I checked.
I have yet to check my other nodes, but it will certainly be interesting to see what they have left in their temp folders.
That would be a really good thing to do. One less thing a SNO would have to take care of.