Being in temp, I am now running
for i in *; do rm "$i"; done
This seems to be doing something; at least no complaints so far.
find temp/ -mtime +3 -type f -delete
Make sure you're in the right folder. This will delete all files in temp/ older than 3 days.
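If you want to double-check before deleting, you can do a dry run first. Rough sketch (the absolute path is just the example used further down in this thread, swap in your own node's temp path):

# Dry run: only list the files that would be deleted
find /mnt/storagenode/node/storage/temp/ -mtime +3 -type f -print

# If the list looks right, run it again with -delete instead of -print
find /mnt/storagenode/node/storage/temp/ -mtime +3 -type f -delete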
Yeah, that's so scary when doing deletes. It is so easy to mess everything up beyond repair.
Ok the thing is empty now. And node is starting up fine.
I have no idea why it got so big.
Then it's best to work with absolute paths like /mnt/storagenode/node/storage/temp/. And also be careful with using variables in scripts: I once had some software delete everything starting from / because, after an update, the variable with the temp folder was empty. Lesson learned the hard way…
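If you script it anyway, the shell's :? expansion is one way to guard against exactly that: it aborts instead of letting an empty variable turn the path into /*. A minimal sketch (TEMP_DIR is just an illustrative name):

#!/bin/sh
# Illustrative variable name; point it at your node's temp folder
TEMP_DIR="/mnt/storagenode/node/storage/temp"

# If TEMP_DIR ever ends up unset or empty, the :? guard stops the script
# here instead of expanding the command into rm -rf /*
rm -rf -- "${TEMP_DIR:?TEMP_DIR is empty, refusing to delete}"/*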
In Windows, you just point and click. It's hard to mess things up, if you also enabled Recycle Bin.
GUI Power!
⦠Stop throwing rocks at me
Wow, that's big. Mine was nearly empty even before manual deletion.
~100 MB per node x 20,000 nodes = big eyes
It's not that hard.
I have a temp of 12 GB. Do I need to manually clean it?
It does not get deleted automatically, so yes, you have to delete it manually. Just make sure your node is stopped before you do it.
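For a typical Docker setup that could look roughly like this (just a sketch; "storagenode" is the usual container name and the path is the example from earlier in the thread, so adjust both to your own setup):

# Stop the node so nothing new gets written to temp while you clean it
docker stop -t 300 storagenode

# Remove the leftover temp files
rm -rf /mnt/storagenode/node/storage/temp/*

# Start the node again
docker start storagenode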
What would be the cause of this? Why aren't temp files deleted automatically? I believe this is not the normal behavior of the storagenode software. Am I right?
Damn! Big job for big node operators with hundreds of nodes.
When you stop the node, or there is some error, a file may have been only partly written, and nothing will delete it. That's all.
An abrupt node shutdown due to a hardware issue, a power outage, or any other reason the file couldn't be copied to its right place would lead to such files being left in temp.
As suggested above, it should be deleted automatically, but it's not a priority as of now.
Yes
So, to summarise what I understand:
- when a piece of a file is uploaded to my node, it is buffered in RAM according to the space set for buffering (filestore buffer 4 MiB…);
- then those 4 MiB are moved from RAM to the HDD, into the temp folder;
- then from those 4 MiB, the exact piece is extracted at its real size (2, 3, 4 MiB, etc.) and moved to the final storage folder, where it is audited and downloaded on request.
Is this correct? Or do I not understand anything about the storage mechanisms?
And as an addendum to the previous responses, I get that finding older files in the temp folder means bad node behavior (problems, outages, etc.), and they won't be encountered on healthy, normal nodes with UPSes and good hardware.
I found one node with 200 GB of temp files. I have 20 TB of HDD and 19 TB of space for Storj… Before disaster strikes, I suggest adding this manual survival activity to the docs.
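For anyone who wants to check their own nodes before it gets that far, this is roughly how to spot it (same example path as above, adjust per node):

# How much space temp is currently using
du -sh /mnt/storagenode/node/storage/temp/

# How many stale files (older than 3 days) are sitting in there
find /mnt/storagenode/node/storage/temp/ -mtime +3 -type f | wc -l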
Thanks
When the node starts, there is a process that empties the trash.
Wouldn't that be the perfect time to empty temp as well? (Or first move it to trash and let the existing process handle the deletion.)
I think that would be an unnecessary overhead. Adding the temp folder to the trash-emptying process seems like a better idea.
I cleaned the temp folder on all my nodes. From what I've seen, when you stop the node from the CLI, all the files from today are gone, so what remains are lost files that must be deleted. I believe you don't need to filter anything: just stop the node, wait 1 min, and delete everything in the temp folder. But maybe I'm wrong, and there could be files from 1-3 days old that should not be deleted…