Large temporary folder

Being in temp, I am now running for i in *; do rm "$i"; done. This seems to be doing something; at least no complaints so far.

1 Like

find temp/ -mtime +3 -type f -delete

Make sure you're in the right folder. This will delete all files in temp/ older than 3 days.
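If you want to double-check before deleting anything, you can run the same `find` with `-print` first and only add `-delete` once the list looks right. The path below is a placeholder; point it at your node's actual temp folder:

```shell
TEMP_DIR="temp"        # placeholder: your node's temp folder
mkdir -p "$TEMP_DIR"   # demo only; a real node's folder already exists

# Preview which files would be removed (prints, deletes nothing)
find "$TEMP_DIR" -mtime +3 -type f -print

# Once the list looks right, rerun with -delete.
# Note: -delete implies -depth, so keep it as the last expression.
find "$TEMP_DIR" -mtime +3 -type f -delete
```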

2 Likes

Yeah, that's so scary when doing deletes. It is so easy to mess everything up beyond repair.

1 Like

Ok the thing is empty now. And node is starting up fine. :slightly_smiling_face:

I have no idea why it got so big.

Then it's best to work with absolute paths like /mnt/storagenode/node/storage/temp/. And also be careful with using variables in scripts. I once had some software delete everything starting from / because, after an update, the variable with the temp folder was empty. Lesson learned the hard way…
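A cheap safeguard against exactly that empty-variable disaster is the shell's `${var:?}` expansion, which aborts the script if the variable is unset or empty. A minimal sketch, with a made-up variable name and the path from above:

```shell
#!/bin/sh
# Hypothetical cleanup script; adjust the path to your own setup.
TEMP_DIR="/mnt/storagenode/node/storage/temp"

# "${TEMP_DIR:?}" makes the shell exit with an error if the variable
# is unset or empty, so this line can never expand to `rm -rf /*`.
rm -rf -- "${TEMP_DIR:?}"/*
```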

In Windows, you just point and click. It's hard to mess things up, if you also enabled the Recycle Bin.
GUI Power! :sunglasses:

… Stop throwing rocks at me :rofl:

1 Like

Wow, that's big. Mine was nearly empty even before manual deletion.
~100 MB per node x 20,000 nodes = big eyes

it's not that hard :wink:

I have a temp folder of 12 GB. Do I need to clean it manually?

It does not get deleted automatically, so yes, you have to delete it manually. Just make sure your node is stopped before you do it.

What would be the cause of this? Why aren't temp files deleted automatically? I believe this is not the normal behavior of the storagenode software. Am I right?

2 Likes

Damn! That's a big job for big node operators with hundreds of nodes.
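For many nodes, the cleanup could be scripted as a loop over the per-node temp folders. The layout below is purely hypothetical; adjust the glob to your actual mount points:

```shell
# Hypothetical layout with one temp folder per node:
#   /mnt/node1/storage/temp, /mnt/node2/storage/temp, ...
for tempdir in /mnt/node*/storage/temp; do
    [ -d "$tempdir" ] || continue   # skip if the glob matched nothing
    # delete leftover files older than 3 days, as suggested above
    find "$tempdir" -mtime +3 -type f -delete
done
```

Each node should still be stopped before its folder is cleaned.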

When you stop the node, or some error occurs, a file may be only partially written, and nothing will delete it. That's all.

An abrupt node shutdown due to a hardware issue, a power outage, or any other reason the file couldn't be copied to its right place would lead to such files being left in temp.

As suggested above, it should be deleted automatically, but it's not a priority as of now.

Yes

So, to summarise what I understand:

  • when a piece of a file is uploaded to my node, it is buffered in RAM according to the space set for buffering (filestore buffer 4MiB…);
  • then the 4 MiB are moved from RAM to the HDD, into the temp folder;
  • then from those 4 MiB, the exact piece is extracted at its real size (2-3-4 MiB etc.) and moved to the final storage folder, where it is audited and downloaded on request.
    Is this correct? Or do I not understand anything about the storage mechanisms :sweat_smile:
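If that model is roughly right, the last step would be the classic "write to temp, then move into place" pattern. A toy sketch of the idea only; this is not the actual storagenode code (which is written in Go), and all paths and names here are made up:

```shell
# Demo of the write-then-rename pattern in a throwaway directory.
demo=$(mktemp -d)
mkdir -p "$demo/temp" "$demo/blobs"

# 1. The incoming piece is first written into the temp folder.
printf 'piece data' > "$demo/temp/piece123.partial"

# 2. Only after the write completes is it renamed into final storage;
#    mv is atomic within the same filesystem. If the node dies between
#    step 1 and step 2, the partial file is left behind in temp --
#    exactly the leftovers this thread is about.
mv "$demo/temp/piece123.partial" "$demo/blobs/piece123"
```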

And as an addendum to previous responses, I get that finding older files in the temp folder means bad node behavior (problems, outages, etc.), and they shouldn't be encountered on healthy, normal nodes with UPSes and good hardware.

I found one node with 200 GB of temp files. I have a 20 TB HDD and 19 TB of space for Storj… Before disaster strikes, I suggest adding this survival-manual activity to the docs :sweat_smile:
Thanks

When the node starts, there is a process that empties the trash.
Wouldn't that be the perfect time to empty temp as well? (Or first move it to trash and let the existing process handle the deletion.)

3 Likes

I think that would be unnecessary overhead. Adding the temp folder to the trash-emptying process seems like a better idea.

1 Like

I cleaned the temp folder on all my nodes. From what I've seen, when you stop the node from the CLI, all the files from today are gone, so what remains are lost files that must be deleted. I believe you don't need to filter anything. Just stop the node, wait 1 minute, and delete everything in the temp folder. But maybe I'm wrong, and there could be files from 1-3 days old that should not be deleted…
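If you are worried about possibly still-needed files from the last few days, a middle ground is to delete only files older than a day instead of wiping everything. The path below is a placeholder for your node's real temp folder:

```shell
# Placeholder path -- point this at your node's actual temp folder.
TEMP_DIR="./storage/temp"
mkdir -p "$TEMP_DIR"   # demo only; a real node's folder already exists

# Remove only files last modified more than 1 day ago,
# leaving anything recent untouched.
find "$TEMP_DIR" -type f -mtime +1 -delete
```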

1 Like