Hi, the temp folder under the storage node is used as a staging area for uploads. When your node accepts an upload, it creates a file under 'temp' to hold the data while it's streamed to the node (each file will be no larger than 5 MB). Once your node has successfully received the upload and signalled that it has the file, it is relocated into the 'blobs' directory for storage.
An active node will have constant churn in the temp directory, but no file should sit in there for more than 30 minutes. If you start to see more than 2-3 temp files when listing the directory, that's a sign your hard drive can't cope with the upload rate. There are some settings in the config.yml file that can help with file allocation for the temp files: you can pre-allocate bigger chunks, and you can limit your node to a maximum number of concurrent uploads.
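If you want to keep an eye on this, here's a quick sketch (assuming a Linux node; the STORAGE path is a placeholder you should point at your own storage directory - the fallback to a throwaway dir is just so the demo runs anywhere):

```shell
# STORAGE is an assumption -- set it to your node's storage directory.
STORAGE="${STORAGE:-$(mktemp -d)}"   # falls back to a throwaway dir for this demo
mkdir -p "$STORAGE/temp"

# Flag temp files older than 30 minutes; a healthy node should print nothing here.
find "$STORAGE/temp" -type f -mmin +30

# Count temp files currently in flight; consistently seeing more than 2-3
# suggests the disk can't keep up with the upload rate.
find "$STORAGE/temp" -type f | wc -l
```

You could drop something like this into a cron job and alert if the stale-file check ever prints anything.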
# Edit - options in the config.yml
To limit the number of concurrent uploads
storage2.max-concurrent-requests: 5 <or whatever you think is sensible>
To pre-allocate the temp files
pieces.write-prealloc-size: 4.0 MiB
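Putting the two together, the relevant fragment of config.yml might look like this (the values are just a starting point from my own setup - tune them for your hardware):

```yaml
# Cap simultaneous uploads so a slow disk isn't overwhelmed.
storage2.max-concurrent-requests: 5

# Pre-allocate temp files in bigger chunks to reduce fragmentation
# while the piece is being streamed in.
pieces.write-prealloc-size: 4.0 MiB
```

Restart the node after editing so the new values take effect.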
As a side note, seeing your utilisation on F: at 100% with only 8.2 MB/s doesn't look healthy. Is your disk virtualised, so the I/O stats aren't real? I see you are using MS SSD - is that in a disk pool with more than 2 disks by any chance? The latency looks low though; are you running any dedupe or virus scanner on that volume? If you're happy with that performance it's cool, but I know from my node that you will be missing lots of uploads, as your node will be dropped for being slow. And if your disk is maxed out, when audits come along you could start failing - or, more commonly, your online score will drop even though your node is online.