I know this doesn't exactly solve the current issue, but it should help avoid creating it again in the future.
I don't know how you'd fix it… I don't suppose one can defrag the partition like in Windows… but even that might take longer than just copying the data to another disk, because of head thrashing.
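For what it's worth, if the disk is formatted as ext4 there is an online defrag tool, e4defrag, and the copy-to-another-disk route is usually done with rsync. A rough sketch, using the /sn3 path from the template below and an example /mnt/newdisk target for the new disk (adjust both to your setup):

# report how fragmented the filesystem is (ext4 only)
sudo e4defrag -c /sn3

# online defrag in place; this can take a very long time on a nearly full disk
sudo e4defrag /sn3

# or migrate to a new disk: first pass while the node is still running
rsync -aHAX --info=progress2 /sn3/ /mnt/newdisk/
# then stop the node and do a final pass to pick up the delta
docker stop -t 300 sn3
rsync -aHAX --delete --info=progress2 /sn3/ /mnt/newdisk/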
It looks like this in the docker run command:
Template
docker run -d --restart unless-stopped --stop-timeout 300 -p 192.168.1.100:28967:28967/tcp -p 192.168.1.100:28967:28967/udp \
-p 192.168.1.100:14002:14002 -e WALLET="0x111111111111111111111" \
-e EMAIL="your@email.com" -e ADDRESS="global.ip.inet:28967" \
-e STORAGE="4TB" --mount type=bind,source="/sn3/id-sn3",destination=/app/identity \
--mount type=bind,source="/sn3/storj",destination=/app/config --name sn3 storjlabs/storagenode:latest \
--filestore.write-buffer-size 4096kiB --pieces.write-prealloc-size 4096kiB

(Optional: to cap the container log size, add --log-opt max-size=1m right after "docker run -d". Don't leave it as a commented-out line in the middle of the command; the # breaks the backslash line continuation and the rest of the command gets cut off.)
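Note that the two storagenode flags go after the image name (storjlabs/storagenode:latest), so they are passed to the storagenode process and not to docker itself. If you'd rather keep these settings in config.yaml instead of on the command line, the equivalent entries should look roughly like this (assumption on my part: the key names mirror the flag names, with the same example values as above):

# config.yaml
filestore.write-buffer-size: 4096kiB
pieces.write-prealloc-size: 4096kiB

Restart the node after editing config.yaml so the new values get picked up.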