I have been having some trouble with my node lately. I first noticed getting alerts that my node was down, and when I went to log in it wouldn't take my user password, but I could log in as root. A reboot seemed to fix it. Then it started to happen more often. I started to Google the issue and found a suggestion to check free space on the root filesystem.
I was able to get into terminal recovery mode and found that my root was 100% used. I ran several commands to clean up space, deleted /tmp files and other stuff, and got it down from 100%.
That seemed to help, but it is starting to happen again: my node went down again and I found the root back at 100%. I don't have any space to add more for the root (that I know of, or know how to check).
Is there any way to check for free space on the device?
When I started tonight, the machine wouldn't even boot into the OS. I had to go back into recovery mode and delete stuff to get it to boot. I am now in the GUI side as root. When I run df -h I get:
You can run something like du -ha / | sort -hr | head -n 20 to list the 20 largest files and directories. Once you have located them, you can dig deeper into what is eating your space.
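If scanning every file under / is too slow, a variant of the same idea is to summarize one directory level at a time and then drill into whichever entry looks biggest (the /var example below is just an illustration):

```shell
# Top-level summary: which directories directly under / are biggest?
# -x stays on the root filesystem (skips /proc, other mounts, etc.)
sudo du -xh --max-depth=1 / 2>/dev/null | sort -hr | head -n 15

# Then drill into the biggest hit, e.g. /var:
sudo du -xh --max-depth=1 /var 2>/dev/null | sort -hr | head -n 15
```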
Please check the journald logs in the /var/log/journal directory. journald usually rotates them, but I found the defaults lacking, and these directories sometimes grow just too much. You can remove any files that have the @ character in the file name; that character signifies the file is an archived (historical) journal.
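Rather than deleting the archived files by hand, journalctl can do the cleanup itself; the 200M cap below is just an example value, pick one that fits your disk:

```shell
# Show how much space the journal currently takes
journalctl --disk-usage

# Shrink archived journals down to a total of ~200 MB
sudo journalctl --vacuum-size=200M

# To make the cap permanent, set in /etc/systemd/journald.conf:
#   SystemMaxUse=200M
# then restart journald:
sudo systemctl restart systemd-journald
```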
Please install ncdu (with apt install ncdu) and use it to investigate which other directories are growing; I found it the best tool available for investigating disk space usage in text mode. If you can't install it, @jammerdan's option is the second best, I guess.
How small is your system disk? I assume the logical volume resides on sda. Together with /boot and /boot/efi I can only see about 20 GB used. Is there any unallocated space left on that disk? If so, just extend the logical volume.
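If the volume group does have free space, extending looks roughly like this. The ubuntu-vg/ubuntu-lv names are only the Ubuntu defaults and an ext4 root is assumed; check your actual names with vgs/lvs first:

```shell
# See the volume group's free space and the LV names
sudo vgs
sudo lvs

# Grow the root LV by all remaining free space in the VG
sudo lvextend -l +100%FREE /dev/ubuntu-vg/ubuntu-lv

# Grow the ext4 filesystem to match the new LV size (works online)
sudo resize2fs /dev/ubuntu-vg/ubuntu-lv
```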
I saw this too late. Install ncdu and run it, it is super fast. There is also a practice of creating a 500 MB ballast file on the disk; when you need space the most, delete that file.
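The ballast trick as a sketch (the path and size are arbitrary):

```shell
# Reserve 500 MB now, while you still have room
sudo fallocate -l 500M /ballast

# Later, when the disk fills up and you need space to work:
sudo rm /ballast
```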
Lots of usage on that Docker overlay filesystem. Either a Docker container is using a lot of disk, or you do not have log rotation configured and container logs are blowing up the disk space.
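To see where Docker's space is going and to cap container logs, something like the following; the daemon.json snippet uses the standard json-file log rotation options, and the size values are just examples:

```shell
# Where is Docker's space going?
docker system df

# Reclaim unused images, stopped containers, dangling build cache
docker system prune

# Cap container logs via /etc/docker/daemon.json:
#   {
#     "log-driver": "json-file",
#     "log-opts": { "max-size": "10m", "max-file": "3" }
#   }
# then restart Docker (applies to newly created containers only):
sudo systemctl restart docker
```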
A filesystem can also report full when it runs out of inodes rather than disk space (not the case here, just general advice).
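Checking that is a single flag:

```shell
# -i reports inode usage instead of block usage
df -i /
```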
Maybe don't use desktop packages on a disk this tight (I saw lightdm).
In the past I left a tcpdump process writing to /tmp and forgot to kill it; it ended up eating all the disk space. Once you have some disk space to breathe, be sure to run df once in a while just to be sure nothing is eating it up.
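A minimal sketch of that habit turned into a cron-able check; the 90% threshold and the plain echo are placeholders for whatever alerting you already use for the node:

```shell
#!/bin/sh
# Warn when root filesystem usage crosses a threshold.
THRESHOLD=90
# df --output=pcent prints e.g. " 42%"; strip everything but digits
USAGE=$(df --output=pcent / | tail -n 1 | tr -dc '0-9')
if [ "$USAGE" -ge "$THRESHOLD" ]; then
    echo "WARNING: / is at ${USAGE}% (threshold ${THRESHOLD}%)" >&2
fi
```

Dropped into /etc/cron.hourly/ it nags you before the disk hits 100% again.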