My node is offline and I don't understand why

As a workaround to keep the node online? Yes,

BUT the node should NOT receive more ingress this way, as that leads to a truly full drive and an unfixable problem.

So I strongly recommend NOT allocating 5.2 TB. Allocate 3.2 TB to stop ingress immediately!

I am curious: how big are the drives? How much is filled, and which filesystem?
And are there any cache drives/RAM involved?

All but two are ext4.

Strictly adhering to "only use what you have", I even added two micro-SD cards to the mix of 7 SMR drives, 2 SSDs and three CMR drives. One node is a combined node, using mergerfs with multiple drives (2x SMR, one SSD and one micro-SD; my first node ever).
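For the curious, the mergerfs pool is essentially a one-line mount; roughly like this (the mount points here are just placeholders, not my actual ones):

```
# pool four branches into a single mount point for the node
# (all mount points are placeholders; adjust to your own layout)
mergerfs -o allow_other,category.create=mfs \
  /mnt/smr1:/mnt/smr2:/mnt/ssd1:/mnt/sd1 /mnt/node1
```

The category.create=mfs option makes new files land on the branch with the most free space, which spreads pieces across the drives.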

These are located on two systems with Debian as the host OS, which are also used for other purposes.

They add up to a total size of about 32 TB, filled with 25.5+ TB over less than 9 months. Most of the disks are filled almost to the rim now, but essentially no more than 97% of real disk space (having allocated 95%), boiling down to at least 18 GB of real disk space left free. More than enough to prevent fragmentation. Even when using the aforementioned option and disabling reserved root space, although I'm running the nodes as the root user anyway.

All nodes have at least 2 GB of virtual RAM.

As I thought, no node is near the data size where you get problems with fragmentation. From my experience, that starts at 5 TB+ on NTFS drives.

(For my cached node it happens even later, as it is not as fragmented as usual and has plenty of free space, in the two-digit TB range.)

No, I don't change the folders in the configuration file. I just stop the node, move the DB files to another directory and start the node again; the DB files were then recreated on the disk, but nothing changed about the used and free space on the HDD. So I don't know what to do.
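What that looks like as commands, roughly (a sketch only; the service name storagenode and the paths are placeholders and depend on how the node is installed):

```
sudo systemctl stop storagenode                    # stop the node first
mkdir -p /mnt/storj/db-backup                      # somewhere to park the old databases
mv /mnt/storj/storage/*.db /mnt/storj/db-backup/   # move the DB files out of the storage location
sudo systemctl start storagenode                   # the node recreates empty *.db files on start
```

The recreated databases start out empty, which is why the dashboard numbers don't change right away.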

```
# total allocated disk space in bytes
storage.allocated-disk-space: 16.30 TB
```

Set this to 3.2 TB, then:

save the file,
stop the node,
rename storagenode.log to backup.log,
check the temp folder in the data location for old files,
start the node.
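As a rough command-line sketch for a docker node (the container name storagenode, config path and data location are placeholders; on a Windows GUI node, use the Services panel and Explorer instead):

```
# 1. edit the allocation in the config file, set:
#    storage.allocated-disk-space: 3.20 TB
nano /mnt/storj/config.yaml

docker stop -t 300 storagenode                       # 2. stop the node gracefully

mv /mnt/storj/storagenode.log /mnt/storj/backup.log  # 3. keep the old log as backup.log

ls -lh /mnt/storj/storage/temp                       # 4. check the temp folder for old files

docker start storagenode                             # 5. start the node again
```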

Install UltraDefrag Free (not the reboot-required options) and do an MFT defrag, then a normal defrag.
Wait until the filewalkers are finished.

Post the storagenode.log if problems occur.
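To pull just the failures out of the log before posting, something like this works (assuming the default log file name):

```
grep -iE "error|fatal" storagenode.log | tail -n 50   # show the 50 most recent errors
```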

Of course, because this info is taken from the (now) empty databases.
They will be updated only when the filewalker successfully finishes its job and updates the database. Every restart of the node forces the filewalker to start from scratch. However, the DBs also get updated with each upload to your node, and the result is a completely wrong usage shown on the dashboard.
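You can check whether the filewalker finished by searching the log; a rough sketch (the exact wording of the log lines varies between versions, so match loosely):

```
grep -i "walk" storagenode.log | tail -n 20   # recent filewalker activity, if any
```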

Please do not use UltraDefrag for a while; we have a suspicion that it could corrupt data. Use a native defrag tool instead.

Oh! Is it anything more than a one-time case?

We have several nodes with corrupted pieces and are investigating the issue.
All of them are Windows (except one), and some used UltraDefrag.

We didn't finish the investigation, but I would suggest not using UltraDefrag for a while, until we can figure out each case.

Did they use version 7 (free) or 11 (paid)?

I do not know the details yet. Just do not use it for a while.