Delta between used space on disk and used space on dashboard

Hi mates,

why does my 5.1 TB disk say it's full when my storage node only has 4.1 TB max space configured? Can I close this gap? It seems like I have a lot of unused stuff lying in my storj folder.

Please help!
Alfred

What OS are you on?
And what do you mean by the disk saying it's full?
The disk space on the dashboard refers to your configured allocation, not to the real disk space on the hard drive.

It’s on a Raspberry Pi with an external hard drive:

/dev/sda2 5.1T 5.0T 67G 99% /media/pi/LaCie

Is there a way to clean up my whole drive or storj folders?

I am also asking because I feel like this is the reason why the node is crashing from time to time.

There is no way to clean Storj’s data off this disk without disqualification.
The only exception is the temp folder.

What’s your allocated space in the STORAGE option of your docker run command?
You should keep a reserve of 10% when you configure the allocation; for the 5.1 TB disk that means no more than about 5.1 × 0.9 ≈ 4.6 TB.

From the dashboard it looks like your STORAGE=4.07TB, and 4.06TB is used, so no discrepancy here.

Please show result of this command:

df -T --si

Hi Alfred,
I allocated only 4.1 TB for the reason you mentioned. I want to increase it, but the disk is already full.

Here are my drive’s stats and an ls of the mounted drive:

df -T --si
Filesystem     Type     Size  Used Avail Use% Mounted on
/dev/root      ext4      16G   11G  4.7G  69% /
devtmpfs       devtmpfs 450M     0  450M   0% /dev
tmpfs          tmpfs    485M     0  485M   0% /dev/shm
tmpfs          tmpfs    485M   38M  448M   8% /run
tmpfs          tmpfs    5.3M  4.1k  5.3M   1% /run/lock
tmpfs          tmpfs    485M     0  485M   0% /sys/fs/cgroup
/dev/mmcblk0p1 vfat     265M   51M  214M  20% /boot
tmpfs          tmpfs     97M  4.1k   97M   1% /run/user/1000
/dev/sda2      exfat    5.1T  5.0T   54G  99% /media/pi/LaCie

cd /media/pi/LaCie
ls -laSh
total 484K
drwxr-xr-x 11 pi   pi   256K May 24 03:09 .
drwxr-xr-x  2 pi   pi   256K Mar 28  2021 '$RECYCLE.BIN'
drwxr-xr-x  2 pi   pi   256K Mar 28  2021 FOUND.000
drwxr-xr-x  2 pi   pi   256K Oct  3  2020 .fseventsd
drwxr-xr-x  3 pi   pi   256K Mar 28  2021 identity
drwxr-xr-x  4 pi   pi   256K Oct  7  2020 orders
drwxr-xr-x  3 pi   pi   256K Oct  3  2020 .Spotlight-V100
drwxr-xr-x  6 pi   pi   256K May 24 06:55 storage
drwxr-xr-x  2 pi   pi   256K Mar 28  2021 'System Volume Information'
drwxr-xr-x  4 pi   pi   256K Mar 28  2021 .Trash-1000
-rwxr-xr-x  1 pi   pi    32K May 23 20:35 revocations.db
-rwxr-xr-x  1 pi   pi   8.4K Oct  7  2020 config.yaml
drwxr-x---+ 3 root root 4.0K May 23 19:57 ..
-rwxr-xr-x  1 pi   pi   1.4K Mar  3 10:00 trust-cache1.json
-rwxr-xr-x  1 pi   pi   1.4K May 24 03:09 trust-cache.json
-rwxr-xr-x  1 pi   pi    253 Mar 28  2021 docker.txt

In total? That does not look related to the storage node. Are there files in FOUND.000 from crashes?

I am seeing an even more drastic example than yours.
I am currently investigating, so I can’t tell yet why I am seeing this:

A 16 TB partition shows as 100% full in df -H on Linux. But the node dashboard says it has only around 8 TB of node data on it.

As said, I am currently trying to figure out where that discrepancy comes from.
You could run du -sh to have the folder sizes on disk calculated, but it might take some time.
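For example (a sketch; adjust the path if your node’s data lives elsewhere):

sudo du -sh /media/pi/LaCie/storage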

ncdu (NCurses Disk Usage) might also give some insights.
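A typical invocation would be something like this (assuming the ncdu package is available on Raspbian; -x keeps the scan on one filesystem):

sudo apt install ncdu
sudo ncdu -x /media/pi/LaCie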

Possibly inodes? df -i

Unfortunately not. Inodes are fine: IUse% is at 9%.
Also fsck could not find any issues.
It is really weird.
I am currently trying to get the real folder sizes to see if the node dashboard may be wrong.
But with so few inodes occupied, I don’t believe it is.

I am just scanning this FOUND.000 folder, already 300 GB found. Maybe that’s it! Can I just delete it?
Why is this not deleted automatically?

In the end I deleted it, but the disk still shows the same load.
Weird. Am I an idiot?

This is the reason. exFAT uses a big cluster size, so space is wasted on it. It is also not reliable and can become corrupted at any time. I would suggest backing up all data from it, reformatting it to ext4, and then restoring the data.

Yes, you can delete it. This folder means the drive was previously used on a Windows machine and chkdsk recovered orphaned data into it.

Also, the mountpoint /media/pi suggests that the disk is not mounted via /etc/fstab. But I’m not sure, as I don’t know how Raspbian does things.
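If it indeed isn’t, a hedged sketch of a static mount (the UUID is illustrative; read the real one with blkid first):

sudo blkid /dev/sda2
# then add a line like this to /etc/fstab:
UUID=1234-ABCD  /media/pi/LaCie  exfat  defaults  0  0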

and then

sudo rm -rf '$RECYCLE.BIN' 'FOUND.000' '.fseventsd' '.Spotlight-V100' 'System Volume Information' '.Trash-1000'

also check the filesystem if you haven’t yet.

Already done this, no change in the end…

Hi mates, hi alexey,
Is there a way to reformat my disk remotely? The node is located 700 km away, and I don’t have any other disk over there…
I think it’s time for a graceful exit unfortunately…
Br
Alfred

For sure, but graceful exit is only possible after 6 months and might fail. If you’re not dealing with audit failures and much downtime, you can also opt for accepting the situation for a while till you’re around.

How big is your database folder btw?

This is also the reason I gave up on btrfs and xfs, and stick to ext4 (mkfs.ext4 -F /dev/sdXN does the trick). The former because it gave me errors in the log too often, although the fact that you can duplicate the system extents and also kind of create a backup of the filesystem structure using snapshots made it very appealing at first sight. The latter because xfs_repair takes too long and once even deleted my entire blobs folder.
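A minimal sketch of such a reformat (destructive: it wipes the partition; the device name is illustrative, double-check with lsblk first):

sudo umount /media/pi/LaCie
sudo mkfs.ext4 -m 0 -F /dev/sda2   # -m 0 drops the 5% root reserve, optional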

Besides, I also make sure that the DBs reside on a different filesystem than the data, because otherwise the disks got quite fragmented and the database became corrupted more than once.
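For example, in config.yaml (a sketch: the target path is illustrative, and the node must be stopped while you move the .db files there):

# keep the SQLite databases on a separate filesystem from the blobs
storage2.database-dir: /mnt/ssd/storagenode-dbs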

As a last resort, you could consider removing the trash folder in order to get some space for the time being. Also, lowering the space allocated to STORJ might help.

Have you already rebooted in the meantime, to make sure you’re not looking at cached free space?

Don’t do this, the node could be disqualified for losing data.

this is not needed, you may also call it like this:

df --si --sync

@Alfred Right now, without a reformat, the only way is to reduce the allocation to what’s used, or even below that. If the customers decide to remove their data, it may reduce the used space on your node.
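For example (the value is illustrative):

docker stop -t 300 storagenode
docker rm storagenode
# recreate with your usual docker run flags, only lowering the allocation, e.g.:
#   -e STORAGE="3.5TB"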

You could try to go with gparted or LVM, but you will lose data and/or the node in case of any issue with the hardware or power supply, or on user error; see my adventure with such a case: Moving from Windows to Ubuntu and back

For sure, it might. But it shouldn’t ever happen, since trash contains deleted pieces that shouldn’t be audited unless a satellite recovery took place right after removal of the trash. And if you’re stuck with your filesystem because it’s full, it might be a last resort in my opinion, posing only a very low risk of DQ.

Essentially, the TS wants an in-place conversion from exFAT to ext4 or something like it. Since the disk is nearly full, I don’t see any viable options to do so. But if you see one using gparted (which, as far as my knowledge stretches, means data loss by default if you haven’t shrunk the underlying filesystem first; which is not possible with exFAT on a full drive) or LVM, enlighten us!

My advice would be to carry on for a while, till you’re around to fix it on location. Or just accept the situation of being unable to use the full size, which may be partially due to the cluster size (at least 4 KiB by default) of exFAT, as Alexey already suggested. That means an average loss of 1–2 KiB per file. Although, taking one node here of currently about 400 GB in 2,000,000 files and scaling that file count up to a 4 TB node, it would waste not much more than 40 GiB (calculation, with sizes in kB and an average waste of 2048 bytes per file: 2E6 / 400E6 × 4E9 × 2048 / 1024³ ≈ 38 GiB). So there must be another explanation.

Did you already run fsck.exfat / exfatfsck from the exfat-utils package?
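For example (the partition must be unmounted first; device name as shown earlier in the thread):

sudo umount /media/pi/LaCie
sudo fsck.exfat /dev/sda2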

You are right, exFAT is impossible to shrink, I forgot.
So there are no options to convert in place.

The default cluster size on exFAT for a volume this large is 128 KiB: Default cluster size for NTFS, FAT, and exFAT - Microsoft Support
So it will waste a lot of space. You may check the size of the content:

du -s --si --apparent-size /mnt/storj/storagenode/storage

and compare with the real usage:

du -s --si /mnt/storj/storagenode/storage
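If you want a more exact figure for the cluster overhead, a sketch like this (assuming GNU find and awk, the same path as above, and 128 KiB clusters) sums the per-file waste:

find /mnt/storj/storagenode/storage/blobs -type f -printf '%s\n' \
  | awk '{c=128*1024; w+=(c-$1%c)%c} END {printf "wasted: %.1f GiB\n", w/1024^3}'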