Folder size is different from dashboard

Hi,
I have a node which has been running for almost a year, since the tests started. The dashboard currently shows 3.6 TB of used space, but on disk the blobs folder takes up 4.6 TB.
I have checked the *.db files a couple of times and fixed the reported errors, but the sizes still differ.
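For reference, the check itself is nothing fancy; it is roughly something like this sketch (Python; /srv/storagenode/storage is just an example path for my setup, adjust it to your own node):

```python
import glob
import sqlite3

# Example path only; point this at your own node's storage directory.
DB_DIR = "/srv/storagenode/storage"

# Run SQLite's built-in integrity check on every *.db file the node keeps.
# (Safest to run while the node is stopped, so nothing else holds the files.)
for db_path in sorted(glob.glob(f"{DB_DIR}/*.db")):
    try:
        conn = sqlite3.connect(db_path)
        result = conn.execute("PRAGMA integrity_check;").fetchone()[0]
        conn.close()
        print(f"{db_path}: {result}")        # "ok" means SQLite found no corruption
    except sqlite3.DatabaseError as exc:
        print(f"{db_path}: FAILED ({exc})")  # damaged file or not a SQLite database
```

The check itself only reads the databases; it doesn't change anything on its own.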

Occasionally I see this type of error in the log file, but in general the upload/download traffic looks OK:

2020-04-03T07:50:32.773Z ERROR piecestore could not get hash and order limit {"error": "v0pieceinfodb error: sql: no rows in result set", "errorVerbose": "v0pieceinfodb error: sql: no rows in result set\n\tstorj.io/storj/storagenode/storagenodedb.(*v0PieceInfoDB).Get:132\n\tstorj.io/storj/storagenode/pieces.(*Store).GetV0PieceInfo:650\n\tstorj.io/storj/storagenode/pieces.(*Store).GetHashAndLimit:430\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).doDownload:577\n\tstorj.io/storj/storagenod

Any ideas how to delete the unused files? I run the node on Linux (CentOS 7).

They’re not unused files. It’s a mismatch between the database and the actual files stored on your node. You should still be storing them, and you’re also getting paid for them. Don’t remove anything; the issue is with the reporting, not the files themselves.


Hello,

I have a node that stores its data on an iSCSI disk. Looking at my dashboard, my used space is 3 TB, but checking on the disk it is 5.5 TB.

Why is there such a big difference?

Do I have to purge some data?

I already reported this bug.
You should never purge any data on your own. Only the satellite can delete data from your node!

Does this mean the data shown on the dashboard is wrong, or that files on my disk which should have been deleted have not been, because of a bug?

Just checked my dashboard vs the numbers my file system gives me.
I've got 770 GB stored, but my dashboard puts me at 970 GB… doesn't seem hugely inconsistent…
I mean, the documentation says to account for something like 10% extra for the databases and overhead; I forget the exact figure…

My logs are nearly flawless, so I will assume it's not due to database issues… though I did get a few major deletions today, so maybe that's why…

There is, or can be, an awful lot of space used on overhead… similar to when you format a drive: you lose capacity to the sector size on the disk, then you put the file system on top of that, which again adds overhead, and on top of that Storj most likely stores tons and tons of small files.
Stuff like that can lead to massive overhead… my current array has something like 30% overhead, and that's before the per-file overhead comes into play…

I'm going to make sure I get 4Kn (4K native sector) drives for the next array, which should take my overhead down from 30% to more like 20%.

3.6 TB taking up 4.6 TB of drive capacity is far from unusual, depending on how your filesystem is configured… it's a bit at the high end of what is good, though. I would suggest looking into what your filesystem's minimum allocation size is and comparing it to the average file size on your storage node.
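To put actual numbers on that, a rough sketch like this (Python; the blobs path below is just an example, point it at your own node) walks the blobs folder and compares the apparent size of the pieces with the space the filesystem actually allocates for them:

```python
import os

# Example path only; adjust to where your node keeps its pieces.
BLOBS_DIR = "/srv/storagenode/storage/blobs"

apparent = allocated = count = 0
for root, _dirs, files in os.walk(BLOBS_DIR):
    for name in files:
        st = os.lstat(os.path.join(root, name))
        apparent += st.st_size              # bytes of actual piece data
        allocated += st.st_blocks * 512     # bytes the filesystem reserves for the file
        count += 1

print(f"files:          {count}")
print(f"apparent size:  {apparent / 1e12:.2f} TB")
print(f"allocated size: {allocated / 1e12:.2f} TB")
if count and apparent:
    print(f"avg file size:  {apparent / count / 1e6:.2f} MB")
    print(f"overhead:       {(allocated / apparent - 1) * 100:.1f} %")
```

The gap between those two totals is pure allocation overhead: the smaller the average piece and the bigger the minimum allocation unit, the bigger that gap gets.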

So yeah, it's really just about how you do the math… often a folder can have 3-4 different sizes depending on how you look at it.

What does this even mean…

Please tell us that you didn't start changing information in the
[storagenode dir]/storage/*.db files, though that would sort of explain the strange log messages…

You shouldn't touch anything in those folders; in general, most of the changes you will be making to your node are related to configuration, like the docker launch command and the config.yaml.

If you did change any *.db files, trying to correct it now will not make anything better… best to just keep your fingers crossed that the software will correct itself after a while…

That the node is still running is a good sign; it can't be all bad then… and if it isn't programmed to correct this, then I'm sure it will be sometime in the near future.

Another thing that could be happening: much like ZFS will keep stuff in RAM because it doesn't take long to delete it, your storage node could have data selected for deletion due to network rebalancing or some such thing… however, if it doesn't have any incoming data to replace what it wants to delete, and the data it needs to get rid of is still in circulation, then keeping it a little while longer until it can be replaced makes perfect sense…

It gives the SNO better odds of profit, gives the Tardigrade user better odds of fast access to the data, and it improves data redundancy for that particular customer… and if the space is needed fast, it simply deletes it…

I'm not saying that's what is happening… just saying this might be far more complex than one might think; most stuff usually is…

Really, you shouldn't worry much about it… just keep shuffling in new drives as needed xD