Negative Disk Space Remaining

Hello,

I have a question about my node. Since last month, the Disk Space Remaining on the web dashboard has changed to a negative value. Can someone help me solve this problem?

1. This could be because you were offline for a long period of time and the deletes finally caught up, leaving you with 1.25 TB of garbage.
2. Your node could have been too slow to delete files and the garbage collector caught it.
3. You allocated a larger amount of storage, then changed it to less later.
4. Your database isn't calculating correctly.

You can figure that out by seeing how big the garbage folder is.
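
If it helps, here's a minimal sketch in Python that totals up the size of that folder; the path is only an assumption based on the default docker mount layout, so adjust it for your setup:

import os

# Assumed location of the garbage folder (default docker layout); change it to match your node.
garbage_dir = "config/storage/garbage"

total_bytes = 0
for root, _dirs, files in os.walk(garbage_dir):
    for name in files:
        try:
            total_bytes += os.path.getsize(os.path.join(root, name))
        except OSError:
            pass  # a file may be deleted while we walk the tree

print("garbage folder size: %.2f GB" % (total_bytes / 1e9))

(du -sh on the folder tells you the same thing; the script is just convenient if you want to log it over time.)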


I saw negative disk space on several 100% online nodes - though it was a couple of versions back.

There was an issue that could cause high amounts of trash. I think that’s fixed now.

maybe the node was only updated recently… if it's on manual updates it can easily skip a few versions.

check that you've got space on the drive
Keep calm and continue storjing…
don’t worry you are fine… something would have caught fire by now if it was a big problem xD

ofc if it doesn’t go away in the next week or so i would for sure investigate it further…
basic checks of your node's viability are never a bad idea… check that everything else is in the green.
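
For the "check that you've got space on the drive" part, here's a quick sketch; the mount point is just an example, point it at whatever holds your node's storage:

import shutil

# Example mount point; replace it with the path where your node's data actually lives.
usage = shutil.disk_usage("/mnt/storagenode")

print("total: %.2f TB" % (usage.total / 1e12))
print("used:  %.2f TB" % (usage.used / 1e12))
print("free:  %.2f TB" % (usage.free / 1e12))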

Thanks for your reply.

  1. Yes. The node was offline for about 1 day.
  2. No. It has already been 1 week, and it still shows -1.25 TB Disk Space Remaining.
  3. I don't have another 1.25 TB on the disk to add to the allocated storage.
  4. Yes. I think the database isn't calculating correctly.

I checked the size of the garbage folder. It's only 4 KB.

I just checked the log, and I found errors like the following.

2020-06-05T16:26:57.307Z ERROR piecestore:cache error getting current space used calculation: {"error": "lstat config/storage/blobs/ukfu6bhbboxilvt7jrwlqk7y2tapb5d2r2tsmj2sjxvw5qaaaaaa/sf/erqznwv4e4xiwtbmdtbuoiqw62t2cx3h75neqvegb6d5akr2ga.sj1: structure needs cleaning; lstat config/storage/blobs/6r2fgwqz3manwt4aogq343bfkh2n5vvg4ohqqgggrrunaaaaaaaa/g4/qpwdaacnrdwtks53ahv4u6dhdhs6pkfcsdg7crfheo7goncyfa.sj1: structure needs cleaning", "errorVerbose": "group:\n--- lstat config/storage/blobs/ukfu6bhbboxilvt7jrwlqk7y2tapb5d2r2tsmj2sjxvw5qaaaaaa/sf/erqznwv4e4xiwtbmdtbuoiqw62t2cx3h75neqvegb6d5akr2ga.sj1: structure needs cleaning\n--- lstat config/storage/blobs/6r2fgwqz3manwt4aogq343bfkh2n5vvg4ohqqgggrrunaaaaaaaa/g4/qpwdaacnrdwtks53ahv4u6dhdhs6pkfcsdg7crfheo7goncyfa.sj1: structure needs cleaning"} Error: lstat config/storage/blobs/ukfu6bhbboxilvt7jrwlqk7y2tapb5d2r2tsmj2sjxvw5qaaaaaa/sf/erqznwv4e4xiwtbmdtbuoiqw62t2cx3h75neqvegb6d5akr2ga.sj1: structure needs cleaning; lstat config/storage/blobs/6r2fgwqz3manwt4aogq343bfkh2n5vvg4ohqqgggrrunaaaaaaaa/g4/qpwdaacnrdwtks53ahv4u6dhdhs6pkfcsdg7crfheo7goncyfa.sj1: structure needs cleaning

2020-06-05T16:37:06.473Z ERROR orders archiving orders {"error": "ordersdb error: database disk image is malformed", "errorVerbose": "ordersdb error: database disk image is malformed\n\tstorj.io/storj/storagenode/storagenodedb.(*ordersDB).archiveOne:238\n\tstorj.io/storj/storagenode/storagenodedb.(*ordersDB).Archive:202\n\tstorj.io/storj/storagenode/orders.(*Service).handleBatches.func2:238\n\tstorj.io/storj/storagenode/orders.(*Service).handleBatches:255\n\tstorj.io/storj/storagenode/orders.(*Service).sendOrders.func1:189\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:57"}

I think the database is corrupted. Do you know how to rebuild the database?
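
For anyone else hitting this, here is a rough sketch of how a single database file could be checked and rebuilt with Python's sqlite3 module, assuming the node is stopped and the file is an ordinary SQLite database; orders.db and the paths are only examples, and this is a generic SQLite recipe rather than an official Storj procedure:

import sqlite3

src_path = "config/storage/orders.db"      # example path; adjust it to your setup
dst_path = "config/storage/orders_new.db"  # the rebuilt copy gets written here

src = sqlite3.connect(src_path)

# A healthy database prints a single "ok"; anything else lists the damage found.
for (result,) in src.execute("PRAGMA integrity_check;"):
    print(result)

# Dump whatever is still readable into a fresh file. A badly damaged database
# can still raise sqlite3.DatabaseError partway through the dump.
dst = sqlite3.connect(dst_path)
dst.executescript("\n".join(src.iterdump()))

src.close()
dst.close()

If the dump goes through, the new file can replace the old one while the node is stopped; if it fails partway, the damage is more than a simple dump can recover.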

No. I just checked the garbage folder. It's only 4 KB.

Looks like you're having file system issues. Please check your file system for errors.

Yes. I ran fsck and it did find and fix some errors. I will restart the node. Hopefully it will be fixed after a few days. Thanks for your reply.

you should check the SMART data of the hdd, to see if it is a hardware issue…
ofc a power outage or similar can create write holes in your database depending on what type of setup you are running…
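
A small sketch for the SMART part, assuming smartmontools is installed and the disk is /dev/sda (adjust the device path):

import subprocess

# "smartctl -H" prints the drive's overall SMART health self-assessment.
result = subprocess.run(
    ["smartctl", "-H", "/dev/sda"],
    capture_output=True,
    text=True,
)
print(result.stdout)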

pulled the wrong drive the other day from my raid array, thought it was the bad drive… it wasn't, so my raidz1 of 4 drives was suddenly down to only 2
it was very unhappy about that and claimed a few of my databases sustained damage… ofc on a Copy on Write file system that just means it's a slightly older version of the file/db…

i assume my SLOG caught it, because after a bit of nursing it all seemed right as rain.
and i ofc put the drive right back… i also had turned off sync=always… so that didn't help, i'm sure, and not long before that i had pulled my L2ARC to see if that would help my performance…

so it was kinda fun to see my data almost taking a direct hit… getting a write hole in a db can be devastating tho, so you really want to be sure that your setup is semi safe for long term operation.

which is partly why i am so mean to my system… rather have it die on me early on than learn hard lessons later.

One question. Can what's inside the trash be removed manually, or is it removed over time? Right now I have 169 MB.

is the risk worth the reward… the node will delete the stuff inside the trash all by itself when it's good and ready…

you really shouldn't tinker with anything inside the node, better to spend the time on ensuring that it doesn't die or get damaged due to bad prep or configuration.

No no, it was simple curiosity. With the problems that people keep reporting, it's better not to touch anything at all. Besides, I don't have any kind of space problem. Thank you!

people tinker… it's dangerous… don't fix stuff until it's broke…
but in this case my money would be on him running a regular file system and having had some sort of power outage or similar crash/shutdown… leaving a write hole in his database file.
should be a minor fix… if it doesn't just auto correct itself…

it really becomes a question of whether a problem is likely to solve itself or will escalate with time into something more serious.

risk vs reward calculation

personally i will, to a fault, not touch anything inside the storagenode… it's a big house of cards, and even one little permission i change might escalate into some serious issue over possibly years of operation, so it's best to keep it as close to stock as possible… so that when the storj crew applies changes in the future the node will have predictable results…

sure, their quick database trim fix… what was it called, vacuuming?… whatever it was they did to their databases a lot recently… might make their system a little faster right now… but who knows how it will affect their storagenode in a year… the point where you really hate to have the problem.

keep a good eye on the node's behavior, consult the storjlings if advice is needed, and plan for failure; something will fail or go bad… it's just a matter of whether you can predict what it is before it happens and avoid/mitigate it.

Update: After I ran fsck and it fixed some errors, the negative disk space remaining issue is fixed. Thanks for your help.