Wrong used & remaining disk space

I rebooted into SystemRescue and ran xfs_repair; it took 3 hours to complete. Then I rebooted into the OS and restarted the storagenode several times, but nothing changed. :frowning: I think I’ll just wait for 19 days and see if anything changes after the monthly tasks run.
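For reference, this is roughly what I ran from SystemRescue (the device name is just an example for my setup; the filesystem has to be unmounted before repairing):

```sh
# make sure the storage filesystem is not mounted (xfs_repair refuses to run on a mounted fs)
umount /dev/sdb1

# run the repair; -v prints verbose progress (this took ~3 hours here)
xfs_repair -v /dev/sdb1
```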

Sorry about all the inconvenience!
I cannot reproduce it, though…

Hi. It’s ok, just some $ per month. I’ll post an update if I see something new. Thank you, man.

Update: The free space is a bit lower now. I didn’t do anything. Lol.

Did you check your payout information for the past period?

I remember it was about 2TB of average disk space per month, but I’m not sure. I’m outside right now; I’ll check it when I can access the dashboard. Thank you.

Now things are getting weird.
Used space does not change anymore. New screenshot from node dashboard:

Meanwhile, df is showing 8.6TB in use. I am starting to wonder if it is a database problem on the node. Which database would that be, and how could I check it?

If you have checked your databases and found no issues, you could try to recreate the database piece_spaced_used.db.
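To check them, something like this should work (the paths are just an example for a typical Linux setup, sqlite3 needs to be installed, and I’m assuming the node runs as a systemd service called storagenode; adjust to where your node keeps its databases):

```sh
# 1) check every node database for corruption; each line should end with "ok"
for db in /mnt/storagenode/storage/*.db; do
  echo "$db: $(sqlite3 "$db" 'PRAGMA integrity_check;')"
done

# 2) if piece_spaced_used.db turns out to be the problem: stop the node, move the
#    old file aside and let the node generate a fresh (empty) one on the next start
systemctl stop storagenode
mv /mnt/storagenode/storage/piece_spaced_used.db /mnt/storagenode/storage/piece_spaced_used.db.bak
systemctl start storagenode
```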

What’s the filesystem, by the way?

I have just replaced piece_spaced_used.db with a freshly generated one.
At first the node would not start: the logs complained that it could not get the used space and that the minimum space requirement was not met. :thinking:
I had to manually edit config.yaml to reduce the minimum before it would start (the option I changed is shown below).
The node currently shows all capacity as free, with slowly increasing used capacity. I guess it will take some time until the used space gets recalculated?
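For reference, the option I lowered looks roughly like this (the key name is taken from the config template as I remember it, and the path is just an example, so treat both as assumptions):

```sh
# show the current minimum-space requirement in the node's config (default is 500 GB)
grep "minimum-disk-space" /mnt/storagenode/config.yaml
# storage2.monitor.minimum-disk-space: 500.00 GB
```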

I am wondering if a full node would still report as full to the satellites while recalculating?

File system is ext4.

That’s correct.

It will likely report the wrong free space until it finishes calculating. However, it should reject uploads if less than 500MB is free.
Also, if you have some other process that could use the same disk, you may run into this issue.

I see. So those 500MB refer to the free space reported by the OS. That’s fine.

But this does not make sense to me. This node has 8.5TB of data, and it complains that the minimum is not met and refuses to start. I understand that it somehow checks the remaining free space, but for an existing node that minimum check does not seem necessary.

I just checked it. The payment is for 2TB of average disk space. But I think I’ll just leave it; after the proposal I don’t have enough motivation to work on this. Thanks @Alexey for your time, bro.

Well, scanning 9TB consisting of millions of 1KB–2KB files will take about a month on an HDD…

When the database is empty, the node cannot know that the used space belongs to it; it only sees that there is not enough free space and assumes you are starting a new node.

It just calculates the used space with an analogue of du --si.
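Roughly like this (the path is just an example of where a node keeps its pieces):

```sh
# what the used-space calculation effectively does: walk every piece file and
# sum the sizes -- this is why it is so slow with millions of tiny files
du --si -s /mnt/storagenode/storage/blobs
```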

That’s the point where it fails.

Bro, please update this thread if there’s something new.

It will take…time.

I think you’re just seeing new incoming data being accounted for. The filewalker will update the total all at once when it’s done, not gradually.
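If you want to check whether it has finished, you could grep the node log for the used-space calculation; the exact wording of those log lines depends on the node version, so the pattern here is an assumption:

```sh
# look for recent filewalker / used-space messages in the node log
grep -iE "filewalker|used-space" /mnt/storagenode/node.log | tail -n 20
```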

Let’s hope that this is what it is doing.