Disk usage discrepancy?

My drive has a 4k block size. I have around 20 nodes, but only this one specific node has this issue. Some discrepancy seems to be normal due to the block size and smaller files, but not one this huge.

How can I check whether the filewalkers have finished? Since the discrepancy appeared during the testing, I don’t think this is related to untrusted satellites, at least not to this extent. I have 4x 6TB nodes which are all full now, the dashboard shows 100% and the OS shows 100% too, so this seems to be an issue with just this particular node.

To check the progress:

However, you also need to search for errors related to the filewalkers (so search for error and walk).
You also need to search for database-related errors (search for error and database), because the dashboard will show incorrect amounts on the pie chart if the databases are not updated.
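For example, for PowerShell (a sketch; the log path below is just an example from a default Windows install, adjust it to your setup, and the exact wording of the filewalker messages varies between versions):

Get-Content "C:\Program Files\Storj\Storage Node\storagenode.log" | sls error | sls walk | select -last 10

and, to see the most recent filewalker lines regardless of errors:

Get-Content "C:\Program Files\Storj\Storage Node\storagenode.log" | sls walk | select -last 10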

I’m using docker. Is there a way to open a command line in the storj container and check it there? A few days ago I stopped the container, changed the log size to 10 MB and enabled the piece scan on startup. I hope 10 MB worth of logs will give me enough information, since I don’t want to restart the node and make it start over. How do I know if the DB is malformed? Do I also have to check the logs?

This doesn’t matter too much; you just need to replace Get-Content pathToTheLogs with docker logs storagenode 2>&1.
See examples for docker:

If you use PowerShell, you can replace grep with sls.

Two ways: filter the logs for error and database, i.e.
for bash and docker:

docker logs storagenode 2>&1 | grep -i error | grep database | tail

For PowerShell and docker:

docker logs storagenode 2>&1 | sls error | sls database | select -last 10

or use this guide:

However, I would suggest checking the logs anyway. There could be other errors, like “database is locked”.
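For the filewalkers the same pattern works, e.g. for bash and docker (a sketch, assuming the default container name storagenode; the exact success messages differ between versions):

docker logs storagenode 2>&1 | grep -i error | grep -i walk | tail

and to see the latest filewalker activity, including completion messages (they are logged at the info level):

docker logs storagenode 2>&1 | grep -i walk | tail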

When I restart the storagenode, the reported capacity is reset. The actual used capacity is full, but it appears as if there is space left, so ingress is still being received. Is there a way to set it to the actual used capacity?



Alright, I’ll give it a try. What I know is that, like you mentioned, I had lots of “database is locked” errors back then, due to a slow PCIe SATA card (it used lots of multiplexers behind multiplexers), mostly on startup. That got fixed by a better PCIe card, but on startup there is still a “database is locked” sometimes. Is there something I can do in this case other than getting a faster drive?

In that case the only option is to move the databases to a less loaded drive, preferably an SSD:

Otherwise it would be a waste of time: if the filewalkers are unable to update the databases, those keep the previous, outdated data and you will continue to see a discrepancy on the dashboard.
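A minimal sketch of such a move for a docker node (the container name storagenode, the paths and the mount target are assumptions for illustration; please follow the official guide for the exact steps for your setup):

# stop and remove the container, so it can be re-created with the new mount
docker stop -t 300 storagenode
docker rm storagenode

# copy the database files to the SSD (example paths)
mkdir -p /mnt/ssd/storagenode-dbs
cp -p /mnt/hdd/storagenode/storage/*.db /mnt/ssd/storagenode-dbs/

# in config.yaml point the node at the new in-container location:
#   storage2.database-dir: /app/dbs
# then re-run your usual docker run command with an extra mount, e.g.:
#   --mount type=bind,source=/mnt/ssd/storagenode-dbs,destination=/app/dbs

The node must stay stopped while the *.db files are copied, otherwise they may end up corrupted.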

When moving the databases to another drive, does the multinode dashboard still work? Because the multinode dashboard reads the databases and shows the used space.

[image]

Which of the bugs does this relate to? And will it be fixed?

Doesn’t look like a bug. Do you know what “average” means?

And what is the chart about, then?

Thanks for the welcome Alexey! :slight_smile:

I ran the command provided above in the link to remove the untrusted satellites. I’m not sure how long the removal of data will take, nor what I am looking for within my logs. I don’t see any “Error” statements (I’ve downloaded the last 1k lines of logs, perhaps I need more?). Any additional links you can provide would be appreciated, as the size of the dataset hasn’t shrunk.

Where does the Multinode Dashboard fetch the data for node capacity? I found an issue where the HDD size on the multinode dashboard is shown as much less: for example, my 16.3 TB is shown correctly on the regular dashboard, but on the multinode dashboard it appears as 11.54 TB. Why is that? Take a look at the pictures below; they are both the same node, one shown via the direct link (14002), the other selected in the Multinode Dashboard. I also noticed that my 102 TB is now just 92 TB, so the Multinode Dashboard seems to miscalculate the drive size.

[screenshot: node6multi]
[screenshot: node6orig]


I restarted the node, and suddenly it started to show 600 GB of free space.
But in reality:

[Screenshot 2024-06-19 180024]

it started actively taking in new data. Is it possible to start something that will bring it back to reality?


You circled the average and the current value. On the 1st you had 1 TB and now 6.75 TB, so 3.22 TB on average sounds about right. If you’re wondering about the graph, there are a lot of threads about it here already. It’s just about the satellites not reporting back in time.
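As a rough back-of-the-envelope check (my own assumption of roughly linear growth, not necessarily how the dashboard computes it): the mean of a value growing evenly from 1 TB to 6.75 TB is about (1 TB + 6.75 TB) / 2 ≈ 3.9 TB, and the days the satellites haven’t fully reported yet presumably pull the displayed average below that, so 3.22 TB is plausible.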

This needs to be fixed as well: if the system reports less than 100 GB or so of free space, the node should not continue receiving data, even if its own accounting says something else.


I have been using the command for 4 days now, but the free space does not increase. I had to limit the allocated space to 9 TB because Storj just fills the hard drive without any limit. Any other ideas how I can delete the used space that is not shown in the dashboard?

@Alexey

Do you know how to fix my issue?