Node restart reset used disk space info on dashboard

This is my first node ever. I had to restart it, and its used space went back to 3 GB after previously having 40 GB.

I searched the forum for this issue, and it seems to be common. However, I couldn’t determine whether it fixes itself or whether I need to do something to resolve it. At the moment the node has been up for 16 hours, and the dashboard does not seem to have recovered that info, only the new data.

Another thing I find odd is that I was vetted by two satellites within 48 hours of starting the node.

Best regards.

You don’t need to do anything. Keep the node online; it’s a pretty hands-off deal. The numbers in the dashboard don’t affect anything (data storage or payouts); it’s a purely cosmetic feature.

I was wondering whether it is a back-end coding mistake by the devs or not.

The dashboard should track the disk usage of all time, not just the current node session’s disk usage. Unless it was purposely designed like this, which is a feature I dislike.

It does, and it works very well. But the data is stored in SQLite databases. If the machine is not gracefully shut down, that state may be lost. And if the database is used over the Docker mount on Windows, all integrity guarantees are out of the window (pun not intended).

So you cannot rely on the information there unless you ensure the stability of your mounts (which, again, you cannot do across kernel boundaries, and Docker on Windows runs a separate kernel).

If you want to avoid the issue you can either run the node natively, or place the databases inside the container (which will break its ephemeral nature, so not ideal). Or don’t use Windows; that is the best solution.

The node still tracks actual available disk space so it won’t overshoot the quota.

The data may have been deleted during your downtime, or because of your downtime. Uptime is key to success.

The files are still there; I’ve checked manually.

I use Debian 13 / Docker to run the node. It wasn’t shut down abruptly; I had to restart it due to a QUIC misconfiguration. That is when it reset for some reason, but the files were kept there.

I manage the server via SSH. I was using a VPN, but it was causing issues with QUIC, so I had to remove it.

So the host OS is Debian and Docker runs there? How is the mount where the databases are located configured?

Restarting a node should have no ill effects on anything. It happens on every update anyway.

You can also throw away Docker and run the node directly. It’s a Go binary with no dependencies; Docker does not buy you anything but headache.

You can ignore the QUIC misconfiguration message. Or you can specify the interface explicitly and keep the port constant across forwards if you want to try to make it work, but I would not bother.

This is usually related to databases.
So, please check and fix the corrupted ones:

Or you may re-create the corrupted ones:

If all databases are OK, then you may rescan pieces following these steps:
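The check step above can be sketched roughly like this, assuming the `sqlite3` CLI is installed and the databases live in the node’s storage directory (the path below is an example; stop the node first so the files aren’t being written to):

```shell
# Sketch: run SQLite's built-in integrity check on every node database.
# /mnt/storagenode/storage is an assumed path; adjust to your setup.
for db in /mnt/storagenode/storage/*.db; do
  printf '%s: ' "$db"
  sqlite3 "$db" 'PRAGMA integrity_check;'   # a healthy database prints "ok"
done
```

Any database that prints something other than `ok` is a candidate for the re-create step.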

If you use a hashstore backend, you do not need to do anything; it will correct itself after a while.

So will piecestore, won’t it? Once the filewalker does a scan.

If the database is corrupted - unlikely. As far as I know, hashstore works a little differently; I’m not sure that it uses the same databases to store stats as piecestore does.

You are the man! Glad we have you here. Is there any Discord server or some other way to chat with the community in real time?

Just to clarify: I verified the *.db files for corruption, and they were all OK, no errors at all. I proceeded with the steps @Alexey provided, and that solved the issue.

I’ve solved the issue with the steps @Alexey provided. Still, how can I run the storage node as a system service instead of Docker? I can’t find any documentation about it at all; then again, I’m a bit blind, so yeah.

Thank you anyways.

Is the forum not fast enough? People answer when they have spare time. Discord is ill-suited for this.

It’s important to understand that that’s in no way a solution; that’s damage control. The problem is still unsolved. It happened once, it will happen again. The solution would be preventing the databases from getting corrupted.

To understand why they are getting corrupted, I still need an answer to this question:

Or you can read this, and see if anything applies to your use case: How To Corrupt An SQLite Database File

You can look up how the official Storj container does it. Or you can read `storagenode --help` and construct a .service file yourself. I’ve done this for rc.d on FreeBSD here: GitHub - arrogantrabbit/freebsd_storj_installer: Installer script for Storj on FreeBSD. You can adapt the command lines from there too.
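For example, a minimal systemd unit could look something like the following. This is a sketch, not an official unit file: the binary path, the user, and the config directory are assumptions you would adjust to your own layout.

```ini
# /etc/systemd/system/storagenode.service (sketch; paths and user are assumptions)
[Unit]
Description=Storj storage node
After=network-online.target

[Service]
User=storj
ExecStart=/usr/local/bin/storagenode run --config-dir /home/storj/.local/share/storagenode
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

After saving it, `sudo systemctl daemon-reload && sudo systemctl enable --now storagenode` would start the node and keep it running across reboots.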

Can you explain in more detail what you mean?

I have it “perma” mounted with fstab; I don’t know if this is what you wanted to know.

The forum is good enough; it’s just that there is usually a Discord server, hence my question.

I figured that out. I have no clue how to find the reason for it happening. I have looked into the logs, but there are no errors related to that matter whatsoever.

Please show the result of this command:

df --si -T

Also, did you move the databases to a different drive?

I did this:

Which happens to be on the OS drive.

So, this is a local drive, perhaps USB, and the databases are on the same disk.
So perhaps you have had “database is locked” errors, which may result in an inability to update the database with the actual stats.
It could recover by itself after the next update (when the node is automatically restarted and the filewalker is triggered), or, as you did, using my suggestion. This database is a cache, so it’s not a big deal to recreate it.

Not that I have noticed. I use this command to check: sudo docker logs -f storagenode | grep -i "Error"
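One thing worth noting: “database is locked” messages are typically logged at WARN level, so a grep for “Error” alone can miss them. A broadened filter, shown here as a sketch against sample log lines (against a real node you would pipe `docker logs storagenode 2>&1` into the same grep):

```shell
# Sketch: match both ERROR entries and "database is locked" warnings.
# Sample lines stand in for real node output.
printf '%s\n' \
  '2024-01-01T00:00:01Z INFO piecestore upload started' \
  '2024-01-01T00:00:02Z WARN console:service database is locked' \
  '2024-01-01T00:00:03Z ERROR piecestore upload failed' \
  | grep -iE 'error|database is locked'
```

This prints the WARN and ERROR lines while dropping the INFO noise.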

As for a USB drive, I’m not using one; it must be a partition created automatically for some reason.

If this happens to occur again, I will probably create a bash script to automate the process.

Until now everything seems to be OK and running.

It’s not required. You may just allow the node to continue and eventually it should catch up with the usage.

Since you didn’t redirect the logs to a file, they may be lost when you re-create the container, or removed due to the default 100MB limit of the Docker local log driver.
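One way to persist them is to have the node write its logs to a file itself. To my understanding this is the `log.output` option in config.yaml; the path below is an example assuming the mounted config directory, so the file survives container re-creation:

```yaml
# config.yaml (sketch): write logs to a file in the mounted config directory
log.output: "/app/config/node.log"
```

Restart the container after changing the option, and rotate the file yourself (e.g. with logrotate) so it doesn’t grow unbounded.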