Hello, I have updated to version 1.3.3 and everything has been OK.
But today while reviewing my dashboard, an error appeared in the log. I have read that this error can sometimes be fixed by vacuuming the database. How is that done?
I noticed that the disk usage only updates twice per day on my node; I don't know if that is intended. On some days I saw a big mismatch between remaining disk and disk usage.
2020-05-03T01:00:43.841Z ERROR piecestore failed to add order {"error": "ordersdb error: database is locked", "errorVerbose": "ordersdb error: database is locked\n\tstorj.io/storj/storagenode/storagenodedb.(*ordersDB).Enqueue…
I restarted the node and it seems to be running just fine now, aside from the dashboard showing wrong data. I wonder if that is only the local representation, or am I losing income or something?
I'm seeing a similar disk-space-used graph issue on one of my nodes myself. I had no downtime.
I didn't find any "database locked" records in the logs, though.
Here’s the procedure I use to vacuum and check the databases:
1. Stop the node
2. Vacuum the dbs
3. Run integrity_check on the dbs
4. Restart the node
Here's a bash script which should do that for you. Please change the database directory to reflect where yours is on your system.
#!/bin/bash
# Stop the node first so nothing writes to the databases while they are vacuumed
docker stop -t 300 storagenode &&
dbs=$(ls /opt/storj/storage/*.db)
c1="VACUUM;"
c2="PRAGMA integrity_check;"

# Compact each database
for i in $dbs
do
    sqlite3 "$i" "$c1"
done

# Then check the integrity of each database
for i in $dbs
do
    sqlite3 "$i" "$c2"
done

docker start storagenode
The output looks like this:
# ./vacuum-test.sh
storagenode
ok
ok
ok
ok
ok
ok
ok
ok
ok
ok
ok
ok
ok
storagenode
If it works for you, and you have a low-spec node, it might be useful to run the script once a week.
You can automate that task using cron:
crontab -e
Then add the following line (changing the script location as appropriate) to run it every Sunday at 6:30:
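Assuming the script was saved as /opt/storj/vacuum-test.sh (a hypothetical path; substitute wherever you put yours), the crontab entry could look like this:

```shell
# min hour day-of-month month day-of-week (0 = Sunday)  command
30 6 * * 0 /opt/storj/vacuum-test.sh
```

The five fields read "minute 30, hour 6, any day of month, any month, day-of-week 0", i.e. every Sunday at 6:30.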
I have been suspended on several nodes, and I did indeed find "database locked" records in my logs.
I ran your script less than two hours ago and, since then, I have again received a few dozen "database locked" errors, though perhaps far fewer than before I ran it.
Anyway, my dashboard still shows that I am suspended on a couple of nodes and I would like to know whether running this script has produced the intended effect.
How long does it usually take for a node to move from suspended mode to regular operation? In other words, what is the process undertaken by satellites to check that a node can revert to regular operation?
As the situation was getting worse (I eventually got suspended on all satellites), I unmounted the storj partition, converted it from ext3 to ext4 (I don't know whether it really makes a difference), ran e2fsck -fD /dev/sdX, ran the VACUUM/PRAGMA script again, and now the node seems to be working perfectly again. Hopefully this will last for a bit!
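In case it helps others, here is a rough sketch of that ext3-to-ext4 conversion, demonstrated on a scratch image file so nothing real is touched. For a real conversion you would run the same tune2fs/e2fsck commands against your unmounted partition (e.g. /dev/sdX); the image path and size here are purely illustrative.

```shell
# Make a throwaway ext3 filesystem in a file instead of a real partition
truncate -s 64M /tmp/ext3-demo.img
mke2fs -F -q -t ext3 /tmp/ext3-demo.img

# Enable the main ext4 features on the existing ext3 filesystem
tune2fs -O extents,uninit_bg,dir_index /tmp/ext3-demo.img

# A full fsck is required after changing features; -D also optimizes directories
# (e2fsck exits with status 1 when it fixed something, which is expected here)
e2fsck -f -y -D /tmp/ext3-demo.img

# Confirm the ext4 features are now enabled
tune2fs -l /tmp/ext3-demo.img | grep 'Filesystem features'
```

Note that converting in place only applies the new features to the filesystem metadata; existing files keep their old block mapping until rewritten.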
EXT4 has numerous improvements over EXT3, including online defragmentation and support for larger filesystems. EXT4 is much more appropriate for SNO purposes than EXT3, so your change was probably necessary.
without doing anything special.
Maybe it's a write-contention problem, when storj writes to the database at the same time.
I hope you will find a solution to correct it.
If I need to do something, tell me.
Have a nice day.
Yeah, I also get that, generally when the node is doing cleanup or just when booting…
Might be some timeout that is set a bit too low, if I were to hazard an uneducated guess.
For my part it happens quite often.
Today, for example, between 2020-05-09T00:00 and 2020-05-09T18:41 it happened 884 times for the satellite
118UWpMCHzs6CvSgWd9BfFVjw5K9pZbJjkfZJexMtSkmKxvvAW
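For anyone wanting to reproduce that kind of count, a simple grep over the node log works. The sample log lines below are illustrative stand-ins; the real input would come from your node's log (e.g. `docker logs storagenode 2>&1`, or wherever your log file lives).

```shell
# Illustrative sample of node log lines (real input would be your node log)
cat > /tmp/node-sample.log <<'EOF'
2020-05-09T00:00:01Z ERROR piecestore failed to add order {"error": "ordersdb error: database is locked"}
2020-05-09T00:10:44Z INFO piecestore uploaded
2020-05-09T18:41:02Z ERROR piecestore failed to add order {"error": "ordersdb error: database is locked"}
EOF

# Count how often the error occurred in the period covered by the log
grep -c "database is locked" /tmp/node-sample.log   # prints 2
```

Adding a further `grep` for a satellite ID before the count would narrow it to one satellite.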
I'm having the same problems regarding the "database is locked" error and tried @anon27637763's vacuum script.
I seem to be missing something trivial, as I'm getting this error:
sudo sh vacuumtest.sh
storagenode
: not foundsh: 1: vacuumtest.sh:
vacuumtest.sh: 6: vacuumtest.sh: Syntax error: word unexpected (expecting “do”)
This is the .sh I'm running:
docker stop -t 300 storagenode &&
dbs=$(ls /mnt/storj8/storage/*.db)
c1="VACUUM;"
c2="PRAGMA integrity_check;"
for i in $dbs
do
sqlite3 $i "$c1"
done
for i in $dbs
do
sqlite3 $i "$c2"
done
docker start storagenode
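For what it's worth, that particular "Syntax error: word unexpected (expecting do)" message from sh is characteristically caused by the script file having Windows (CRLF) line endings, for example after editing it on Windows or pasting through certain editors; that is only a guess about this case, but it is easy to rule out. The snippet below reproduces the failure on a throwaway demo script and then fixes it (the /tmp path and file contents are just for illustration; on the real file you would run the same `sed` on vacuumtest.sh):

```shell
# Reproduce the failure: a loop script saved with Windows (CRLF) line endings
printf 'for i in a b\r\ndo\r\necho $i\r\ndone\r\n' > /tmp/crlf-demo.sh
sh /tmp/crlf-demo.sh || echo "fails: word unexpected (expecting do)"

# Strip the carriage returns and the same script runs fine
sed -i 's/\r$//' /tmp/crlf-demo.sh
sh /tmp/crlf-demo.sh
```

Running the file with `bash vacuumtest.sh` instead of `sh` is also worth trying, since `sudo sh` ignores the script's shebang line.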