Fixing the error "database disk image is malformed" takes forever

I am trying to fix the error "database disk image is malformed" using the instructions from this page.
It is not my first time fixing this error, but this time the step:
sqlite3 /storage/bandwidth.db ".read /storage/dump_all_notrans.sql"
is taking a very long time: about an hour per 1MB, and the dump_all_notrans.sql file is about 2.76GB,
so it will take about 115 days; by that time my node will be disqualified.
Is there something I can do to speed up the process?
The previous time it only took minutes.
I have tried to do the steps on Windows instead of Linux and it is about the same speed.
My node holds about 15TB out of 20TB.

You can use tmpfs in Docker. See Used_serial.db malformed - #4 by Alexey

I am on Unraid, and the response when I run that command is:

docker run -it --rm --mount type=tmpfs,destination=/ramdisk,tmpfs-size=2G --mount type=bind,src="/mnt/user/File Transfers/demo/storj/",dst="/mnt/user/File Transfers/demo/storj/t" sqlite3 sh

Unable to find image 'sqlite3:latest' locally
docker: Error response from daemon: pull access denied for sqlite3, repository does not exist or may require 'docker login': denied: requested access to the resource is denied.
See 'docker run --help'.

You skipped half of the image name: it should be sstc/sqlite3, not just sqlite3. The destination in the mount also looks wrong. The destination should be a path inside the container's filesystem; there is no "/mnt/user/File Transfers/demo/storj/t" in the container, but it will be created and mounted there. So if that is intended, make sure to replace /data everywhere inside the container with "/mnt/user/File Transfers/demo/storj/t", and switch to that directory after your console is attached, with cd "/mnt/user/File Transfers/demo/storj/t"

Or, you can change dst="/mnt/user/File Transfers/demo/storj/t" to dst=/data in your docker command.
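A sketch of the full corrected command with that change applied (image name fixed to sstc/sqlite3, destination set to /data, host path copied from your failing command):

```shell
# Corrected image name and a /data destination inside the container;
# the tmpfs mount gives you a fast RAM disk at /ramdisk for the rebuild.
docker run -it --rm \
  --mount type=tmpfs,destination=/ramdisk,tmpfs-size=2G \
  --mount type=bind,src="/mnt/user/File Transfers/demo/storj/",dst=/data \
  sstc/sqlite3 sh
```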

Thanks, that worked, but now in the step sqlite3 /ramdisk/bandwidth.db ".read /ramdisk/bandwidth_dump_all_notrans.sql"

I get this error:
Error: near line 17081152: unrecognized token: "'20

This is an unfortunate case. Is the resulting database size zero?
If so, then your bandwidth database is most likely lost and you need to follow this guide:

How do I check the size from the /data prompt?

ls -lh /data/

OK, I did these steps:

docker run -it --rm --mount type=tmpfs,destination=/ramdisk,tmpfs-size=5G --mount type=bind,src="/mnt/user/File Transfers/demo/storj",dst=/data sstc/sqlite3 sh
/data # cp bandwidth.db /ramdisk/
/data # sqlite3 /ramdisk/bandwidth.db
SQLite version 3.34.1 2021-01-20 14:10:07
Enter ".help" for usage hints.
sqlite> .mode insert
sqlite> .output /ramdisk/bandwidth_dump_all.sql
sqlite> .dump
sqlite> .exit
/data # rm /ramdisk/bandwidth.db
/data # cat /ramdisk/bandwidth_dump_all.sql | grep -v TRANSACTION | grep -v ROLLBACK | grep -v COMMIT >/ramdisk/bandwidth_du
/data # rm /ramdisk/bandwidth_dump_all.sql
/data # sqlite3 /ramdisk/bandwidth.db ".read /ramdisk/bandwidth_dump_all_notrans.sql"
**Error: near line 17081152: unrecognized token: "'20"**
/data # mv bandwidth.db bandwidth.db.bak
/data # cp /ramdisk/bandwidth.db bandwidth.db
/data # exit

and in the end the file size is 1.1GB, but the original file size was 2.9GB.
- Can I use the new corrected bandwidth file?
- I had about 15TB of data in my node; does this mean I will lose more than half of it if I try to use the corrected bandwidth file?
- Can't I do something to skip over that error line?

This error suggests that the dump file has broken lines and cannot be imported as a whole. So 1.1GB is what you were able to save.

If you want to try to fix the broken dump, you would need much more scripting and digging. If you think it's worth your time to save the bandwidth history, you can proceed; otherwise it would be simpler to re-create this database, or to use the half-recovered database (some history will be available, some not). Most of the history data can be received again from the satellites during normal operation (except from shut-down ones like Stefan's satellite).

So, answering your questions:

Yes, you can try. Some historic statistics will be lost, though.

No! You will not lose any data, only historic stats.

You can edit it with the line editor sed (the file is too large to open in a text editor).

If you decide to continue trying to recover it, you can use this command to see what's wrong with that line (I added the next line too, in case there is a line break):

sed -n -e 17081152,17081153p /ramdisk/bandwidth_dump_all_notrans.sql
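As a small self-contained illustration of how that `sed -n M,Np` range print works (the sample file and line numbers here are made up for the demo):

```shell
# Demo on a tiny sample file: print only lines 2-3, the same way the
# command above prints lines 17081152-17081153 of the real dump.
printf 'line1\nline2\nline3\nline4\n' > /tmp/sed_demo.sql
sed -n -e 2,3p /tmp/sed_demo.sql
# prints:
# line2
# line3
```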

Then you could correct it by skipping this line or fixing it. But we need to see what's in this line, so please post it.
To avoid breaking the formatting in your post, I recommend putting the extracted lines between two new lines with three backticks, like this:

extracted line N 17081152
extracted line N 17081153

Here is the response:

INSERT INTO bandwidth_usage VALUES(X'f474535a19db00db4f8071a1be6c2551f4ded6a6e38f0818c68c68d000000000',5,2319360,'20

If I try to get the next line, like so:

sed -n -e 17081153,17081154p /ramdisk/bandwidth_dump_all_notrans.sql

I get nothing.

Here is the previous line (17081151):

INSERT INTO bandwidth_usage VALUES(X'f474535a19db00db4f8071a1be6c2551f4ded6a6e38f0818c68c68d000000000',5,768,'2021-10-01 23:16:28');

Can’t we remove that specific line?

Of course you can.

sed -i.bak '17081152d' /ramdisk/bandwidth_dump_all_notrans.sql
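To see what `sed -i.bak 'Nd'` does, here is the same edit on a tiny made-up file (contents and line number are just for illustration):

```shell
# Delete line 2 in place; sed keeps the original as a .bak backup,
# just like the command above drops the broken INSERT from the dump.
printf 'keep1\nbroken\nkeep2\n' > /tmp/del_demo.sql
sed -i.bak '2d' /tmp/del_demo.sql
cat /tmp/del_demo.sql       # only keep1 and keep2 remain
cat /tmp/del_demo.sql.bak   # untouched backup still has all three lines
```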

I have removed every db and started the node.
After it created the dbs, I stopped the node and copied all the dbs back
aside from bandwidth.db, then started the server again, and this is the result:

2021-11-24T09:35:58.405Z ERROR gracefulexit:chore error retrieving satellites. {“error”: “satellitesdb: context canceled”, “errorVerbose”: “satellitesdb: context canceled\n\*satellitesDB).ListGracefulExits.func1:152\n\*satellitesDB).ListGracefulExits:164\n\*service).ListPendingExits:89\n\*Chore).Run.func1:53\n\*Cycle).Run:92\n\*Chore).Run:50\n\*Group).Run.func2.1:87\n\truntime/pprof.Do:40\n\*Group).Run.func2:86\n\*Group).Go.func1:57”}

2021-11-24T09:35:58.422Z ERROR collector error during collecting pieces: {“error”: “context canceled”}

2021-11-24T09:35:58.550Z ERROR piecestore:cache error getting current used space: {“error”: “context canceled; context canceled; context canceled; context canceled; context canceled; context canceled; context canceled”, “errorVerbose”: “group:\n— context canceled\n— context canceled\n— context canceled\n— context canceled\n— context canceled\n— context canceled\n— context canceled”}

Error: piecestore monitor: error verifying location and/or readability of storage directory: open config/storage/storage-dir-verification: no such file or directory

Please check your data location and your docker run command; it looks like you have not only corrupted databases, but also corrupted data.
I would recommend stopping and removing the container, then checking and fixing disk errors first.

I don't have disk errors, but I had an unclean shutdown.
What can I do, other than start from zero?
Is there a way to find the corrupted files and remove them?

Which filesystem are you using on this disk? I hope it's not btrfs…

It is Unraid, and the filesystem is XFS.

This suggests either data loss or a wrong path.
Could you show what's in the folder "/mnt/user/File Transfers/demo/storj"?

ls -l "/mnt/user/File Transfers/demo/storj"

That folder is just used for all the fixing attempts,
but in the storj folder (/mnt/user/storj/storage):

blobs/ qstuylguhrn2ozjv4h2c6xpxykd622gtgurhql2k7k75wqaaaaaa/
piece_expiration.db* ukfu6bhbboxilvt7jrwlqk7y2tapb5d2r2tsmj2sjxvw5qaaaaaa/
piece_expiration.db-wal* v4weeab67sbgvnbwd5z7tweqsqqun7qox2agpbxy44mqqaaaaaaa/

If you are sure that this is the correct folder, then you can run the setup step again.
However, you will need to remove config.yaml from there before running the setup, otherwise it will fail.
The setup will create the missing file protector storage-dir-verification.
After that you will be able to run the storagenode. But there is a high chance that your node will be disqualified, because this file cannot disappear without a reason.
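A sketch of that setup step, assuming the standard storagenode Docker setup; the identity and storage paths below are placeholders — substitute your actual Unraid paths:

```shell
# One-time setup run: recreates the missing storage-dir-verification file.
# Both source= paths are examples only; use your real identity and storage dirs.
docker run --rm -e SETUP="true" \
  --mount type=bind,source="/mnt/user/appdata/storj/identity",destination=/app/identity \
  --mount type=bind,source="/mnt/user/storj",destination=/app/config \
  --name storagenode storjlabs/storagenode:latest
```

Remember to remove config.yaml from the storage directory first, as noted above, or this run will fail.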