After this update I have some zeros.

These commands were entered in the shell.
See the screenshot.

Maybe there’s something different about your version. I have SQLite 3.33.0 running on Debian GNU/Linux.

What’s the output of .help?

Does it include .mode in the list?

If .mode is in the help list, is “insert” included in:

.help mode


Here’s my output:

sqlite> .help mode
.mode MODE ?TABLE?       Set output mode
   MODE is one of:
     ascii     Columns/rows delimited by 0x1F and 0x1E
     box       Tables using unicode box-drawing characters
     csv       Comma-separated values
     column    Output in columns.  (See .width)
     html      HTML <table> code
     insert    SQL insert statements for TABLE
     json      Results in a JSON array
     line      One value per line
     list      Values delimited by "|"
     markdown  Markdown table format
     quote     Escape answers as for SQL
     table     ASCII-art table
     tabs      Tab-separated values
     tcl       TCL list elements

lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 18.04.4 LTS
Release: 18.04
Codename: bionic

sqlite3
SQLite version 3.22.0 2018-01-22 18:45:57
Enter ".help" for usage hints.
Connected to a transient in-memory database.
Use ".open FILENAME" to reopen on a persistent database.
sqlite> .help mode
.archive ...             Manage SQL archives: ".archive --help" for details
.auth ON|OFF             Show authorizer callbacks
.backup ?DB? FILE        Backup DB (default "main") to FILE
.bail on|off             Stop after hitting an error.  Default OFF
.binary on|off           Turn binary output on or off.  Default OFF
.cd DIRECTORY            Change the working directory to DIRECTORY
.changes on|off          Show number of rows changed by SQL
.check GLOB              Fail if output since .testcase does not match
.clone NEWDB             Clone data into NEWDB from the existing database
.databases               List names and files of attached databases
.dbinfo ?DB?             Show status information about the database
.dump ?TABLE? ...        Dump the database in an SQL text format
                           If TABLE specified, only dump tables matching
                           LIKE pattern TABLE.
.echo on|off             Turn command echo on or off
.eqp on|off|full         Enable or disable automatic EXPLAIN QUERY PLAN
.excel                   Display the output of next command in a spreadsheet
.exit                    Exit this program
.expert                  EXPERIMENTAL. Suggest indexes for specified queries
.fullschema ?--indent?   Show schema and the content of sqlite_stat tables
.headers on|off          Turn display of headers on or off
.help                    Show this message
.import FILE TABLE       Import data from FILE into TABLE
.imposter INDEX TABLE    Create imposter table TABLE on index INDEX
.indexes ?TABLE?         Show names of all indexes
                           If TABLE specified, only show indexes for tables
                           matching LIKE pattern TABLE.
.limit ?LIMIT? ?VAL?     Display or change the value of an SQLITE_LIMIT
.lint OPTIONS            Report potential schema issues. Options:
                           fkey-indexes     Find missing foreign key indexes
.load FILE ?ENTRY?       Load an extension library
.log FILE|off            Turn logging on or off.  FILE can be stderr/stdout
.mode MODE ?TABLE?       Set output mode where MODE is one of:
                           ascii    Columns/rows delimited by 0x1F and 0x1E
                           csv      Comma-separated values
                           column   Left-aligned columns.  (See .width)
                           html     HTML <table> code
                           insert   SQL insert statements for TABLE
                           line     One value per line
                           list     Values delimited by "|"
                           quote    Escape answers as for SQL
                           tabs     Tab-separated values
                           tcl      TCL list elements
.nullvalue STRING        Use STRING in place of NULL values
.once (-e|-x|FILE)       Output for the next SQL command only to FILE
                           or invoke system text editor (-e) or spreadsheet (-x)
                           on the output.
.open ?OPTIONS? ?FILE?   Close existing database and reopen FILE
                           The --new option starts with an empty file
.output ?FILE?           Send output to FILE or stdout
.print STRING...         Print literal STRING
.prompt MAIN CONTINUE    Replace the standard prompts
.quit                    Exit this program
.read FILENAME           Execute SQL in FILENAME
.restore ?DB? FILE       Restore content of DB (default "main") from FILE
.save FILE               Write in-memory database into FILE
.scanstats on|off        Turn sqlite3_stmt_scanstatus() metrics on or off
.schema ?PATTERN?        Show the CREATE statements matching PATTERN
                           Add --indent for pretty-printing
.selftest ?--init?       Run tests defined in the SELFTEST table
.separator COL ?ROW?     Change the column separator and optionally the row
                           separator for both the output mode and .import
.session CMD ...         Create or control sessions
.sha3sum ?OPTIONS...?    Compute a SHA3 hash of database content
.shell CMD ARGS...       Run CMD ARGS... in a system shell
.show                    Show the current values for various settings
.stats ?on|off?          Show stats or turn stats on or off
.system CMD ARGS...      Run CMD ARGS... in a system shell
.tables ?TABLE?          List names of tables
                           If TABLE specified, only list tables matching
                           LIKE pattern TABLE.
.testcase NAME           Begin redirecting output to 'testcase-out.txt'
.timeout MS              Try opening locked tables for MS milliseconds
.timer on|off            Turn SQL timer on or off
.trace FILE|off          Output each SQL statement as it is run
.vfsinfo ?AUX?           Information about the top-level VFS
.vfslist                 List all available VFSes
.vfsname ?AUX?           Print the name of the VFS stack
.width NUM1 NUM2 ...     Set column widths for "column" mode
                           Negative values right-justify
sqlite>

But it looks like my node is running and is sending and receiving data; it just does not show these values in the dashboard. So maybe this will not get it disqualified?

This will not lead to disqualification; only the stat is missing.
sqlite3 should be version 3.25.2 or newer.
The weird thing is that your storage folder is not writable.

Please try the docker version rather than the local one (because the installed version of sqlite3 is too old).
If you have a lot of free RAM, you can use tmpfs instead of the disk for all operations (you need free RAM of 2x the size of the db; see the "Used_serial.db malformed" thread for example). But this command uses only the disk with the data:

sudo docker run --rm -it --mount type=bind,source=/home/usr/storj/storage/,destination=/storage sstc/sqlite3 sh
/data#
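If you do want the tmpfs route mentioned above, a sketch of the idea is below. The `tmpfs-size` value and the choice of `/data` as the temporary mount point are assumptions; size it to at least 2x your database.

```shell
# Hypothetical variant of the command above that adds a tmpfs mount
# for the temporary dump files (tmpfs-size is an assumption, adjust it).
sudo docker run --rm -it \
  --mount type=bind,source=/home/usr/storj/storage/,destination=/storage \
  --mount type=tmpfs,destination=/data,tmpfs-size=4g \
  sstc/sqlite3 sh
```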

Then proceed from step 10.

If storage is still not writable, then please check your disk with fsck in maintenance mode; it seems something is wrong with it.

OMG… :smiley: If I had known that last night, I would have gone to sleep instead of having all that terrible stress. I could even have lived with that non-working graph :slight_smile:

Will it help if I run an Ubuntu update/upgrade? Maybe then I will have the latest sqlite3 version?

I don’t think the folder is not writable, as the node runs without any problems. Also, I can manually create files and folders inside "storage". I even created an empty file dump_all.sql, thinking it would help, but it did not.

Thank you @Alexey

2020-09-16T13:59:38.148Z INFO piecestore download started {"Piece ID": "CABA3VME34DLXLLNQU2XI3TLTSH6SKL34HZ6VLRDJ5ZZVLQIZKWA", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "GET_REPAIR"}
2020-09-16T13:59:38.276Z INFO piecestore downloaded {"Piece ID": "CABA3VME34DLXLLNQU2XI3TLTSH6SKL34HZ6VLRDJ5ZZVLQIZKWA", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "GET_REPAIR"}
2020-09-16T13:59:44.676Z INFO piecestore download started {"Piece ID": "UNVR7OAJ6K2755NQZEFTTEHLVZXX2I6CBWZ4RLP2AFXW55LRIFFA", "Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Action": "GET_REPAIR"}
2020-09-16T13:59:50.922Z INFO piecestore downloaded {"Piece ID": "UNVR7OAJ6K2755NQZEFTTEHLVZXX2I6CBWZ4RLP2AFXW55LRIFFA", "Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Action": "GET_REPAIR"}
2020-09-16T13:59:51.656Z INFO piecestore download started {"Piece ID": "ETHPCT3I4SASLIKXKYAWDY7A6WLK6FX2KQWFONCZCTIP2AS2SCPQ", "Satellite ID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE", "Action": "GET_REPAIR"}
2020-09-16T13:59:52.248Z INFO piecestore download started {"Piece ID": "EUONHNFDQPCQLG42S3UR3KVTJONGREIADOOHEJOQDFBL6VU4W5VQ", "Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Action": "GET"}
2020-09-16T13:59:52.284Z INFO piecestore downloaded {"Piece ID": "ETHPCT3I4SASLIKXKYAWDY7A6WLK6FX2KQWFONCZCTIP2AS2SCPQ", "Satellite ID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE", "Action": "GET_REPAIR"}
2020-09-16T13:59:52.659Z INFO piecestore download started {"Piece ID": "GPPMFNQKSRD2KEQDXYJEJ5MJOTEIHFOJMUHYC7ZYV726A3SELKQA", "Satellite ID": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6", "Action": "GET"}
2020-09-16T13:59:52.967Z INFO piecestore downloaded {"Piece ID": "EUONHNFDQPCQLG42S3UR3KVTJONGREIADOOHEJOQDFBL6VU4W5VQ", "Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Action": "GET"}
2020-09-16T13:59:54.208Z INFO piecedeleter deleted {"Satellite ID": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6", "Piece ID": "R3FKUBYP7XX3FY2A5NUH5YWW2FS5UZVENFGARB2T6TVHDVBUUH4A"}
2020-09-16T13:59:56.969Z INFO piecestore download started {"Piece ID": "6OF4O5KO6R72VS4RLSNCEEA2V674DXUTFO67WCOOQQUZWJFTLB2A", "Satellite ID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE", "Action": "GET_REPAIR"}
2020-09-16T13:59:57.848Z INFO piecestore downloaded {"Piece ID": "6OF4O5KO6R72VS4RLSNCEEA2V674DXUTFO67WCOOQQUZWJFTLB2A", "Satellite ID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE", "Action": "GET_REPAIR"}
2020-09-16T13:59:59.517Z INFO piecestore download started {"Piece ID": "ANDHPEXPLE3HEAKIZLVYOQ2Z6ISGVHX44MWRZ6R5OA5B7V5KH4UA", "Satellite ID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE", "Action": "GET_REPAIR"}
2020-09-16T14:00:00.105Z INFO piecestore downloaded {"Piece ID": "ANDHPEXPLE3HEAKIZLVYOQ2Z6ISGVHX44MWRZ6R5OA5B7V5KH4UA", "Satellite ID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE", "Action": "GET_REPAIR"}
2020-09-16T14:00:02.393Z INFO piecestore download started {"Piece ID": "PUATCULP7AZ75ZVTRRDODQMFTVPCFVU7DYCUEFB2UOEGPSZMLPTQ", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "GET_REPAIR"}
2020-09-16T14:00:02.639Z INFO piecestore downloaded {"Piece ID": "PUATCULP7AZ75ZVTRRDODQMFTVPCFVU7DYCUEFB2UOEGPSZMLPTQ", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "GET_REPAIR"}
2020-09-16T14:00:02.901Z INFO piecestore download started {"Piece ID": "KVIUTZAOXDBGB336TBGYKTMPR5LXR76EMS2ERBA6CFVRPWDGOKLQ", "Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Action": "GET"}
2020-09-16T14:00:03.614Z INFO piecestore downloaded {"Piece ID": "KVIUTZAOXDBGB336TBGYKTMPR5LXR76EMS2ERBA6CFVRPWDGOKLQ", "Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Action": "GET"}
2020-09-16T14:00:06.578Z INFO piecestore download started {"Piece ID": "AZK2K7CWDAHJKNDOMSR2ONIN3LWMM6YHF3IAJVJZXOJ4FSZ2KZPA", "Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Action": "GET"}
2020-09-16T14:00:08.056Z INFO piecestore downloaded {"Piece ID": "AZK2K7CWDAHJKNDOMSR2ONIN3LWMM6YHF3IAJVJZXOJ4FSZ2KZPA", "Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Action": "GET"}
2020-09-16T14:00:17.004Z INFO piecestore downloaded {"Piece ID": "GPPMFNQKSRD2KEQDXYJEJ5MJOTEIHFOJMUHYC7ZYV726A3SELKQA", "Satellite ID": "121RTSDpyNZVcEU84Ticf2L1ntiu

I do not see any uploads in your logs, so I am a little worried about the data folder not being writable from the docker container.
Can you create a file from the running sqlite container?

This node is full, so maybe it’s not accepting more data; could that be why?
But I will of course try to create a file from the sqlite container. I just have no idea how to do that :slight_smile:

Oh, if it’s full, then you need to mount another path with enough space into the container (2x the size of the database being repaired).
So your sqlite container will have two mounts: one for the storage and one for a temporary folder for unloading the data.
Something like this:

sudo docker run --rm -it --mount type=bind,source=/home/usr/storj/storage/,destination=/data --mount type=bind,source=/tmp/,destination=/storage sstc/sqlite3 sh
/data# 

Then you can create the dump like this:

/data# sqlite3 bandwidth.db
sqlite> .mode insert
sqlite> .output /storage/dump_all.sql
sqlite> .dump
sqlite> .exit
/data# cat /storage/dump_all.sql | grep -v TRANSACTION | grep -v ROLLBACK | grep -v COMMIT >/storage/dump_all_notrans.sql
/data# rm bandwidth.db
/data# sqlite3 bandwidth.db ".read /storage/dump_all_notrans.sql"

As you can see, while you work with the databases they are available in the /data folder (it’s the current one), and the temporary directory is /storage.
You can fix all the other databases the same way.
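After a database has been rebuilt this way, it is worth a quick sanity check from the same container before restarting the node. `PRAGMA integrity_check` is standard SQLite and prints `ok` for a healthy file; the `bandwidth.db` path and the table name below assume the procedure above.

```shell
# Sanity-check the rebuilt database (run from the /data folder in the container).
sqlite3 bandwidth.db "PRAGMA integrity_check;"
# If the rebuild worked, the table should also be queryable:
sqlite3 bandwidth.db "SELECT count(*) FROM bandwidth_usage;"
```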

It’s full in terms of the Storj node, but it still has about 300 GB of free space, while bandwidth.db is only 19927040 bytes in size, so there should be plenty of space for it.

If so, it should be writable from the container. Otherwise something is really wrong with either the disk or docker.

This is RAID6. Yesterday I unmounted it and ran fsck as well as e2fsck; no problems were found.

Please, try this check:

sudo docker run --rm -it --mount type=bind,source=/home/usr/storj/storage/,destination=/storage sstc/sqlite3 sh
/data# echo test > /storage/test.txt
/data# cat /storage/test.txt
/data# rm /storage/test.txt
/data# exit

Thank you, it seems all OK:

docker run --rm -it --mount type=bind,source=/home/usr/storj/storage/,destination=/storage sstc/sqlite3 sh
/data # echo test > /storage/test.txt
/data # cat /storage/test.txt
test
/data # rm /storage/test.txt
/data # exit
node@usr:~$

I even tried a second time, skipping rm, and I can see that the file (test.txt) was created inside the storage folder.

Then .output in sqlite should work too.
OK. Perhaps the database is unrecoverable. Then I would suggest creating an empty one:

  1. Stop the storagenode
  2. Remove the bandwidth.db
  3. Execute with the sqlite3 container:
sqlite3 /storage/bandwidth.db
CREATE TABLE bandwidth_usage (
                                                satellite_id  BLOB    NOT NULL,
                                                action        INTEGER NOT NULL,
                                                amount        BIGINT  NOT NULL,
                                                created_at    TIMESTAMP NOT NULL
                                        );
CREATE INDEX idx_bandwidth_usage_satellite ON bandwidth_usage(satellite_id);
CREATE INDEX idx_bandwidth_usage_created   ON bandwidth_usage(created_at);
CREATE TABLE bandwidth_usage_rollups (
                                                                                interval_start  TIMESTAMP NOT NULL,
                                                                                satellite_id    BLOB    NOT NULL,
                                                                                action          INTEGER NOT NULL,
                                                                                amount          BIGINT  NOT NULL,
                                                                                PRIMARY KEY ( interval_start, satellite_id, action )
                                                                        );
.exit
  4. Start the storagenode
  5. Check your dashboard
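To avoid pasting the statements line by line, the whole schema in step 3 can also be fed to sqlite3 non-interactively with a here-document. This is just an equivalent sketch; the /storage path assumes the container mounts shown earlier.

```shell
# One-shot, non-interactive version of step 3 (same schema as above).
sqlite3 /storage/bandwidth.db <<'SQL'
CREATE TABLE bandwidth_usage (
    satellite_id  BLOB      NOT NULL,
    action        INTEGER   NOT NULL,
    amount        BIGINT    NOT NULL,
    created_at    TIMESTAMP NOT NULL
);
CREATE INDEX idx_bandwidth_usage_satellite ON bandwidth_usage(satellite_id);
CREATE INDEX idx_bandwidth_usage_created   ON bandwidth_usage(created_at);
CREATE TABLE bandwidth_usage_rollups (
    interval_start  TIMESTAMP NOT NULL,
    satellite_id    BLOB      NOT NULL,
    action          INTEGER   NOT NULL,
    amount          BIGINT    NOT NULL,
    PRIMARY KEY ( interval_start, satellite_id, action )
);
SQL
```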

Alexey, one more question.
After I open sqlite3, should I just copy-paste this as a single script, or should I paste each line separately?

CREATE TABLE bandwidth_usage (
satellite_id BLOB NOT NULL,
action INTEGER NOT NULL,
amount BIGINT NOT NULL,
created_at TIMESTAMP NOT NULL
);
CREATE INDEX idx_bandwidth_usage_satellite ON bandwidth_usage(satellite_id);
CREATE INDEX idx_bandwidth_usage_created ON bandwidth_usage(created_at);
CREATE TABLE bandwidth_usage_rollups (
interval_start TIMESTAMP NOT NULL,
satellite_id BLOB NOT NULL,
action INTEGER NOT NULL,
amount BIGINT NOT NULL,
PRIMARY KEY ( interval_start, satellite_id, action )
);


Just copy it (there is a little copy button in the upper-right corner of the code block) and paste it when you see the sqlite> prompt.


Thank you, but something did not work out:

sudo sqlite3
SQLite version 3.22.0 2018-01-22 18:45:57
Enter ".help" for usage hints.
Connected to a transient in-memory database.
Use ".open FILENAME" to reopen on a persistent database.
sqlite> sqlite3 /storage/bandwidth.db
   ...> CREATE TABLE bandwidth_usage (
   ...> satellite_id BLOB NOT NULL,
   ...> action INTEGER NOT NULL,
   ...> amount BIGINT NOT NULL,
   ...> created_at TIMESTAMP NOT NULL
   ...> );
Error: near "sqlite3": syntax error
sqlite> CREATE INDEX idx_bandwidth_usage_satellite ON bandwidth_usage(satellite_id);
Error: no such table: main.bandwidth_usage
sqlite> CREATE INDEX idx_bandwidth_usage_created ON bandwidth_usage(created_at);
Error: no such table: main.bandwidth_usage
sqlite> CREATE TABLE bandwidth_usage_rollups (
   ...> interval_start TIMESTAMP NOT NULL,
   ...> satellite_id BLOB NOT NULL,
   ...> action INTEGER NOT NULL,
   ...> amount BIGINT NOT NULL,
   ...> PRIMARY KEY ( interval_start, satellite_id, action )
   ...> );
sqlite> .exit
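The syntax error happens because `sqlite3 /storage/bandwidth.db` is a shell command, not SQL: running plain `sqlite3` first opened a transient in-memory database, and the file name then got swallowed into the first CREATE TABLE statement. A sketch of the intended sequence (the path assumes the container mounts from earlier):

```shell
# Open the target database file from the shell prompt, not from inside sqlite:
sqlite3 /storage/bandwidth.db
# The sqlite> prompt is now attached to bandwidth.db; paste the
# CREATE TABLE / CREATE INDEX statements there, then type .exit
```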

I’ve managed to deal with it another way.
I just navigated to the storage folder and ran all the commands from the manual without the /storage/ prefix. It looks like everything worked out; now I’m at this point:

sqlite3 bandwidthNEW.db ".read dump_all_notrans.sql"

It looks like it will take a long time, as the original db is about 19 GB. Currently I can see the new file growing, but after 30 min of work it is only about 1 GB in size.
I don’t want my node to be offline for the next ~8 hours, which is about how long it looks like it will take to recreate the db.

If my current bandwidth.db is corrupted, the storagenode probably can’t use it properly anyway. The temporary files dump_all.sql and dump_all_notrans.sql are not in use by the storagenode. So I’ve decided to leave the node running with the corrupted bandwidth.db while I create the new bandwidthNEW.db in the background. Once it is finished, I will stop the node and rename the new db to the original name. As I understand it, I will lose some stats data (whatever is being generated right now), but I will have a new and correctly working db. The main question: will it work that way? :slight_smile:
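The swap I have in mind, roughly, once the .read finishes (the container name `storagenode` and keeping the old file as a backup are my own assumptions):

```shell
# Rough plan for the swap, run from the storage folder.
docker stop -t 300 storagenode   # container name is an assumption
mv bandwidth.db bandwidth.db.bak # keep the corrupted file just in case
mv bandwidthNEW.db bandwidth.db
docker start storagenode
```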

@Alexey thank you !!!

I did it !!

Looks like it’s working now.
Somehow instead of 19 GB it’s now 18 GB, but it’s working :slight_smile:

Have a nice weekend !
