Pieceinfo.db is not a database

I hit the message in the subject while attempting to restart my node. After reading the instructions found here, I tried the restart again and now I get the following:

2022-07-20T03:03:06.226352144Z 2022-07-20 03:03:06,225 INFO success: processes-exit-eventlistener entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2022-07-20T03:03:06.237853573Z 2022-07-20 03:03:06,235 INFO spawned: 'storagenode' with pid 47
2022-07-20T03:03:06.239348832Z 2022-07-20 03:03:06,238 INFO success: storagenode-updater entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2022-07-20T03:03:06.520026244Z 2022-07-20T03:03:06.518Z	INFO	Configuration loaded	{"Process": "storagenode", "Location": "/app/config/config.yaml"}
2022-07-20T03:03:06.524856615Z 2022-07-20T03:03:06.522Z	INFO	Operator email	{"Process": "storagenode", "Address": "galewis@yaddatech.com"}
2022-07-20T03:03:06.526995854Z 2022-07-20T03:03:06.523Z	INFO	Operator wallet	{"Process": "storagenode", "Address": "0x5a6F11Ab07A974463Ad0A839B004b40588e9B5Ec"}
2022-07-20T03:03:06.947141297Z 2022-07-20T03:03:06.945Z	INFO	Telemetry enabled	{"Process": "storagenode", "instance ID": "1SdoZv6L8E32agbFGRbrJt97TYwLek8kLBzuWNLMx1nAqKHYiW"}
2022-07-20T03:03:07.095021778Z 2022-07-20T03:03:07.093Z	INFO	db.migration	Database Version	{"Process": "storagenode", "version": 53}
2022-07-20T03:03:08.098332614Z 2022-07-20 03:03:08,097 INFO success: storagenode entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2022-07-20T03:03:08.220368641Z Error: Error during preflight check for storagenode databases: preflight: database "pieceinfo": expected schema does not match actual:   &dbschema.Schema{
2022-07-20T03:03:08.220856824Z - 	Tables: []*dbschema.Table{
2022-07-20T03:03:08.220915243Z - 		(
2022-07-20T03:03:08.220941369Z - 			s"""
2022-07-20T03:03:08.220967328Z - 			Name: pieceinfo_
2022-07-20T03:03:08.220992704Z - 			Columns:
2022-07-20T03:03:08.221017538Z - 				Name: deletion_failed_at
2022-07-20T03:03:08.221042664Z - 				Type: TIMESTAMP
2022-07-20T03:03:08.221067956Z - 				Nullable: true
2022-07-20T03:03:08.221092916Z - 				Default: ""
2022-07-20T03:03:08.221118375Z - 				Reference: nil
2022-07-20T03:03:08.221143542Z - 				Name: order_limit
2022-07-20T03:03:08.221168626Z - 				Type: BLOB
2022-07-20T03:03:08.221193669Z - 				Nullable: false
2022-07-20T03:03:08.221218586Z - 				Default: ""
2022-07-20T03:03:08.221244129Z - 				Reference: nil
2022-07-20T03:03:08.221269005Z - 				Name: piece_creation
2022-07-20T03:03:08.221293922Z - 				Type: TIMESTAMP
2022-07-20T03:03:08.221318673Z - 				Nullable: false
2022-07-20T03:03:08.221343674Z - 			... // 37 elided lines
2022-07-20T03:03:08.221368258Z - 			s"""
2022-07-20T03:03:08.221392967Z - 		),
2022-07-20T03:03:08.221417135Z - 	},
2022-07-20T03:03:08.221441386Z + 	Tables: nil,
2022-07-20T03:03:08.221465595Z - 	Indexes: []*dbschema.Index{
2022-07-20T03:03:08.221490262Z - 		s`Index<Table: pieceinfo_, Name: idx_pieceinfo__expiration, Columns: piece_expiration, Unique: false, Partial: "piece_expiration I`...,
2022-07-20T03:03:08.221650393Z - 		s`Index<Table: pieceinfo_, Name: pk_pieceinfo_, Columns: satellite_id piece_id, Unique: true, Partial: "">`,
2022-07-20T03:03:08.221679144Z - 	},
2022-07-20T03:03:08.221703644Z + 	Indexes:   nil,
2022-07-20T03:03:08.221728312Z   	Sequences: nil,
2022-07-20T03:03:08.221753063Z   }
2022-07-20T03:03:08.221776939Z 
2022-07-20T03:03:08.221800481Z 	storj.io/storj/storagenode/storagenodedb.(*DB).preflight:406
2022-07-20T03:03:08.221825065Z 	storj.io/storj/storagenode/storagenodedb.(*DB).Preflight:353
2022-07-20T03:03:08.221850024Z 	main.cmdRun:241
2022-07-20T03:03:08.221877650Z 	storj.io/private/process.cleanup.func1.4:372
2022-07-20T03:03:08.221902734Z 	storj.io/private/process.cleanup.func1:390
2022-07-20T03:03:08.221927235Z 	github.com/spf13/cobra.(*Command).execute:852
2022-07-20T03:03:08.221951611Z 	github.com/spf13/cobra.(*Command).ExecuteC:960
2022-07-20T03:03:08.221976029Z 	github.com/spf13/cobra.(*Command).Execute:897
2022-07-20T03:03:08.222000613Z 	storj.io/private/process.ExecWithCustomConfigAndLogger:93
2022-07-20T03:03:08.222025239Z 	main.main:479
2022-07-20T03:03:08.222049239Z 	runtime.main:255
2022-07-20T03:03:08.234150022Z 2022-07-20 03:03:08,233 INFO exited: storagenode (exit status 1; not expected)
2022-07-20T03:03:09.248765072Z 2022-07-20 03:03:09,246 INFO spawned: 'storagenode' with pid 56
2022-07-20T03:03:09.250357209Z 2022-07-20 03:03:09,249 WARN received SIGQUIT indicating exit request
2022-07-20T03:03:09.253381894Z 2022-07-20 03:03:09,251 INFO waiting for storagenode, processes-exit-eventlistener, storagenode-updater to die
2022-07-20T03:03:09.254971781Z 2022-07-20T03:03:09.252Z	INFO	Got a signal from the OS: "terminated"	{"Process": "storagenode-updater"}
2022-07-20T03:03:09.266795304Z 2022-07-20 03:03:09,265 INFO stopped: storagenode-updater (exit status 0)
2022-07-20T03:03:09.274084300Z 2022-07-20 03:03:09,273 INFO stopped: storagenode (terminated by SIGTERM)
2022-07-20T03:03:09.277699588Z 2022-07-20 03:03:09,276 INFO stopped: processes-exit-eventlistener (terminated by SIGTERM)

Not sure what this means or how to fix it.

I think I may have had the same issue, though I can't seem to find the topic now.
I stopped the node, then deleted all of the databases and the orders files.

That got it working again, but the consensus was that my fix was too harsh.

Hi @thebadcat
I think the correct article for you now is this one

I've moved my *.db files aside and attempted to restart, and now I'm hitting this error:

2022-07-20T13:32:15.249951689Z 2022-07-20T13:32:15.245Z	INFO	Public server started on [::]:28967	{"Process": "storagenode"}
2022-07-20T13:32:15.249994233Z 2022-07-20T13:32:15.245Z	INFO	Private server started on 127.0.0.1:7778	{"Process": "storagenode"}
2022-07-20T13:32:15.600015521Z 2022-07-20T13:32:15.595Z	INFO	trust	Scheduling next refresh	{"Process": "storagenode", "after": "4h11m54.52070686s"}
2022-07-20T13:32:15.603304048Z 2022-07-20T13:32:15.597Z	WARN	piecestore:monitor	Disk space is less than requested. Allocated space is	{"Process": "storagenode", "bytes": 418850607104}
2022-07-20T13:32:15.603672394Z 2022-07-20T13:32:15.598Z	ERROR	piecestore:monitor	Total disk space is less than required minimum	{"Process": "storagenode", "bytes": 500000000000}
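
For reference, "moved my *.db files" amounted to roughly the commands below; the container name and storage path are assumptions taken from the run command and df output further down, so adjust them to your setup:

# stop the node cleanly before touching the databases
docker stop -t 300 storagenode
# move the databases out of the way so the node recreates fresh ones on the next start
mkdir -p /storj/fs0/storage/backup-db
mv /storj/fs0/storage/*.db /storj/fs0/storage/backup-db/
# start the same container again
docker start storagenode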

This is my run command:

docker run -d --restart unless-stopped --stop-timeout 300 -p 28968:28967/tcp -p 28968:28967/udp -p 14002:14002 -e WALLET="0xXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX" -e EMAIL="xxxxxxx@xxxxxxxx.com" -e ADDRESS="storj.xxxxxxxx.com:28968" -e STORAGE="1.6TB" --mount type=bind,source="/home/storj/.local/share/storj/identity/storagenode",destination=/app/identity --mount type=bind,source="/storj/fs0",destination=/app/config --name storagenode storjlabs/storagenode:latest

And this is my /storj/fs0:

storj@storj-n1:~$ df /storj/fs0
Filesystem      1K-blocks       Used Available Use% Mounted on
/dev/sda1      1922728840 1416002984 409033796  78% /storj/fs0

All of that used space was, and still is, my STORJ data.

This is not a new node… it has been around for a few years so I’m not sure why it’s acting up now.

Since pieceinfo.db is empty now, the node cannot recognize the used space and checks only the free space, which is less than the required minimum of 500GB.
You can temporarily disable this check by reducing the value of the storage2.monitor.minimum-disk-space option in the config.yaml file, or by specifying it as a command line option after the image name in your docker run command, i.e. --storage2.monitor.minimum-disk-space=400GB
The node should then start and run a filewalker, which will fill up the database, and then the storagenode should recognize its used space.
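
As a concrete sketch (the config.yaml path follows from the /app/config mount in the run command above, and the exact value formatting may vary between node versions):

# option 1: edit config.yaml on the host (mounted from /storj/fs0)
storage2.monitor.minimum-disk-space: 400 GB

# option 2: append the flag after the image name in the docker run command
docker run ... storjlabs/storagenode:latest --storage2.monitor.minimum-disk-space=400GB

Once the used space is recognized again, the override can be removed.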

ok, thx Alexey,

That works… I’m back online… now here is the reason that I was recycling my node in the first place. I noticed this in my stats:

[dashboard screenshot]

And I verified on the command line with the following:

Storage Node Dashboard ( Node Version: v1.58.2 )

======================

ID     1SdoZv6L8E32agbFGRbrJt97TYwLek8kLBzuWNLMx1nAqKHYiW
Status ONLINE
Uptime 4m39s

                   Available          Used        Egress       Ingress
     Bandwidth           N/A     253.98 GB     116.41 GB     137.56 GB (since Jul 1)
          Disk       1.50 TB     101.83 GB
Internal 127.0.0.1:7778
External storj.yaddatech.com:28968

The dashboard says I have 1.5 TB available and 101.8 GB used, but I think it should be the other way around. Again, this is a df of the disk:

root@storj-n1:/storj/fs0/storage# df /storj/fs0/
Filesystem      1K-blocks       Used Available Use% Mounted on
/dev/sda1      1922728840 1414479612 410557168  78% /storj/fs0

This node has been running for almost a year, so those numbers just don't add up, and STORJ data is the only thing on this disk.

If the path to the data is correct (you don't have a second copy of the blobs folder anywhere on that disk), then you just need to wait while the filewalker scans all the pieces.
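
To rule out a stray second copy, something like the following will list every blobs directory on the disk (the path is assumed from the df output above, and it walks the whole disk, so it can take a while); normally there should be exactly one result, under the storage folder:

find /storj/fs0 -type d -name blobs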

ok, thx…

btw, how long should this filewalker take since there is somewhere around 1.5G of data?

It depends on the type of your disk. If the disk is slow (SMR, or overloaded with other tasks), it could take days. And I'm sure you mean 1.5TB, not 1.5GB.

Did you check your storage location? Is it correct?

I observe that the file walker takes around 12 minutes per terabyte of data, on a healthy non-SMR drive and ext4 defaults.

A good way to estimate the running time of the file walker process is to run any tool that measures the disk space used by the storage directory, e.g. du; it performs roughly the same disk operations.
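
For example, with the storage path used earlier in this thread:

# walks the same piece files the filewalker would; the elapsed time gives a rough estimate
time du -s /storj/fs0/storage/blobs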

There is a suggestion to add some log entries that would indicate when the file walker process starts and ends. But so far, this is all just approximate.

It would be awesome to get that information from the API, so we could write scripts for a staged startup when running multiple nodes on the same pool.
Maybe it's also possible to put an indication on the node dashboard that the filewalker is currently running?
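
A rough sketch of what such a staged startup could look like today, using the existing dashboard API rather than a dedicated filewalker endpoint; the port comes from the run command above, while the endpoint path and JSON field name are assumptions that may differ between versions:

#!/bin/sh
# Wait until the used space reported by the node stops changing, as a crude proxy
# for "the filewalker has finished", before starting the next node on the pool.
prev=-1
while :; do
  used=$(curl -s http://localhost:14002/api/sno | jq -r '.diskSpace.used')   # assumed field name
  [ "$used" = "$prev" ] && break   # no change since the last poll; assume the scan has settled
  prev=$used
  sleep 300                        # check every 5 minutes
done
echo "Used space settled at $used bytes; starting the next node."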

Thx Alexey… things look normal again this morning after another recycling of the node… guess it just took time:

[dashboard screenshot]

@Toyoo I'll give du a try if this happens again… everything looks normal now and I don't want to do anything that could upset it.