Please stop the node with `docker stop -t 300 storagenode`, remove the container with `docker rm storagenode`, then run it again. Please do not exec into the dashboard; just look at the logs.
SN host is Ubuntu.
SMB host is FreeBSD (FreeNAS, which is finally supported thankfully).
Incompatibility? Again? I had issues with NFS, and those were all over the Internet regarding SQLite. But for SMB I have not found such issues. iSCSI is a gigantic hassle and inflexible; I would avoid it if possible.
Based on previous experience it doesn’t look like incompatibility. When I had NFS issues there was nothing in the log - nothing about database locked, just simply uploads piling up and timing out.
I’ve had “database locked” messages each time I started the container - the node did some heavy activity on the storage for 2-3 hours and allowed downloads, but uploads were canceled. Then the database got unlocked, the activity stopped, and uploads worked normally.
That description fits any network-connected drive, because of the high latency.
However, high-level protocols such as SMB and NFS each have their own locking methods and are not suitable for database use. They may work, but there is no guarantee.
It seems you are unlucky and have now hit a different problem with it.
Try connecting this storage directly or via iSCSI to finish the migration. It may work via SMB afterwards, but it is better to avoid such setups; the latency is likely killing your ability to compete with other nodes.
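The “database is locked” behavior itself is easy to reproduce outside the node. A minimal sketch with Python’s `sqlite3` (throwaway file path; the node of course uses its own database files): one connection holds an exclusive lock, like a long-running migration, and a second writer fails immediately.

```python
import os
import sqlite3
import tempfile

# Throwaway database file just for the demonstration.
path = os.path.join(tempfile.mkdtemp(), "demo.db")

# Autocommit mode so we can manage the transaction explicitly.
writer = sqlite3.connect(path, isolation_level=None)
writer.execute("CREATE TABLE t (x INTEGER)")
writer.execute("BEGIN EXCLUSIVE")  # hold the lock, like a long migration would

# timeout=0: fail fast instead of waiting for the lock to clear.
reader = sqlite3.connect(path, timeout=0)
try:
    reader.execute("INSERT INTO t VALUES (1)")
    error = None
except sqlite3.OperationalError as exc:
    error = str(exc)  # -> "database is locked"
print(error)
```

On a local filesystem SQLite resolves this through OS-level file locks; over SMB or NFS that locking is exactly the part that is unreliable.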
There is basically no latency as a result of the network share; nearly all latency is drive seek time, because I’m using hard drives - and almost everyone has these. But if there were a latency issue, iSCSI would have it as well. A node like this failing because of a too-slow hard drive would be unheard of…
It cannot be connected directly. I can try to move it to iSCSI but I have a feeling it will be a gigantic waste of time.
Can you give me any checks to do regarding the database before I do anything as lengthy as trying to migrate to iSCSI?
Unfortunately, the only check is to start the storagenode.
In theory the latency should not be there, but in practice it is.
iSCSI works much better than NFS or SMB; you will even notice a significant difference in the acceptance ratio, as other users who switched to iSCSI have.
After an automatic update, one node ended up with this error.
After a while, the second node was updated and now it has the same error.
Prior to the upgrade, everything worked perfectly.
What should I do ?
Same error here.
Running on Linux (Latest Ubuntu)
2019-10-04T19:21:52.473Z FATAL Unrecoverable error {"error": "Error creating tables for master database on storagenode: migrate: storage node database error: migrate tables:: database is locked\n\tstorj.io/storj/internal/dbutil/sqliteutil.backupDBs:50\n\tstorj.io/storj/internal/dbutil/sqliteutil.MigrateTablesToDatabase:27\n\tstorj.io/storj/storagenode/storagenodedb.(*DB).migrateToDB:361\n\tstorj.io/storj/storagenode/storagenodedb.(*DB).Migration.func4:805\n\tstorj.io/storj/internal/migrate.Func.Run:219\n\tstorj.io/storj/internal/migrate.(*Migration).Run:132\n\tstorj.io/storj/storagenode/storagenodedb.(*DB).CreateTables:255\n\tmain.cmdRun:178\n\tstorj.io/storj/pkg/process.cleanup.func1.2:275\n\tstorj.io/storj/pkg/process.cleanup.func1:293\n\tgithub.com/spf13/cobra.(*Command).execute:762\n\tgithub.com/spf13/cobra.(*Command).ExecuteC:852\n\tgithub.com/spf13/cobra.(*Command).Execute:800\n\tstorj.io/storj/pkg/process.Exec:74\n\tmain.main:349\n\truntime.main:203", "errorVerbose": "Error creating tables for master database on storagenode: migrate: storage node database error: migrate tables:: database is locked\n\tstorj.io/storj/internal/dbutil/sqliteutil.backupDBs:50\n\tstorj.io/storj/internal/dbutil/sqliteutil.MigrateTablesToDatabase:27\n\tstorj.io/storj/storagenode/storagenodedb.(*DB).migrateToDB:361\n\tstorj.io/storj/storagenode/storagenodedb.(*DB).Migration.func4:805\n\tstorj.io/storj/internal/migrate.Func.Run:219\n\tstorj.io/storj/internal/migrate.(*Migration).Run:132\n\tstorj.io/storj/storagenode/storagenodedb.(*DB).CreateTables:255\n\tmain.cmdRun:178\n\tstorj.io/storj/pkg/process.cleanup.func1.2:275\n\tstorj.io/storj/pkg/process.cleanup.func1:293\n\tgithub.com/spf13/cobra.(*Command).execute:762\n\tgithub.com/spf13/cobra.(*Command).ExecuteC:852\n\tgithub.com/spf13/cobra.(*Command).Execute:800\n\tstorj.io/storj/pkg/process.Exec:74\n\tmain.main:349\n\truntime.main:203\n\tmain.cmdRun:180\n\tstorj.io/storj/pkg/process.cleanup.func1.2:275\n\tstorj.io/storj/pkg/process.cleanup.func1:293\n\tgithub.com/spf13/cobra.(*Command).execute:762\n\tgithub.com/spf13/cobra.(*Command).ExecuteC:852\n\tgithub.com/spf13/cobra.(*Command).Execute:800\n\tstorj.io/storj/pkg/process.Exec:74\n\tmain.main:349\n\truntime.main:203"}
I updated another node without error.
I did this to troubleshoot:
1. Remove the container and create it again (no result)
2. Reboot the PC, restart the container (no result)
I can try GO TO 1… but I’m not sure that will help.
EDIT 2:
I did that. It is still running:
2019-10-04T19:26:00.591Z INFO Configuration loaded from: /app/config/config.yaml
2019-10-04T19:26:00.617Z INFO Operator email: mymail@mydomain.com
2019-10-04T19:26:00.617Z INFO operator wallet: 0x6986yodaghla9698608dad087d0F
2019-10-04T19:26:01.205Z INFO version running on version v0.22.1
2019-10-04T19:26:01.286Z INFO db.migration.23 Split into multiple sqlite databases
EDIT 3:
THE ERROR IS BACK.
So… this node is toast. Too bad.
I’m getting tired of updates that break things. Just when my nodes are running and stable, an update throws sand in the machine. Ugh.
EDIT 4
It retries and throws the same error every 12-15 seconds.
EDIT 5
The storage is on a NAS. So I did:
1. Update the NAS
2. Reboot the NAS
3. Update Linux (latest kernel)
4. Reboot the server
5. Recreate the storagenode container
RESULT:
2019-10-04T19:50:01.295Z FATAL Unrecoverable error {"error": "Error creating tables for master database on storagenode: migrate: storage node database error: migrate tables:: database is locked\n\tstorj.io/storj/internal/dbutil/sqliteutil.backupDBs:50\n\tstorj.io/storj/internal/dbutil/sqliteutil.MigrateTablesToDatabase:27\n\tstorj.io/storj/storagenode/storagenodedb.(*DB).migrateToDB:361\n\tstorj.io/storj/storagenode/storagenodedb.(*DB).Migration.func4:805\n\tstorj.io/storj/internal/migrate.Func.Run:219\n\tstorj.io/storj/internal/migrate.(*Migration).Run:132\n\tstorj.io/storj/storagenode/storagenodedb.(*DB).CreateTables:255\n\tmain.cmdRun:178\n\tstorj.io/storj/pkg/process.cleanup.func1.2:275\n\tstorj.io/storj/pkg/process.cleanup.func1:293\n\tgithub.com/spf13/cobra.(*Command).execute:762\n\tgithub.com/spf13/cobra.(*Command).ExecuteC:852\n\tgithub.com/spf13/cobra.(*Command).Execute:800\n\tstorj.io/storj/pkg/process.Exec:74\n\tmain.main:349\n\truntime.main:203", "errorVerbose": "Error creating tables for master database on storagenode: migrate: storage node database error: migrate tables:: database is locked\n\tstorj.io/storj/internal/dbutil/sqliteutil.backupDBs:50\n\tstorj.io/storj/internal/dbutil/sqliteutil.MigrateTablesToDatabase:27\n\tstorj.io/storj/storagenode/storagenodedb.(*DB).migrateToDB:361\n\tstorj.io/storj/storagenode/storagenodedb.(*DB).Migration.func4:805\n\tstorj.io/storj/internal/migrate.Func.Run:219\n\tstorj.io/storj/internal/migrate.(*Migration).Run:132\n\tstorj.io/storj/storagenode/storagenodedb.(*DB).CreateTables:255\n\tmain.cmdRun:178\n\tstorj.io/storj/pkg/process.cleanup.func1.2:275\n\tstorj.io/storj/pkg/process.cleanup.func1:293\n\tgithub.com/spf13/cobra.(*Command).execute:762\n\tgithub.com/spf13/cobra.(*Command).ExecuteC:852\n\tgithub.com/spf13/cobra.(*Command).Execute:800\n\tstorj.io/storj/pkg/process.Exec:74\n\tmain.main:349\n\truntime.main:203\n\tmain.cmdRun:180\n\tstorj.io/storj/pkg/process.cleanup.func1.2:275\n\tstorj.io/storj/pkg/process.cleanup.func1:293\n\tgithub.com/spf13/cobra.(*Command).execute:762\n\tgithub.com/spf13/cobra.(*Command).ExecuteC:852\n\tgithub.com/spf13/cobra.(*Command).Execute:800\n\tstorj.io/storj/pkg/process.Exec:74\n\tmain.main:349\n\truntime.main:203"}
EDIT 6
See also this topic:
EDIT 7
I never had problems like this before on this machine; the latest update seems to have caused it.
Network storage does not work anymore, neither SMB nor NFS.
SMB could work in rare cases.
You should use iSCSI or a local connection (PCI, SATA, USB).
Wow, a tremendous change! Maybe more than one node will be left out. I will see how I change my configuration. You could give a more descriptive error message… thanks for the information.