FATAL Unrecoverable error {"error": "Error creating tables for master database on storagenode: migrate: storage node database error: migrate tables:: database is locked

There should be a native binary/package for supported NAS.
The storagenode is intended to run directly on your device

I have the same issue. I've been using a NetApp over NFS since the beginning, even though it's not supported, since this is the only way to be sure of having a 100% safe storage system, and now it's dead. I'm not sure I understand why you are blocking network shares, since all serious storage systems use network storage…

It's just that databases do not like working on network-attached storage. It seems the locking method is not supported over NFS and SMB.
iSCSI does not have such issues.
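To make the distinction concrete: SQLite relies on byte-range file locks, which many NFS/SMB setups implement incompletely, while iSCSI exposes a raw block device that the local filesystem (and its locking) sits on top of. A rough sanity check is to try taking a lock on the suspect mount; this sketch uses `flock` as a proxy (SQLite actually uses `fcntl()` locks, so a pass here is not a guarantee, but a failure is a strong hint). `MOUNT` defaults to a temp dir; point it at your NFS/SMB mount to test the real thing.

```shell
# Rough check for file locking support on a mount point.
# flock is only a proxy for SQLite's fcntl() byte-range locks,
# but a failure here strongly suggests the mount won't work.
MOUNT="${MOUNT:-$(mktemp -d)}"   # set MOUNT to your NFS/SMB mount to test it
if flock "$MOUNT/.locktest" -c 'true'; then
    echo "locking OK on $MOUNT"
else
    echo "locking FAILED on $MOUNT"
fi
rm -f "$MOUNT/.locktest"
```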

I could understand that for a high-traffic local DB, but we are talking about storage over the internet here.

I don't mind having the DB on local storage and the data on network storage, if that is the way to go. Is there a way to split the two across storage mount points?
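One way to get that split without any storagenode option is a nested docker bind mount: keep the whole config/storage tree on local disk and bind only the blobs directory from the NFS mount over it. This is a sketch only; the paths, port, and image tag are examples, and the identity and operator flags the real command needs are omitted.

```shell
# Sketch: SQLite databases on local disk, only the blobs directory on NFS.
# The second -v shadows the blobs subdirectory inside the local storage dir.
# Paths, port, and tag are examples; identity/operator flags omitted.
docker run -d --name storagenode \
    -p 28967:28967 \
    -v /local/storagenode:/app/config \
    -v /mnt/nfs/storagenode/blobs:/app/config/storage/blobs \
    storjlabs/storagenode:alpha
```

Note that the databases would still be on local disk here, which is exactly the part that chokes on network shares.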

I had this issue show up tonight when I changed my node from alpha to beta and rebuilt it.
I was able to resolve by changing back to alpha and rebuilding.
Is it possible that there is still a difference between alpha and beta even though they are supposed to be the same?
I am on 22.1 alpha channel.

I made an iSCSI target on the Synology NAS and mounted it on a different mount point.

See https://www.synology.com/nl-nl/knowledgebase/DSM/tutorial/Virtualization/How_to_set_up_and_use_iSCSI_target_on_Linux (Dutch)
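For a Linux initiator, the client side of that tutorial boils down to a few open-iscsi commands, assuming the target is already created on the NAS. The hostname, device name, and mount point below are examples; your new disk may appear under a different name, so check `lsblk` before formatting.

```shell
# Client-side steps with open-iscsi; target assumed already set up on the NAS.
# Hostname, device, and mount point are examples.
sudo iscsiadm -m discovery -t sendtargets -p nas.local   # discover targets
sudo iscsiadm -m node --login                            # attach the LUN
lsblk                                                    # find the new disk, e.g. /dev/sdb
sudo mkfs.ext4 /dev/sdb                                  # format it (once!)
sudo mount /dev/sdb /mnt/storj                           # mount it locally
```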

I copied (actually: moved) all files to it. This took all night. Some files could not be moved or even read (AS ROOT!). There is something wrong with Synology's SMB implementation.

I chmodded the directory to 777; no luck.

I ended up changing permissions to EVERYONE read/write/administer. (In the Synology web interface.) Then it would move the rest of the files.

Restarted storagenode.

2019-10-05T08:34:09.230Z INFO Configuration loaded from: /app/config/config.yaml
2019-10-05T08:34:09.256Z INFO Operator email: mymail@mail.com
2019-10-05T08:34:09.256Z INFO operator wallet: 0x000000000000000000000000000000F
2019-10-05T08:34:09.840Z INFO version running on version v0.22.1
2019-10-05T08:34:09.856Z INFO db.migration Database Version {"version": 25}
2019-10-05T08:34:09.857Z INFO bandwidth Performing bandwidth usage rollups
2019-10-05T08:34:09.860Z INFO Node xyz08760870790870780978 started
2019-10-05T08:34:09.860Z INFO Public server started on [::]:28967
2019-10-05T08:34:09.860Z INFO Private server started on 127.0.0.1:7778
2019-10-05T08:34:09.861Z INFO contact:chore Storagenode contact chore starting up
2019-10-05T08:34:09.892Z INFO piecestore:monitor Remaining Bandwidth {"bytes": 9898422371328}
2019-10-05T08:34:10.003Z INFO version running on version v0.22.1
2019-10-05T08:35:52.522Z ERROR server gRPC unary error response {"error": "rpc error: code = PermissionDenied desc = untrusted peer 1QzDKGHDeyuRxbvZhcwHU3syxTYtU1jHy5duAKuPxja3XC8ttk"}

No other lines.
I’m afraid I got disqualified… (AGAIN… now a different node)

Out of 3 nodes, 2 have been disqualified: one because of the end of SMB support (this one, on Ubuntu), and one because of Windows updates.

EDIT!
It started working after a while! sigh

2019-10-05T08:57:42.120Z INFO piecestore deleted {"Piece ID": "533BLS7KGG3EPABNINWLWGO5HLSASM4W5WE2YHK5DNJAMOXIX76Q"}
2019-10-05T08:57:43.323Z INFO piecestore downloaded {"Piece ID": "D4OGM5E2MKE2FL2FN4KKHY5Q4ATS4H7P4E3RHHW2IYQ7PARTKC2Q", "SatelliteID": "118UWpMCHzs6CvSgWd9BfFVjw5K9pZbJjkfZJexMtSkmKxvvAW", "Action": "GET"}

There should not be a difference. What do you mean by “rebuild”?

I guess he means removing the container and entering the docker command again (the long one containing the Eth address etc.)

Yes, exactly. I have a .sh file that runs the docker command. I changed from alpha to beta and this issue manifested. I changed back to alpha and that fixed the problem.

Could you please try to switch it back to the beta?

There is a problem during database file creation on SMB. Once the files are created, it works over SMB.
I’ve tried changing docker tags but to no avail.

@Alexey Any expected release dates for the native FreeNAS package?

No. But you can follow the roadmap: https://storjlabs.aha.io/published/01ee405b4bd8d14208c5256d70d73a38?page=4

Just tried, same result. Beta crashes:
2019-10-06T22:21:21.760Z FATAL Unrecoverable error {"error": "bandwidthdb error: no such table: bandwidth_usage_rollups", "errorVerbose": "bandwidthdb error: no such table: bandwidth_usage_rollups\n\tstorj.io/storj/storagenode/storagenodedb.
Alpha runs fine.

I fixed my issue by stopping the node and removing the docker container. I copied the storage folder from the NFS mount to a local path (I used /storj), except for the blobs directory, since it's too big. I then ran the docker command with the new storage path; this updated the DB and created the new files. After that I stopped and removed the container, copied all files back to my NFS mount, and recreated the container with the NFS mount like I was running before. Everything is working now.
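The copy steps of that workaround can be sketched roughly as follows. Temp dirs stand in for the real NFS mount and local scratch dir, the file names are invented for the demo, and the docker runs in between are left as comments.

```shell
# Demo of the copy steps; temp dirs stand in for the real paths.
SRC=$(mktemp -d)    # stands in for the NFS-mounted storage folder
TMP=$(mktemp -d)    # stands in for a local path such as /storj
mkdir -p "$SRC/blobs"
touch "$SRC/bandwidth.db" "$SRC/blobs/piece1"   # invented demo files

# 1) copy everything except the huge blobs directory to local disk
for f in "$SRC"/*; do
    [ "$(basename "$f")" = "blobs" ] || cp -a "$f" "$TMP/"
done
# 2) (run the node once against $TMP here, so it migrates the databases)

# 3) copy the migrated files back to the NFS mount
for f in "$TMP"/*; do
    cp -a "$f" "$SRC/"
done
# 4) (recreate the container against $SRC, as before)
```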

I tried to keep the DB locally in /storj/storage/ and use a symlink inside it for the blobs directory, but then the node complained that I didn't have enough storage space on /.

That doesn’t look like it worked. From the logs:
2019-10-07T19:04:57.602Z INFO Configuration loaded from: /app/config/config.yaml
2019-10-07T19:04:57.615Z INFO Operator email: email@gmail.com
2019-10-07T19:04:57.615Z INFO operator wallet: [MYWALLET]
2019-10-07T19:04:57.801Z INFO version running on version v0.22.1
2019-10-07T19:04:57.821Z INFO db.migration.23 Split into multiple sqlite databases
2019-10-07T19:05:07.898Z FATAL Unrecoverable error {"error": "Error creating tables for master database on storagenode: migrate: storage node database error: migrate tables:: database is locked\n\tstorj.io/storj/internal/dbutil/sqliteutil.backupDBs:50\n\tstorj.io/storj/internal/dbutil/sqliteutil.MigrateTablesToDatabase:27\n\tstorj.io/storj/storagenode/storagenodedb.(*DB).migrateToDB:361\n\tstorj.io/storj/storagenode/storagenodedb.(*DB).Migration.func4:805\n\tstorj.io/storj/internal/migrate.Func.Run:219\n\tstorj.io/storj/internal/migrate.(*Migration).Run:132\n\tstorj.io/storj/storagenode/storagenodedb.(*DB).CreateTables:255\n\tmain.cmdRun:178\n\tstorj.io/storj/pkg/process.cleanup.func1.2:275\n\tstorj.io/storj/pkg/process.cleanup.func1:293\n\tgithub.com/spf13/cobra.(*Command).execute:762\n\tgithub.com/spf13/cobra.(*Command).ExecuteC:852\n\tgithub.com/spf13/cobra.(*Command).Execute:800\n\tstorj.io/storj/pkg/process.Exec:74\n\tmain.main:349\n\truntime.main:203", "errorVerbose": "Error creating tables for master database on storagenode: migrate: storage node database error: migrate tables:: database is locked\n\tstorj.io/storj/internal/dbutil/sqliteutil.backupDBs:50\n\tstorj.io/storj/internal/dbutil/sqliteutil.MigrateTablesToDatabase:27\n\tstorj.io/storj/storagenode/storagenodedb.(*DB).migrateToDB:361\n\tstorj.io/storj/storagenode/storagenodedb.(*DB).Migration.func4:805\n\tstorj.io/storj/internal/migrate.Func.Run:219\n\tstorj.io/storj/internal/migrate.(*Migration).Run:132\n\tstorj.io/storj/storagenode/storagenodedb.(*DB).CreateTables:255\n\tmain.cmdRun:178\n\tstorj.io/storj/pkg/process.cleanup.func1.2:275\n\tstorj.io/storj/pkg/process.cleanup.func1:293\n\tgithub.com/spf13/cobra.(*Command).execute:762\n\tgithub.com/spf13/cobra.(*Command).ExecuteC:852\n\tgithub.com/spf13/cobra.(*Command).Execute:800\n\tstorj.io/storj/pkg/process.Exec:74\n\tmain.main:349\n\truntime.main:203\n\tmain.cmdRun:180\n\tstorj.io/storj/pkg/process.cleanup.func1.2:275\n\tstorj.io/storj/pkg/process.cleanup.func1:293\n\tgithub.com/spf13/cobra.(*Command).execute:762\n\tgithub.com/spf13/cobra.(*Command).ExecuteC:852\n\tgithub.com/spf13/cobra.(*Command).Execute:800\n\tstorj.io/storj/pkg/process.Exec:74\n\tmain.main:349\n\truntime.main:203"}

Unfortunately, network-attached storage will not work, neither SMB nor NFS. Such setups are not supported.
I moved your reply to the current thread with the similar problem.

Please consider using virtual disks connected to your Linux VM for the data, or use iSCSI.

3 posts were split to a new topic: Piece space used error: no such table: piece_space_used

I am indeed following that roadmap, but the last (in development/scheduled) version on there has consistently been 0–2 versions behind the actual released version.

Wow, that's tragic. It even works with a USB-connected drive, but NFS is an issue. Crazy.

Yeah, there's some technical issue with creating the databases, at least over SMB; NFS seems to have even more issues, in my experience. It works in a steady state, but during updates some things do not work.

I assume that, if it's not Storj's fault, there is a bug in the network storage server/client or in the protocol itself.