Got it! I would then separate two things: freeing up the 16TB disk and migrating the node.
Free up 16TB Disk:
Shrink your 16TB NTFS volume (the node keeps running).
Shut down the node and clone the entire partition to the 12TB disk using disk-cloning software such as CloneZilla (a command-line sketch follows this list). This runs at maximum sequential speed, 150-200 MB/s, unlike a file-by-file copy, so 10 TB worth of node data will take about 16 hours. The node is obviously offline during this time.
Start the node from the 12TB NTFS disk.
Now the node is running again, and you have a spare 16TB disk.
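If you prefer a command-line clone to the CloneZilla GUI, a minimal sketch using ntfsclone (from the ntfs-3g package) could look like this; the device names are placeholders, and the target partition must be at least as large as the shrunken source filesystem:

# assumption: /dev/sdX1 is the shrunken 16TB source partition, /dev/sdY1 the 12TB target
ntfsclone --overwrite /dev/sdY1 /dev/sdX1
# mount read-only to verify the clone before starting the node from it
mount -o ro /dev/sdY1 /mnt/check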
Migration to TrueNAS:
Install the disk in TrueNAS, create a ZFS pool and dataset, and enable rsync.
Start rsync passes from your node machine to TrueNAS. It's better to use rsync, but if you can't, robocopy over NFS will do too.
Keep repeating rsync passes until the time each pass takes stops decreasing.
Shut down the node.
Do one last full sync pass (with the --delete flag); this should take under 10 minutes. Example commands follow this list.
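A minimal sketch of those passes, assuming the TrueNAS dataset is exported and mounted at /mnt/truenas/storagenode and the node data lives in /mnt/node/storagenode (both paths are placeholders):

# repeat while the node keeps running, until the pass time stops shrinking
rsync -a --info=progress2 /mnt/node/storagenode/ /mnt/truenas/storagenode/
# final pass with the node stopped; --delete removes files the node deleted since the previous pass
rsync -a --delete --info=progress2 /mnt/node/storagenode/ /mnt/truenas/storagenode/

If you have to use robocopy from Windows instead, /MIR over the mounted share is the rough equivalent of the final rsync --delete pass.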
Good point.
I reduced the allocation to 12TB, and although there is only about 8TB of data on the disk, the Storj dashboard reports >2TB of overuse (the dashboard also reports several TB of trash, though I don't see that on the drive, which is odd).
So ingress is effectively zero at the moment.
There is no need to speed up anything; the node stays online throughout the process. "Speeding up the copy" is not a good enough reason to migrate to a misguided, high-risk solution. Hashstore is not appropriate for home users. Let the filesystem do its job: metadata access can be offloaded, but messing with log files cannot. Hashstore improves one corner case at the expense of everything else.
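For context on "metadata access can be offloaded": on ZFS this is usually done with a special vdev or by restricting a cache device to metadata. A hedged sketch, assuming a pool named tank, a dataset tank/storagenode, and spare SSDs (all names are placeholders):

# option 1: mirrored special vdev, so newly written filesystem metadata lands on SSDs
zpool add tank special mirror /dev/sdx /dev/sdy
# option 2: L2ARC device dedicated to metadata for the node dataset
zpool add tank cache /dev/sdz
zfs set secondarycache=metadata tank/storagenode

Note that a special vdev only holds metadata written after it is added, so it pays off most when added before the migration copy.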
And by the way, migration to hashstore will take more I/O than copying the data off the device, so this is not only bad advice but also counterproductive.
I messed something up, though I do not know what or how.
After cloning the disk (I gave up on the file copy with the node running; it was too slow), I get this error when I try to start my node.
Is there anything I can do to get it working again?
2025-07-18T06:47:40+02:00
ERROR
failure during run
{error: Error during preflight check for storagenode databases: preflight: database \heldamount: failed create test_table: database disk image is malformed\n\tstorj.io/storj/storagenode/storagenodedb.(*DB).preflight:487\n\tstorj.io/storj/storagenode/storagenodedb.(*DB).Preflight:421\n\tmain.cmdRun:115\n\tmain.newRunCmd.func1:33\n\tstorj.io/common/process.cleanup.func1.2:388\n\tstorj.io/common/process.cleanup.func1:406\n\tgithub.com/spf13/cobra.(*Command).execute:985\n\tgithub.com/spf13/cobra.(*Command).ExecuteC:1117\n\tgithub.com/spf13/cobra.(*Command).Execute:1041\n\tstorj.io/common/process.ExecWithCustomOptions:115\n\tstorj.io/common/process.ExecWithCustomConfigAndLogger:80\n\tstorj.io/common/process.ExecWithCustomConfig:75\n\tstorj.io/common/process.Exec:65\n\tmain.(*service).Execute.func1:107\n\tgolang.org/x/sync/errgroup.(*Group).add.func1:130, errorVerbose: Error during preflight check for storagenode databases: preflight: database \heldamount: failed create test_table: database disk image is malformed\n\tstorj.io/storj/storagenode/storagenodedb.(*DB).preflight:487\n\tstorj.io/storj/storagenode/storagenodedb.(*DB).Preflight:421\n\tmain.cmdRun:115\n\tmain.newRunCmd.func1:33\n\tstorj.io/common/process.cleanup.func1.2:388\n\tstorj.io/common/process.cleanup.func1:406\n\tgithub.com/spf13/cobra.(*Command).execute:985\n\tgithub.com/spf13/cobra.(*Command).ExecuteC:1117\n\tgithub.com/spf13/cobra.(*Command).Execute:1041\n\tstorj.io/common/process.ExecWithCustomOptions:115\n\tstorj.io/common/process.ExecWithCustomConfigAndLogger:80\n\tstorj.io/common/process.ExecWithCustomConfig:75\n\tstorj.io/common/process.Exec:65\n\tmain.(*service).Execute.func1:107\n\tgolang.org/x/sync/errgroup.(*Group).add.func1:130\n\tmain.cmdRun:117\n\tmain.newRunCmd.func1:33\n\tstorj.io/common/process.cleanup.func1.2:388\n\tstorj.io/common/process.cleanup.func1:406\n\tgithub.com/spf13/cobra.(*Command).execute:985\n\tgithub.com/spf13/cobra.(*Command).ExecuteC:1117\n\tgithub.com/spf13/cobra.(*Command).Execute:1041\n\tstorj.io/common/process.ExecWithCustomOptions:115\n\tstorj.io/common/process.ExecWithCustomConfigAndLogger:80\n\tstorj.io/common/process.ExecWithCustomConfig:75\n\tstorj.io/common/process.Exec:65\n\tmain.(*service).Execute.func1:107\n\tgolang.org/x/sync/errgroup.(*Group).add.func1:130}
2025-07-18T06:47:40+02:00
FATAL
Unrecoverable error
{error: Error during preflight check for storagenode databases: preflight: database \heldamount: failed create test_table: database disk image is malformed\n\tstorj.io/storj/storagenode/storagenodedb.(*DB).preflight:487\n\tstorj.io/storj/storagenode/storagenodedb.(*DB).Preflight:421\n\tmain.cmdRun:115\n\tmain.newRunCmd.func1:33\n\tstorj.io/common/process.cleanup.func1.2:388\n\tstorj.io/common/process.cleanup.func1:406\n\tgithub.com/spf13/cobra.(*Command).execute:985\n\tgithub.com/spf13/cobra.(*Command).ExecuteC:1117\n\tgithub.com/spf13/cobra.(*Command).Execute:1041\n\tstorj.io/common/process.ExecWithCustomOptions:115\n\tstorj.io/common/process.ExecWithCustomConfigAndLogger:80\n\tstorj.io/common/process.ExecWithCustomConfig:75\n\tstorj.io/common/process.Exec:65\n\tmain.(*service).Execute.func1:107\n\tgolang.org/x/sync/errgroup.(*Group).add.func1:130, errorVerbose: Error during preflight check for storagenode databases: preflight: database \heldamount: failed create test_table: database disk image is malformed\n\tstorj.io/storj/storagenode/storagenodedb.(*DB).preflight:487\n\tstorj.io/storj/storagenode/storagenodedb.(*DB).Preflight:421\n\tmain.cmdRun:115\n\tmain.newRunCmd.func1:33\n\tstorj.io/common/process.cleanup.func1.2:388\n\tstorj.io/common/process.cleanup.func1:406\n\tgithub.com/spf13/cobra.(*Command).execute:985\n\tgithub.com/spf13/cobra.(*Command).ExecuteC:1117\n\tgithub.com/spf13/cobra.(*Command).Execute:1041\n\tstorj.io/common/process.ExecWithCustomOptions:115\n\tstorj.io/common/process.ExecWithCustomConfigAndLogger:80\n\tstorj.io/common/process.ExecWithCustomConfig:75\n\tstorj.io/common/process.Exec:65\n\tmain.(*service).Execute.func1:107\n\tgolang.org/x/sync/errgroup.(*Group).add.func1:130\n\tmain.cmdRun:117\n\tmain.newRunCmd.func1:33\n\tstorj.io/common/process.cleanup.func1.2:388\n\tstorj.io/common/process.cleanup.func1:406\n\tgithub.com/spf13/cobra.(*Command).execute:985\n\tgithub.com/spf13/cobra.(*Command).ExecuteC:1117\n\tgithub.com/spf13/cobra.(*Command).Execute:1041\n\tstorj.io/common/process.ExecWithCustomOptions:115\n\tstorj.io/common/process.ExecWithCustomConfigAndLogger:80\n\tstorj.io/common/process.ExecWithCustomConfig:75\n\tstorj.io/common/process.Exec:65\n\tmain.(*service).Execute.func1:107\n\tgolang.org/x/sync/errgroup.(*Group).add.func1:130}
However, if you run the node on an NTFS disk under Linux, it will always corrupt something; NTFS is not fully supported under Linux, so you need to migrate to ZFS anyway. And the only way to do that is a file copy with rsync/rclone.
I started the cloned NTFS disk as a node on my Windows machine. Nothing was changed; I just cloned the disk and started the node. But for some unknown reason, both the original disk and the clone give me this error message.
Please do not run them in parallel; the node will be disqualified for lost data pretty fast. And you now need to copy the missing data.
The DB can get corrupted simply because the shutdown wasn't clean, or because the filesystem was corrupted. You need to check the filesystem and fix all errors, then check all databases and fix the corrupted ones or recreate them using this guide: How to fix database: file is not a database error - Storj Docs
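A hedged sketch of that check-and-repair flow for a single database, run with the node stopped (heldamount.db is taken from the error above; the dump/reload approach follows the linked guide, which has the full details):

# 1. check whether the database is intact
sqlite3 heldamount.db "PRAGMA integrity_check;"
# 2. if it reports errors, dump whatever is readable and rebuild the file
sqlite3 heldamount.db ".dump" > dump.sql
mv heldamount.db heldamount.db.bad
sqlite3 heldamount.db < dump.sql

If even the dump fails, the guide describes recreating the affected database with an empty schema instead.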
I did not run them in parallel. Both disks were attached to the same Windows machine, and I simply edited the config to point at the specific disk.
Trying to repair the .db files was unsuccessful, so I ended up deleting them, and then I could start the node.
This is exactly what I mean. If you receive some data while using disk1 and then switch to disk2, disk2 will be missing the data uploaded to disk1. The same happens when you switch back to disk1. Now both disks are missing some data, and either of them will fail audits. Since it's the same identity, it could be disqualified, depending on how much data is missing. You now need to copy the missing data in blobs from disk1 to disk2 and vice versa to avoid disqualification, and do not perform any more swaps after you settle on one of them.
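A hedged example of that "copy only the missing pieces" step on Windows, assuming the blobs folders live at D:\storagenode\blobs and E:\storagenode\blobs (paths are placeholders). The exclusion flags make robocopy copy only files that do not exist on the destination, so nothing already there gets overwritten:

robocopy D:\storagenode\blobs E:\storagenode\blobs /E /XC /XN /XO
robocopy E:\storagenode\blobs D:\storagenode\blobs /E /XC /XN /XO

With rsync the equivalent switch would be --ignore-existing.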
By the way, with a hashstore backend it is not so easy to copy the missing data after a double swap. That would be a one-way ticket.
I've got to admit… I tried robocopy of an 8TB disk into TrueNAS SCALE with 4 x 16TB and a read/write cache too, and it took me almost 7 days to finish the first copy pass… I don't think that's possible… why did yours take only 24-26 hours? Weird.
A regular piecestore node will take significantly longer to copy over than a hashstore node, because the latter has far fewer, much larger files, resulting in less filesystem overhead.
With that being said, as @arrogantrabbit has tried to say many times, copy time does not really matter: it can be done while the node is online.