I am looking to move my node from Windows to a pi. I am able to move the storage drive physically from the Win to Linux device. If I install storj in docker, and just point it at the moved, mounted ntfs drive (which also contains my certificate dir), will it be as simple as that?
I've seen some issues with NTFS-formatted drives, so beware when migrating to the Pi. You may have issues later.
You can just use docker to connect to the new drive. It's pretty simple when you're just moving the entire hard drive to a new system.
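For the record, getting the moved drive visible on the Pi could look something like this (a minimal sketch only; the device name /dev/sda1 and mount point /mnt/storj are assumptions you should check against `lsblk` output):

```shell
# ntfs-3g provides read/write NTFS support on Linux (Debian/Raspberry Pi OS)
sudo apt install ntfs-3g

# Mount the moved NTFS drive (device name /dev/sda1 is an assumption;
# verify the actual device with `lsblk` first)
sudo mkdir -p /mnt/storj
sudo mount -t ntfs-3g /dev/sda1 /mnt/storj
```

The docker run command then bind-mounts paths under /mnt/storj into the container, which is what "pointing docker at the drive" amounts to.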
So it’s a USB drive, not on a network. The physical drive would be NTFS either way. Unless I am missing something with the docker connection?
So option B would be to mirror the drive onto an ext4 formatted drive?
NTFS doesn’t do well on Linux because it’s not native, so if you can, I’d move everything to an ext4-formatted drive. Running NTFS on Linux may lead to disqualification (DQ) pretty quickly; I’ve learned that from the experience of other people I know who ran NTFS drives on a Pi.
Yes, it’s simple like that, but there are a few gotchas:
- if you are migrating from the Windows GUI, move the data to the “storage” subfolder;
- NTFS is not native to Linux; it works several times slower and can develop corruption during operation, especially if the drive was used on Windows 10. Microsoft enables special features such as compression and deduplication by default, which have limited support under Linux without special tuning; as a result, you may lose data because of the incompatibility.
So far I have had only negative experiences working with NTFS on Linux: Moving from Windows to Ubuntu and back
Thanks. What about mirroring the content to another drive formatted for Linux, via robocopy on Windows? Of course there would be a final delta copy at the end, with storj stopped, before moving. Any potential issues there?
Here is what I am thinking. For context, I am currently storing about 0.8 TB of content.
- robocopy the current drive to a 1 TB exFAT drive
- move the current NTFS drive to the Pi
- format the NTFS drive to ext4
- copy from the 1 TB exFAT drive to ext4 on the Pi
- mount the ext4 drive
- install storj in docker
- run storj, pointing the cert and storage folders accordingly
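The Pi side of the plan above might look roughly like this (a sketch only; the device name /dev/sda1, the mount points, and the exFAT staging path are all assumptions):

```shell
# WARNING: mkfs destroys everything on the target device. Double-check the
# device name (assumed /dev/sda1 here) with `lsblk` before running this.
sudo mkfs.ext4 /dev/sda1

# Mount the freshly formatted drive (mount point /mnt/storj is an assumption)
sudo mkdir -p /mnt/storj
sudo mount /dev/sda1 /mnt/storj

# Copy the staged data back from the exFAT drive (assumed mounted at /mnt/exfat);
# -a preserves timestamps, --progress shows the 0.8 TB copy advancing
sudo rsync -a --progress /mnt/exfat/ /mnt/storj/
```

You would also want an /etc/fstab entry for the ext4 drive so the mount survives a reboot, otherwise docker will start against an empty mount point.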
Yeah, that’s exactly what can work.
Will the config.yaml file work if I update the paths there? Then it would just be running the storage node and letting the config come from the file?
With docker you don’t really use the config paths; you set the paths in docker when you start the node.
How can I do Zksync then? I am very green to docker I should mention.
Either in the config.yaml (Configuring zkSync Payments - Node Operator) or as a command-line option --operator.wallet-features=zksync after the name of the image:
docker run ... storjlabs/storagenode:latest --operator.wallet-features=zksync
Ok this is exactly where I am confused then. Do I need to specify any options as part of the docker run command if they are also in the config?
You can still edit the config file to add your wallet; you don’t need that in your docker start command. Just add the zksync wallet feature to the config.
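For reference, the relevant lines in config.yaml might look something like this (a sketch based on the zkSync docs; the wallet address is a placeholder, and in the shipped file these lines start out commented, so uncomment and edit them rather than adding duplicates):

```yaml
# operator wallet address (placeholder value - use your own)
operator.wallet: "0xYOURWALLETADDRESS"

# opt this wallet in to zkSync payouts
operator.wallet-features: ["zksync"]
```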
The docker version takes all required parameters from the command line, like ADDRESS, WALLET, STORAGE, and EMAIL, and mounts local folders to the folders inside the container.
All additional options can be specified either after the image name (you can see the available options with docker exec -it storagenode ./storagenode setup --help) or in the config.yaml, for example wallet options or the path to logs.
Parameters from the command line take precedence over options from the config.yaml, so the latter is almost empty in the docker version.
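Putting that together, a typical docker run for the node looks roughly like this (all values are placeholders you must replace; the flags follow Storj's documented pattern, so double-check against the current docs before running):

```shell
docker run -d --restart unless-stopped --stop-timeout 300 \
  -p 28967:28967/tcp -p 28967:28967/udp \
  -p 127.0.0.1:14002:14002 \
  -e WALLET="0xYOURWALLETADDRESS" \
  -e EMAIL="you@example.com" \
  -e ADDRESS="your.external.address:28967" \
  -e STORAGE="800GB" \
  --mount type=bind,source=/mnt/storj/identity/storagenode,destination=/app/identity \
  --mount type=bind,source=/mnt/storj,destination=/app/config \
  --name storagenode storjlabs/storagenode:latest
```

Additional options such as --operator.wallet-features=zksync go after the image name, as shown earlier in the thread; the bind-mount source paths here assume the drive is mounted at /mnt/storj.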
Alright… is exFAT safe to use for my drive? That way I can stage everything on Windows first and just move it to the Pi without much downtime.
Parameters from the command line take precedence over options from the config.yaml, so the latter is almost empty in the docker version.
You mean the config.yaml is almost empty? This makes sense now
You can use exFAT, but I’m not sure I would recommend it.
No, don’t get confused: it’s not empty, it’s just commented out so it isn’t used. Things in the config can be uncommented and changed if you need to. Otherwise, everything that is important can be in the docker start command.
I would google the most recommended drive format for Linux. That can give you the pros and cons of what is good and what is very bad.
Two downsides of exFAT:
- The default allocation unit is really big: 128K. So if you create a 1-byte file, it takes 128K of disk space, vs 4K for most filesystems. I don’t know if exFAT reads and writes the whole 128K on every I/O, but if it does, I/O performance will be slower for small files than on most filesystems. The average (remote) segment size on us1.storj.io is 933814138306560/153303872 = 6091262.57 bytes, which divided into 29 pieces (I think?) gives around 210043 bytes per piece, maybe plus some small overhead for piece metadata. Since exFAT stores data in clusters of 131072 bytes, you’d need 2 for the average piece, or 262144 bytes, wasting about 52000 bytes on average for each piece. With a filesystem like ext4 that has a 4K allocation unit, you’d only waste about 2950 bytes. On average, a filesystem will waste half an allocation unit per file, so the larger the allocation unit, the more wasted space per file; and the more files you store, the more total space is wasted.
- exFAT is not journaled, so there is a higher probability of data loss in the event you lose power or your system crashes.
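The allocation-unit arithmetic above can be checked in a few lines (the us1 totals and the 29-piece split come from the post; the piece-metadata overhead is ignored here):

```python
import math

# Average remote segment size on us1.storj.io, from the post's totals
avg_segment = 933_814_138_306_560 / 153_303_872   # bytes
avg_piece = avg_segment / 29                      # ~210,043 bytes per piece

def waste(piece_size: float, cluster: int) -> int:
    """Bytes wasted storing one piece with the given allocation unit size."""
    clusters = math.ceil(piece_size / cluster)
    return clusters * cluster - math.ceil(piece_size)

print(waste(avg_piece, 128 * 1024))  # exFAT's default 128K clusters
print(waste(avg_piece, 4 * 1024))    # ext4's typical 4K clusters
```

The roughly 50 KB difference per piece is where the "up to 40% wasted" figure for small pieces comes from: the smaller the file relative to the cluster, the worse the ratio.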
No. It’s very dangerous: it’s not designed to work under constant load, it doesn’t have a journal, and with any corruption it will have heavy losses. It’s designed for removable media like flash drives/cards.
It also uses a big cluster size; you can waste up to 40% of your space in the storagenode case because of the small size of each piece.
I would strictly recommend avoiding exFAT, NTFS, and btrfs (except on Synology; they fixed most of the drawbacks and bugs, but have not pushed their improvements upstream) for storagenode’s data on Linux.
exFAT must not be used for storagenode in any OS.
At the moment I would recommend to use only ext4 in Linux.
You can also use zfs if you want to have RAID (even though it’s not recommended), you have at least 1 GB of free RAM per 1 TB of disk space, you want a more complicated FS, and you are willing to spend more time learning and maintaining it.
If you go with zfs, I would expect faster disk wear in my opinion (as with any RAID) than if the disks were used as separate drives. ECC RAM is highly recommended.
Maybe you could take a look at GoodSync and Gs Richcopy 360; both work on Windows and can back up/copy to Linux, Windows, NAS, clouds, etc.
Both can do delta/bulk replication, can copy NTFS/shared permissions, and have many other features; search for both.