Static mount point question

Hi. Do I need a separate disk for Storj? I have an 8TB HDD mounted as / and of course I don't want to store Storj files directly under /. Can I create a folder on the already statically mounted disk and pass it to docker?

Yes, you set the Storj directory in the docker command that starts the node.
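
For example, something like this (a minimal sketch based on the documented run command; the wallet, email, address, storage size, and the hypothetical /storj/... paths are placeholders you would replace with your own values):

    docker run -d --restart unless-stopped --stop-timeout 300 \
        -p 28967:28967/tcp -p 28967:28967/udp -p 14002:14002 \
        -e WALLET="0x..." -e EMAIL="you@example.com" \
        -e ADDRESS="your.hostname.example:28967" -e STORAGE="7TB" \
        --mount type=bind,source="/storj/identity/storagenode",destination=/app/identity \
        --mount type=bind,source="/storj/storagenode",destination=/app/config \
        --name storagenode storjlabs/storagenode:latest

The second --mount is the one that decides where the node keeps its data on the host, so any folder you choose (including one on your root disk) goes there.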

Thanks, I know this. This specific part of the how-to is not clear to me: "Linux Users: You must static mount via /etc/fstab. Failure to do so will put you in high risk of failing audits and getting disqualified." Do I need to mount a dedicated HDD in fstab, or can I use any folder in the existing filesystem? I have only one drive in the system at the moment.

Using part of your root filesystem is safe.

The danger is after a reboot, when a separate drive or partition fails to mount and the docker process starts writing into the empty mount folder on the root filesystem instead.
If you have a separate drive or partition for Storj, you should also point the node at a subdirectory on it, so that if the drive is somehow not mounted the process will not find the subdirectory and will refuse to start (see the sketch below).
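
A minimal sketch of that setup, assuming a hypothetical drive with UUID 1234-abcd, a mount point of /mnt/storj, and an ext4 filesystem (replace all three with your own; sudo blkid shows the real UUID):

    # /etc/fstab entry so the Storj drive is always mounted at the same place
    UUID=1234-abcd  /mnt/storj  ext4  defaults  0  2

    # create the subdirectory the node will actually use
    sudo mkdir -p /mnt/storj/storagenode

Then pass /mnt/storj/storagenode (not /mnt/storj itself) to docker. If the drive ever fails to mount, that subdirectory won't exist, the bind mount fails, and the node stops instead of quietly filling up the root filesystem.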


You can use your existing disk. The fstab static mount is to ensure the drive's mount point does not change. I prefer to keep my OS on a dedicated disk (a smaller SSD) and dedicate the larger HDDs to Storj nodes (one each).


They are worried that mounts might disconnect and leave the node running without access to the data, which can get the node disqualified in fairly short order…

Any folder, partition, or physical storage device that just doesn't get disconnected will do fine and can be mounted and used for running a storagenode…

generally i don’t find that booting the storagenode without the data folders connected is a problem, because it will not really boot correctly… afterwards i also moved my certificates / signature files or whatever they are called… identity files to the storagenode data folder so that upon boot it’s unable to do any harm… even tho it wasn’t really able to do that from what i’ve seen the couple of times i ran it without mounts.

However, if the node is running and the mount is lost, it will basically just continue in a free fall of failed audits until it is disqualified or stopped.

Not sure if keeping the identity files with the storagenode data will fix that…
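
One way to shorten that window (not something from this thread, just a sketch assuming the data drive is mounted at /mnt/storj and the container is named storagenode) is a periodic check that stops the container if the mount disappears:

    # cron-able check: stop the node if the data drive is no longer mounted
    # (mountpoint is part of util-linux; -q suppresses output)
    mountpoint -q /mnt/storj || docker stop storagenode

Run from root's crontab every minute or so, this turns a silent stream of failed audits into a stopped node you can restart once the drive is back.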