Looking for some advice and possible options.
My current layout is a 6-disk RAID-Z1 vdev/pool in TrueNAS Core with (6) 10 TB disks. That pool is mounted over an iSCSI connection to a Windows Server 2022 VM. The resulting volume gives 30 TB to the Storj node for storage, 3 TB for trash, and the rest is lost to the data tax man.

Judging by my fill rates a year ago when I joined and sampled this project, it was going to take decades to fill, as I was only getting around 200-400 GB a month. That left me satisfied with my setup. The last few months, however, have seen data growing by more than a TB a month. I am now up to around 6 TB and want to address this sooner rather than later. I am realizing now that I have a bad infrastructure in place for easy growth, and my space will run out far earlier than expected. So my options/thoughts are:
1. Upgrade TrueNAS Core to Scale (a prerequisite for either option below) so I can install the Storj app locally on my storage system and cut the storage losses/overhead and the complication of the iSCSI connection.
2. Hook up an external drive, pass it through to the VM, and move all data from the current pool to the external drive. Remove the iSCSI connection, host the data directly from TrueNAS via the locally installed app, then move the data back over.
3. Leave the current setup running. Create a new vdev/pool in TrueNAS and move the data over to it from the current one.
4. Gracefully exit all data and start over.
If there are any better options, I am open to them; likewise if there is a better way to set this up. I figure a local-app setup will allow me to do in-place upgrades of the disks over time to simply expand the storage pool, which doesn't seem like a viable/feasible option with my current setup. Plus I will lose less disk space off the top.
My questions: are any of the above options the best/most doable? If I have to exit and start over, do I have to go through the slow vetting process all over again? Any other gotchas I should be aware of?
Welcome to the forum!
Perhaps this one could be an option, if the storagenode's data is directly available to TrueNAS (usually you create a virtual disk, which you then expose via iSCSI, so the data would not be available directly).
So, I think you would need to move the data out of the iSCSI virtual disk to TrueNAS, then install the app and provide the paths to your identity and data. By the way, it's better to place your identity on the disk with its data; this way it will be simpler to recover or move the node later if needed.
Yes; moreover, you will need to start from scratch: generate a new identity, sign it with a new authorization token, and start with clean storage.
Thank you for the welcome Alexey!
Since the exit plan seems like the worst option, I will focus on trying to relocate all of the data.
Is there a way to tell if the data is directly accessible to TrueNAS (if I need to move a question like that over to the TN forums for more precise info, just let me know)? I normally do SMB shares, but this particular project required iSCSI, so this was my first foray into this type of config. If memory serves me right, I created the vdev in TN and then walked through the iSCSI wizard in TN to set that up. From there I just added that location to my Hyper-V server as a data location for my Storj VM. Everything looks straightforward in TN, and I see where I added the LUN in Hyper-V for that VM, but I have not determined how I made that final connection.
Is there a way to tell if my identity is indeed stored with my data currently?
Perhaps I am better off calling it quits with this node and building a new one from scratch, given the complexity of my setup.
If it's a Windows node, the path to the identity is specified in the "C:\Program Files\Storj\Storage Node\config.yaml" file. Then you can check where it points.
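For reference, the identity entries in config.yaml look something like this (a sketch using the default Windows GUI install layout; `<user>` is a placeholder, your actual paths may differ):

```yaml
# Hypothetical example of the identity entries in config.yaml.
# These are the default Windows GUI paths; check your own file for the real target.
identity.cert-path: C:\Users\<user>\AppData\Roaming\Storj\Identity\storagenode\identity.cert
identity.key-path: C:\Users\<user>\AppData\Roaming\Storj\Identity\storagenode\identity.key
```

If those paths point at the VM's local disk rather than the iSCSI volume, the identity is not stored with the data.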
To check the availability of the data without iSCSI on your TrueNAS, you may try to set up a storagenode and see whether you can access the data or not: Docs Hub | Setting Up a Storj Node. If you can, you may use it as the data location; you also need to specify the location of your identity, otherwise it will try to create a new one.
Just note: since this uses the docker version of storagenode, all data from the data location of the Windows GUI node should be moved into a storage subfolder, but for the storagenode setup you will use the parent folder, as usual. This is because the docker version mounts that parent folder to /app/config inside the container and expects to find the config.yaml file and the storage folder there.
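The expected layout can be sketched like this (illustrative only, with a throwaway /tmp path standing in for your real data location):

```shell
# Illustrative sketch: recreate the folder layout the docker storagenode expects.
# You pass the PARENT folder to docker; it is mounted to /app/config inside the
# container, so config.yaml and the "storage" subfolder must both live there.
PARENT=/tmp/storj-layout-demo
mkdir -p "$PARENT/storage/blobs"   # the Windows GUI node's data moves INTO "storage"
touch "$PARENT/config.yaml"        # generated by the setup step in real use
find "$PARENT" -mindepth 1 | sort
```

The point is simply that docker sees the parent folder, while the actual pieces live one level down.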
I have an identity.cert and identity.key, both located in my appdata\roaming directory.
Since the identity is currently on my local VM storage, and not on the data pool from TN, would I want to copy it over to the TN pool and then point the new node at that (assuming I can transition the pool over to TN Scale intact)?
I have a thread going over at TN to get insight on what path I should take there. I will check back in here once I have some basic answers to your questions and see what they are suggesting at TN.
It's better to copy/move the whole identity folder to the data location, to keep the identity and its data together, and then use the new path to the identity in your config.yaml (or in docker run, if you plan to migrate to the docker version).
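A minimal sketch of that move, using made-up /tmp paths in place of the real appdata directory and data pool:

```shell
# Hypothetical paths only: SRC stands in for the current identity folder in
# appdata\roaming, DST for a spot alongside the node's data on the pool.
SRC=/tmp/demo-appdata/Storj/Identity/storagenode
DST=/tmp/demo-datapool/identity

# Set up the pretend source identity for this demo.
mkdir -p "$SRC"
touch "$SRC/identity.cert" "$SRC/identity.key"

# Copy the ENTIRE folder, preserving attributes, so identity and data
# travel together from now on.
mkdir -p "$(dirname "$DST")"
cp -a "$SRC" "$DST"
ls "$DST"
```

After the copy, the new path (here "$DST") is what goes into config.yaml or the docker run mounts.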
You can open /scale/scaletutorials/apps/communityapps/addstorjnode/ and expand the spoiler "Click Here for More Information", then navigate to "Determine How Much Local Storage to Allocate to Storage Node"; there you can see how to provide a path to your data location and the identity.
Just a small update. I have finally managed to get a Windows VM running in TrueNAS Core. After some blind luck, I also have the current storage pool from my active node viewable and mounted in my test VM. I have also created a test zvol on a new pool and expanded that pool twice with no issues. I think this will work better as a scalable solution. Granted, data is no longer pouring in this last week like it was all month.
I have moved my Storj\Identity directory to the same volume as my data pool and corrected the locations within the config.yaml file. I have tested everything; data is flowing in and there appear to be no errors. The next step will be to move the data off the iSCSI pool to a new pool; that will probably be a few weeks out, until I have the disks to facilitate the move. In the meantime, thank you so much for the help so far.