Hiya Storj forum.
I am consolidating some of my disk arrays, and in the process I am thinking of redesigning my storage setup. Here is my current setup, along with the explanation of my choices:
- Synchronous 1 Gbit WAN
- Two identical VMware nodes:
  - 6c/12t
  - 64 GB RAM
  - 2 TB local NVMe
  - 1x 1 Gbit for the VM network
  - 2x 1 Gbit for vMotion and iSCSI
- 3 Synology boxes, each with 4 disks in RAID5. Each NAS has 2x 1 Gbit links to provide iSCSI, and 1x for management and regular file transfers.
  - One is running btrfs because it also hosts my own files, and I use the snapshot feature there.
  - The other two are running ext4 and host mostly Storj files.
- 14 nodes, ranging from around 2 TB to 10 TB. All of them are Windows VMs.
  - Why Windows? Because I am comfortable with Windows, and Storj was intended primarily as a good way of mimicking a “real” user with real data when I got my certs. It has since turned into a great hobby.
- Each VM has the following:
  - 3 cores
  - 6 GB vRAM
  - C:\ drive on the VM host's local NVMe. I have not yet moved the Storj databases to C:, but this is in my pipeline… soonish. You know the feel. (See the config sketch right after this list.)
  - E:\ (Storj) is made of enough 1 TB VM disks spanned together in Windows.
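As I understand it, the database move itself is just a couple of lines in config.yaml plus a restart; a minimal sketch, assuming the standard storagenode option names (verify the keys and path against your own config before copying):

```yaml
# config.yaml - assumed keys and an example path, check your node's
# config before relying on this
# total space offered to the network
storage.allocated-disk-space: 10.0 TB
# relocate the SQLite databases off the data volume, e.g. onto the local NVMe
storage2.database-dir: C:\storagenode\databases
```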
All LUNs that present VMware datastores for Storj on the Synology boxes are thick provisioned, and all the VM disks are thin provisioned in their respective VMware datastores. When a VM drops below 1 TB free on its E:\ drive (Storj data), a Jenkins job automatically adds another 1 TB disk to the VM, expands the VM's drive pool, updates the config.yaml
file, and restarts the node. This works flawlessly and without my intervention. The thought process behind this approach came about when I was still at two nodes on a single Synology: by splitting my Storj data into 1 TB chunks, I could more easily juggle data between the Synology boxes, which was great at the time because it let me utilize the space I had. I am now worried that it leaves too much performance on the table.
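For the curious, the guest-side half of that job boils down to something like the sketch below. Treat it as a simplified illustration rather than my pipeline verbatim: the config path, the "storagenode" service name (the Windows installer default), and the diskpart step are assumptions that depend on your own layout.

```python
# Simplified sketch of the guest-side expansion steps. Paths, the service
# name, and the diskpart commands are assumptions; adapt them to your setup.
import re
import subprocess
import tempfile
from pathlib import Path

CONFIG = Path(r"C:\Program Files\Storj\Storage Node\config.yaml")  # assumed path
STEP_TB = 1.0  # each new VM disk adds 1 TB

def extend_volume() -> None:
    """Grow E: onto the newly attached disk via a diskpart script."""
    # The exact commands differ for a dynamic spanned volume vs. Storage
    # Spaces; plain "extend" is the simplest grow-into-free-space case.
    with tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False) as f:
        f.write("select volume=E\nextend\n")
        script = f.name
    subprocess.run(["diskpart", "/s", script], check=True)

def bump_allocation(config: Path, step_tb: float) -> None:
    """Raise storage.allocated-disk-space in config.yaml by step_tb."""
    text = config.read_text()
    m = re.search(r"storage\.allocated-disk-space:\s*([\d.]+)\s*TB", text)
    if not m:
        raise RuntimeError("allocation line not found; config layout differs")
    new_line = f"storage.allocated-disk-space: {float(m.group(1)) + step_tb:.1f} TB"
    config.write_text(text[:m.start()] + new_line + text[m.end():])

def restart_node() -> None:
    """Restart the node service so it picks up the new allocation."""
    subprocess.run(["powershell", "-Command", "Restart-Service storagenode"],
                   check=True)

if __name__ == "__main__":
    extend_volume()
    bump_allocation(CONFIG, STEP_TB)
    restart_node()
```

The Jenkins side just hot-adds the 1 TB VM disk first and then runs these steps inside the guest.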
I’ve recently acquired a 12-bay rackmount Synology. It currently has 6x 20 TB disks in RAID5 and 4x 1 TB SATA SSDs in RAID10 for caching. I also got two additional disk shelves, each with 12 slots. When I fill up the main unit, I will create a new volume on the disk shelf, which in time will expand to 8x disks and 4x SSDs as well. With my current growth rate, it will take a fair amount of time before I run out of disk slots to expand into.
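For a rough sense of that headroom: 6 × 20 TB in RAID5 comes out to (6 − 1) × 20 = 100 TB usable in the main unit alone, before either shelf holds a single disk.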
I am aware that Storj discourages the use of RAID for node disks, but I house a lot of other data, so for now, having RAID arrays present storage for my VMs is my preferred way of operating.
Here comes the question(s):
- Should I ditch the Storj VMware datastores entirely and just make LUNs that the VMs attach to directly, thus eliminating a layer of virtualization? (The Synology boxes will still present iSCSI LUNs to non-Storj VMs.)
EDIT: I’ve fixed the formatting. This turns out to just be markdown - great!