STORJ VMs vs Docker Containers

Hello All!

I’ve been gone for a few months (but kept my node online), but in the last month the Drobo I was using as an iSCSI target decided it was done and toasted itself and most of the drives in it. Such is life, but I did get a nice amount of money out of the node while it was up. Enough to build a small PC to replace it, as well as a few 6 TB NAS drives to start over.

So now that I have the ability to start again I wanted to get a feel for what the best solution is today for rebuilding. I’m debating between the following:

I’m going to keep using virtualization to some degree, but here are my options. If I’m missing anything or someone has a better solution, fire away.

  1. Windows Hyper-V Host (either 2012 R2 or 2016) with individual Ubuntu VMs (1 vCPU & 2 GB RAM), OS drive on SSD and the Storj data mapped to a single 6 TB drive. As the drive fills up, spin up another VM with the same specs, and so on until I run out of drive bays (10 bays).

  2. ESXi Host (similar to above)

  3. Run one Ubuntu VM with more than one storagenode Docker container, each with its own drive.

  4. A FreeNAS solution running ZFS? Does storagenode run on FreeNAS?

  5. Hardware RAID? Last time I was around the forums this was not the preferred solution.

While the Drobo kicking the can sucks, I was never 100% happy with the setup so I have the ability to start fresh.

Thanks!
Chess

They gave me a couple of good suggestions to try.

If the machine is not going to be used for anything else but storagenodes… why not plain Ubuntu on bare metal?

Then make a Docker container per disk you add.

1 Like

How about Proxmox?

Running one VM per node is a waste of resources IMO; there’s no downside to having many nodes on one host.

2 Likes

Proxmox is also a very good hypervisor indeed.
It even has RAM deduplication.

Thanks for that link. I’m just reading through it now.

As for bare-metal Ubuntu, I considered that, but I like the flexibility that virtualization gives me. I can easily upgrade the hardware under the hood and the VMs don’t really care. While that adds a bit of overhead, it’s pretty low.

So, based on your suggestion: an Ubuntu VM with all available resources and then a separate storagenode Docker container per disk. This was kind of what I was thinking… I can even name the drives so that I know which one belongs to which container. I kind of like it.
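
Roughly what I’m picturing, if I’m reading the docs right (the wallet, email, address, paths and sizes below are just placeholders, and the flags are the standard storagenode run options as far as I can tell, so check the current docs):

```
# label each filesystem so the mount point follows the physical disk
# (e.g. e2label /dev/sdb1 storj1, then mount by LABEL=storj1 in /etc/fstab)

# node 1 lives entirely on /mnt/disk1 (its own identity, data and config)
docker run -d --restart unless-stopped --stop-timeout 300 \
  -p 28967:28967 -p 14002:14002 \
  -e WALLET="0x..." -e EMAIL="you@example.com" \
  -e ADDRESS="my.ddns.example:28967" -e STORAGE="5TB" \
  --mount type=bind,source=/mnt/disk1/identity,destination=/app/identity \
  --mount type=bind,source=/mnt/disk1/storj,destination=/app/config \
  --name storagenode-disk1 storjlabs/storagenode:latest

# node 2 is the same image pointed at /mnt/disk2, just with different host ports
docker run -d --restart unless-stopped --stop-timeout 300 \
  -p 28968:28967 -p 14003:14002 \
  -e WALLET="0x..." -e EMAIL="you@example.com" \
  -e ADDRESS="my.ddns.example:28968" -e STORAGE="5TB" \
  --mount type=bind,source=/mnt/disk2/identity,destination=/app/identity \
  --mount type=bind,source=/mnt/disk2/storj,destination=/app/config \
  --name storagenode-disk2 storjlabs/storagenode:latest
```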

I really don’t know a lot about it. Any setup guides? I’ve heard that it supports Docker and ZFS out of the box, but I’ve always had licenses for Hyper-V and ESXi.

This is exactly why I wanted to ask this.

I’m off to see if I can learn about Proxmox and see if I want to take the jump.

Docker nodes can be easily moved to new hardware… as all dependencies are covered by the image.
No need for hypervisors or VMs in that case.

The image contains the needed program binaries and such; you mount the filesystem into the container.
When you move to new hardware, you just set up a new container from the image and mount the disks from the old hardware, if that makes sense… :slight_smile:

You cannot get any more flexible, I think.
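
As a rough sketch (the container name, ports and mount points are the same made-up placeholders as in the example above):

```
# old host: stop and remove the container; identity and data stay on the disk
docker stop -t 300 storagenode-disk1
docker rm storagenode-disk1

# new host: attach the disk at the same mount point and start a fresh
# container with exactly the same flags and mounts as before
mount LABEL=storj1 /mnt/disk1
docker run -d --restart unless-stopped --stop-timeout 300 \
  -p 28967:28967 -p 14002:14002 \
  -e WALLET="0x..." -e EMAIL="you@example.com" \
  -e ADDRESS="my.ddns.example:28967" -e STORAGE="5TB" \
  --mount type=bind,source=/mnt/disk1/identity,destination=/app/identity \
  --mount type=bind,source=/mnt/disk1/storj,destination=/app/config \
  --name storagenode-disk1 storjlabs/storagenode:latest
```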

1 Like

I’m an old-school VM guy, so I have to get with the times. I do run a lot of Docker containers (for my Linux ISO collection :stuck_out_tongue:) so I do see the benefit of Docker; it’s just a little hard for me to grasp that the host is no longer important.

I think, based on the suggestions you and @hoarder gave me, I might look at Proxmox as a Docker host and take Ubuntu out of the mix. That saves some resources and leaves me with the ability to run a VM on it in an emergency if I lose my other server.

ESXi is superior to anything else; of course you won’t get to enjoy ZFS, but really RAID 6 can sort of perform the same task… just not at the level ZFS can…

I jumped directly from Windows into Proxmox using ZFS, though keep in mind there is a good deal to learn about ZFS; it’s quite different, or at least it was for me… only 4½ months in, so… still very much a rookie.

Hyper-V I tried, and it was actually why I switched to Linux: I wanted to do a basic passthrough of a USB port into a VM… and it was painful… I tried to get that to work for weeks with no luck… in Proxmox it was a three-click procedure… :smiley:

That said, Proxmox has its own caveats, and I have a bit of a love-hate thing going with it…
It’s Debian, so that’s good because Debian is the most widely used and thus the most actively developed, and Proxmox is basically just an add-on to Debian even though it can be installed directly…

Meaning it’s basically the newest stable Debian kernel, which is nice…
Some stuff in Proxmox, though… like moving a VM’s virtual hard drive to another VM… then you are into the console, renaming the disk files and entering the virtual disk’s location into the VM config file… and the names are kinda nonsense…
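
Roughly, from memory, that console dance on ZFS-backed storage looks something like this (the VM IDs, the rpool/local-zfs names and the disk index are just examples, so double-check against your own setup):

```
# rename the ZFS volume so it follows Proxmox's vm-<vmid>-disk-<n> convention
zfs rename rpool/data/vm-100-disk-0 rpool/data/vm-101-disk-0

# drop the old "scsi0: local-zfs:vm-100-disk-0,..." line from the source VM
nano /etc/pve/qemu-server/100.conf

# let Proxmox pick the volume up as an "unused" disk on the target VM,
# then attach it from the GUI (or add a scsiN: line to 101.conf by hand)
qm rescan --vmid 101
```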

ESXi would be my choice if I had to set it up again… but Proxmox is free, and if you are skilled in a Debian terminal it’s most likely going to be great…

The GUI in Proxmox is nice… but… it’s not much of an interface IMO… or maybe I’ve not taken the time to learn how it really works… :smiley: very possible…

And it has some sort of network configuration trap built in, so it will shut down the network when one starts to tinker a bit too much… I suppose it’s there to turn support calls into easy subscription sales.
Though with a correctly configured network config, or whatever it’s called… I think I had to set hotplug on my NICs for them to come up on every reboot… the only way I found around it taking them offline for some reason…
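
If I remember right, the relevant bit lives in Debian’s /etc/network/interfaces; something like this (interface names and addresses here are made up):

```
# /etc/network/interfaces -- Proxmox uses Debian's ifupdown
auto lo
iface lo inet loopback

# "auto" raises the NIC at boot; "allow-hotplug" raises it whenever the
# kernel detects the device, which is what I ended up needing
allow-hotplug eno1
iface eno1 inet manual

auto vmbr0
iface vmbr0 inet static
    address 192.168.1.10/24
    gateway 192.168.1.1
    bridge-ports eno1
    bridge-stp off
    bridge-fd 0
```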

Very happy with ZFS though… and Proxmox is rock stable; I haven’t had any issues of note in that department. ZFS does like its RAM, though…

And I would of course run some kind of redundancy… I mean, why give oneself extra work because of crappy HDDs… but I suppose if you’ve got 1 x 6 TB for the project for now… then that will of course have to do…
But personally I would recommend raidz1 vdevs of 4 drives, which lets you build a pool of 3x raidz1, allowing the L2ARC and SLOG to serve all the drives and get a hell of a lot of IOPS.
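
Something like this, with made-up device names (the pool name, disks and the cache/log partitions are all placeholders):

```
# three raidz1 vdevs of 4 drives each, striped into one pool
zpool create tank \
  raidz1 /dev/sda /dev/sdb /dev/sdc /dev/sdd \
  raidz1 /dev/sde /dev/sdf /dev/sdg /dev/sdh \
  raidz1 /dev/sdi /dev/sdj /dev/sdk /dev/sdl

# optional SSD partitions for the read cache (L2ARC) and sync-write log (SLOG);
# both serve the whole pool, which is where the extra IOPS come from
zpool add tank cache /dev/nvme0n1p1
zpool add tank log /dev/nvme0n1p2
```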

I don’t really understand why people try to make it all difficult. Just make as many Windows GUI nodes as you need, 1 per HDD; simple to manage, no overhead. Keep things simple and there will be fewer problems.

2 Likes

Maybe I’ll stick with ESXi. I know it well and already have one VM host here. Passthrough of USB works on ESXi, not that I need it here.

Still reading up on Proxmox, and it looks nice, just not as polished as ESXi.

I did my setup because I wanted options… I can use my pool for 100 different things while it’s running a storagenode… it has redundancy so I don’t have to worry too much about the data stored, it’s multiple times faster than a regular HDD, and there are no write holes if the system crashes…

And over the last few years I’ve gradually come to hate Windows for many reasons, so the switch made sense to me…

1 Like

ESXi is brilliant… I’ve only tried to use it briefly, but that’s the general consensus of all the pros…
And the learning curve of new stuff can just be painful… of course, I bet you can install Proxmox as an ESXi VM if you wanted to test it out lol

Not difficult; I’m just not a fan of running services inside Windows for anything that needs 24/7 uptime. At least not on a desktop Windows OS. Windows Server is not bad.

Are you suggesting running all of the nodes on one Windows Server? I never considered that. Is there a guide for installing or setting up more than one GUI node on one Windows install?

I made my own software to make it possible.

So nested virtualization :slight_smile: Docker on Ubuntu, on Proxmox, on top of ESXi. Is that what they call a hyperconverged setup?

I have a few days till my new hardware comes in, so I’ll play around with a few of the ideas here and find one that makes sense for me and keeps things as simple as possible.

Appreciate everyone’s ideas!

1 Like

Not a practical thing, but it allows one to test it out and see if there is something cool that might be useful… but I bet ESXi will win out for a good few years still…

Hyperconverged is where your VM host cluster is also your SAN cluster.
Proxmox on Ceph or ESXi on vSAN are examples.

Some of the things ESXi has to offer are really great, but when software-defined storage is involved it just loses. Proxmox lets you use anything that Linux has to offer: mdadm, LUKS, LVM, ZFS, Ceph. Storage can run directly on the host, so there is no need to pass through individual drives or controllers and deal with the limitations. Speaking of which, passing through individual drives in ESXi is a major pain.
Many ESXi features are locked behind vCenter, so you need to run one. The cheapest legal option for a license would be a $200/year VMUG Advantage subscription, but even if you deal with that, the appliance itself is a resource-hungry beast.

FWIW I’m using ESXi now and I’m switching to Proxmox because of the reasons above. The only thing I miss so far is the ability to use Workstation Player as a remote console.

Certainly not as polished as VMware is.
But I do not like VMware’s crappy web GUI.

I run Hyper-V clusters professionally, so I could run my nodes there.
But Proxmox or XCP-ng is also an option for me.