Virtualization Discussion

Hyper-V is bad because it adds a layer of complexity. The same applies to other hypervisors like VMware, KVM, or Xen, although Hyper-V is especially bad: it is by far the worst hypervisor there is.

Virtualization is good. It provides resource optimization, isolation, flexibility and scalability, disaster recovery, and high availability, to name a few. Hyper-V is probably the best option on Windows; however, I would have to double-check with the Rabbit (@arrogantrabbit). :)


The perspective that Hyper-V or any hypervisor introduces undue complexity likely stems from personal difficulty with the technology, rather than an objective flaw. My own setup runs remarkably well, demonstrating that what some perceive as complexity is manageable with proper knowledge and application.

Virtualization technologies, including Hyper-V, aim to maximize efficiency and adaptability. If these systems seem too complex, it may indicate a gap in familiarity rather than an issue with the technology itself. For those of us effectively utilizing these platforms, they offer significant performance benefits.

Critiques should be grounded in a thorough understanding of the technology, not individual challenges.


It should also be understood, though, that if a node operator uses a virtualization layer, they are responsible for setting it up correctly: it is not possible to provide reliable support for every configuration choice made there.


To question the idea that virtualization causes more problems, let’s simplify things. Virtualization separates the operating system from the physical computer, which usually makes managing resources more straightforward. It’s like turning one big computer into several smaller ones, each running its own tasks. This doesn’t just make things more flexible; it can also improve security, efficiency, and even cost-effectiveness. Setting it up correctly is key.

But, when it comes to running something like a Storj node, the main thing you need to think about is storage. That’s the part that “really” needs performance. And you don’t actually virtualize the storage part, do you? Unless you’re setting up something complex like a vSAN, but how many Storj operators are doing that?
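For context, a typical Docker-based node passes the storage straight through as a bind mount, so the hot path is not virtualized at all. A minimal sketch, loosely modeled on the documented `docker run` command; the paths, ports, and image tag are placeholders to adjust for your own setup:

```shell
# Bind-mount the host's storage directory into the container:
# the container writes directly to the host filesystem, so the
# storage path itself goes through no extra virtualization layer.
docker run -d --restart unless-stopped --stop-timeout 300 \
  -p 28967:28967/tcp -p 28967:28967/udp \
  --mount type=bind,source=/mnt/storj/identity,destination=/app/identity \
  --mount type=bind,source=/mnt/storj/storage,destination=/app/config \
  --name storagenode storjlabs/storagenode:latest
```

Whether the host is bare metal or a VM, the bind mount only performs as well as whatever filesystem sits underneath it.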

Most of the time, virtualization makes things simpler. So, have you come across any real examples where virtualization made things overly complicated? Especially considering that, for Storj operations, everything but the storage is pretty straightforward and doesn’t demand much.


Just a randomly found post here, didn’t have to search too long: Storagenode on FreeNAS - #3 by Odmin

Also, though not Storj specifically, a sysadmin at my previous job somehow managed to configure storage in Proxmox with fewer than 50 random write IOPS. That was quite a feat! A combination of parity RAID, fragmented thin storage, and bad guest drivers. :person_shrugging:
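For anyone who wants to check whether their own setup is in that territory, a quick random-write benchmark with `fio` makes it visible. A hedged sketch, assuming `fio` is installed and `/mnt/storage` stands in for your actual storage path:

```shell
# 4K random writes with the page cache bypassed (--direct=1), so the
# result reflects the backing storage stack rather than host RAM.
fio --name=randwrite --directory=/mnt/storage \
    --rw=randwrite --bs=4k --size=1G --direct=1 \
    --ioengine=libaio --iodepth=16 \
    --runtime=30 --time_based
```

The `iops` line in fio's summary is the number to compare: double-digit results on a virtualized stack would be consistent with the story above.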

TrueNAS CORE does include some support for virtualization through its built-in bhyve hypervisor, allowing virtual machines to run directly within the TrueNAS environment. While this feature enables some level of virtualization, TrueNAS CORE itself is primarily a NAS operating system, not a full-fledged hypervisor like VMware ESXi, Microsoft Hyper-V, or Proxmox VE, which are designed specifically for creating and managing virtual machines.

Of course, unexpected issues can arise, but I’m not sure how many people are directly using storage within their hypervisor unless they’re utilizing something akin to vSAN. Typically, storage is managed through some form of iSCSI, NFS, FC, RoCEv2, or NVMe-oF (yes, NFS works just fine). However, it can be frustrating to face criticism on this forum for employing virtualization, as if that were the root of all problems.

Major corporations leverage virtualization platforms for highly demanding applications, whether in cloud or on-premises environments. To suggest that Storj cannot perform under virtualization, especially when the storage component isn’t even virtualized, strikes me as absurd.

But yes, I agree with you. A certain level of awareness is indeed necessary. Take me as an example; my understanding of storage was initially lacking. Over time, I realized the issues with my nodes were due to storage, not virtualization itself.

Now, I use TrueNAS over iSCSI (which might not work as well if you’re putting an Asus router in between or using an outdated, subpar switch), but in my case, I have redundant Juniper data center switches designed for such tasks. Additionally, it’s worth mentioning that I run TrueNAS CORE virtually within the VMware fabric.

However, I also maintain another physical TrueNAS server that’s all SSD. The VM running TrueNAS receives its HBA controller via PCIe passthrough, making it a semi-virtualized setup, one could argue. This configuration allows for a blend of virtual flexibility and direct hardware access, ensuring high performance and reliability for my storage needs.

And yet, when I explain this, I still hear claims that virtualization is inferior and that one should only run a single disk per node. This is when it becomes a bit exasperating. My current storage solution outperforms a physical machine with a single disk by leaps and bounds.

@IsThisOn Actually, Hyper-V is not so bad at working with hardware, in my experience. Virtualizing the storage is the bad part, as is passing it in as shared folders over network filesystems like NFS or SMB/CIFS. On Windows, though, it works pretty well; with the same setup on KVM, the disks sometimes do not behave like bare metal and you can run into issues.
However, using iSCSI may solve that too. Disk throughput is actually not as good as everyone thinks: the I/O bypasses the host's cache, so you need to give the guest more memory to enable caching there, and that is not so easy if you use a metadata-hungry filesystem like ZFS or BTRFS.
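As an illustration of the cache point: on a Linux guest using ZFS, the cache (ARC) size is governed by the `zfs_arc_max` module parameter, so giving the guest more memory only helps if the ARC is allowed to use it. A sketch with an illustrative 8 GiB cap (run as root; the value is in bytes):

```shell
# Cap the ZFS ARC at 8 GiB at runtime (value in bytes).
echo $((8 * 1024 * 1024 * 1024)) > /sys/module/zfs/parameters/zfs_arc_max

# To persist the cap across reboots, set it as a module option:
echo "options zfs zfs_arc_max=8589934592" > /etc/modprobe.d/zfs.conf
```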

You are misunderstanding me.
Hyper-V is not complex to set up. A well-trained ape can set up Hyper-V. But Hyper-V adds a layer of complexity.
Is running a Hyper-V VM in Windows the same thing as running it bare metal? There is no added complexity? No added sources of error?
A VHD is the same thing as accessing a SSD/HDD directly?
Can a bare-metal machine run out of space without the OS knowing it?
No. That is all an added layer of complexity.

Virtualization is great and I love it.
Hyper-V is decent for a small business with pretty basic needs, but there is a reason why even MS does not use Hyper-V for Azure.

From a logical point of view, splitting one machine that has all the resources at its disposal into several smaller ones with limited resources could provide all the benefits you mention, except performance. I doubt that a storage node in a virtual machine can perform better than a non-virtualized one; at most it can perform the same, if you use the same config as the native OS.
If you change the environment, maybe yes, you could gain better performance, for example by virtualizing a Linux config on a Windows machine… maybe.


Virtualization itself is not, but storage, yes, several times “yes”. You can deliver storage to a VM in multiple ways, and only a few of them are “good enough”; those few are also not the defaults. Hence the observation that when a node has storage issues, in 90% of cases it is virtualized and/or passes the storage through a network filesystem. Actually, you can make it bad with passthrough too.

:thinking: Though, interestingly, I do not remember posts about issues from SNOs who use Hyper-V… It is always something on Proxmox/KVM/TrueNAS/etc., and usually in combination with a Windows guest.


I agree, it’s almost always mixing OSes, especially a Linux host with a Windows guest.

I’m also quite surprised that everyone wants to use VMs, while LXC containers serve the same purpose. Even Docker accepts parameters to limit resource use, and it already provides some kind of isolation.
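The resource parameters mentioned are the standard `docker run` flags; a minimal sketch with illustrative limits (the container name and image are placeholders):

```shell
# Cap the container at 2 GiB of RAM (no extra swap) and 1.5 CPU cores.
docker run -d --name limited-node \
  --memory 2g --memory-swap 2g --cpus 1.5 \
  debian:stable sleep infinity
```

Setting `--memory-swap` equal to `--memory` disables swap for the container; LXC offers the same kind of caps through cgroup limits.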

I think a VM is an unnecessary layer of complexity. Unless you want to run multiple nodes using a Windows guest, in which case running Linux in Hyper-V or WSL2 might be a solution.


You likely mean a Windows host; otherwise it’s nested virtualization, which would be bad^4 according to your rating :wink:

No, I mean a Linux host like Proxmox, libvirt/KVM/QEMU, …

Like Linux (host) > Windows (guest) > storagenode. Especially when there are multiple Windows guests, which is really resource-inefficient.

But in conjunction with

does that mean you are suggesting nested virtualization? A VM in a VM?
Because Hyper-V gives you the ability to run VMs, and WSL2 is a lightweight Linux VM.

Otherwise, you are suggesting that a person who wants to run a Windows guest should run a Linux VM on a Windows host instead. But I do not see the relation?