It’s not always about resources: especially if you’re using the computer for other purposes, it’s also about a clean separation of different systems, so a bug or glitch on one system won’t affect the other.
Furthermore, in my opinion running multiple instances on Linux is far easier than on Windows, for multiple reasons.
Both are possible, but the Docker image is prebuilt by the Storj community and therefore easier. For a service you need to tweak things yourself and also manage updating the binary, but it’s less resource intensive.
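For comparison, the docker setup is basically the one documented command (the wallet, address, and paths below are placeholders, not real values):

```
docker run -d --restart unless-stopped --stop-timeout 300 \
  -p 28967:28967/tcp -p 28967:28967/udp -p 127.0.0.1:14002:14002 \
  -e WALLET="0x..." -e EMAIL="you@example.com" \
  -e ADDRESS="your.ddns.tld:28967" -e STORAGE="2TB" \
  --mount type=bind,source=/mnt/storj/identity,destination=/app/identity \
  --mount type=bind,source=/mnt/storj/storagenode,destination=/app/config \
  --name storagenode storjlabs/storagenode:latest
```

Updates can then be handled by the watchtower container as described in the docs, so there’s nothing else to manage.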
It’s only a viable option if the CPU and motherboard support virtualization, but yes, it’s an option too. However, considering that Docker Desktop for Windows uses either WSL2 or Hyper-V under the hood anyway, Docker Desktop is a much simpler option than using these VMs directly.
Actually, you wouldn’t see a difference on Linux. The node will be executed in isolation, but on the same kernel, so there is almost no overhead.
But running a Linux binary/Docker image in a Linux VM is a big difference with real overhead compared to running a Windows binary, especially if you use the Hyper-V engine for said VM: you have to provision resources to it beforehand, unlike WSL2, which uses only the resources it needs (though the latest versions of WSL2 try to eat all available memory for caching and buffers, as Linux usually does).
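If WSL2’s memory appetite bothers you, you can cap it with a `.wslconfig` file in your Windows user profile (a minimal sketch; the values are just examples, pick what fits your machine):

```
# %UserProfile%\.wslconfig -- limits apply to the whole WSL2 VM
[wsl2]
# cap the RAM WSL2 can claim (example value)
memory=4GB
# optionally cap CPU cores as well
processors=2
```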
You may want to use the binaries if your OS (like on some routers and NASes) is not able to run Docker; otherwise I wouldn’t spend much time creating a service (two services actually: one for storagenode and one for storagenode-updater). For several nodes the complexity only grows (you would likely need to clone the binaries for each additional node to make them independent of each other).
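To give an idea of what “creating a service yourself” involves on Linux, here’s a rough systemd unit sketch (the paths and user are hypothetical, adapt them to your setup; you’d need a second, similar unit for storagenode-updater):

```
# /etc/systemd/system/storagenode.service
# Paths and user below are hypothetical; adapt to your setup.
[Unit]
Description=Storj storage node
After=network-online.target

[Service]
User=storj
ExecStart=/opt/storj/storagenode run --config-dir /mnt/storj/storagenode
Restart=on-failure
RestartSec=10

[Install]
WantedBy=multi-user.target
```

And for every extra node you’d clone the binaries, units, and config dirs, which is exactly where the complexity grows.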
I personally run nodes with Docker Desktop on Windows, and also as a Windows service. They work fine.
I also had a Docker node on a Raspberry Pi 3B+, but it stopped booting (the OS on the SD card is likely corrupted again) and I have no way to get to it physically, so I just switched the power off with a smart plug.