New Storage node

Hi everyone.

As my first node is nearing capacity and its OS (Ubuntu 16.04) is nearing EOL, I wanted to set up a new node.

I was planning to repurpose old hardware I have lying around.
It is now running Folding@home, which could continue inside a VM if that's necessary, or be turned off completely.

What I have for hardware:

HP EliteDesk 800 G1 DM:

  • i5-4590T (Haswell), 4C/4T @ 2 GHz, boostable to 3 GHz
  • 8 GB of RAM
  • A couple of USB 3 hard disks

I am going to try to boot from an NVMe SSD; people haven't had much luck with it… but we'll see.
I can always fall back to SATA, but then I cannot use that slot for a Storj disk :frowning:.

It has 6 USB 3 ports; initially I'm going to use just 4.

I need some suggestions for the OS.
You guys know more than I do, so please tell me your ideas.

It is licensed for Windows 10 Pro, and that is what's on there now.
I am familiar with Linux, so that would also not be a problem.
I would like to run inside Docker, as I like the way Watchtower updates my current node and it doesn't steal much time from me playing games :stuck_out_tongue:.

I was thinking of two ways.
Either… install FreeNAS, set up the disks with ZFS (the 8 GB of RAM could be upgraded to 32 max), and run Storj inside a bhyve VM running Ubuntu.

Or… use Windows 10 and make use of Storage Spaces to pool the disks together (and stay flexible for upgrading the disks to bigger ones later using the payouts).

Maybe you have suggestions on how to approach this.
Other OSes are welcome too.



This basically means you're running on Linux anyway. If the host OS is Windows, that would require virtualization, which unfortunately comes with a lot of added issues. So you should really run Linux on it.

I would really recommend skipping some of the layers of complexity you suggest. Instead, just run one node per HDD: one container per node and no virtualization whatsoever. This gives you the most flexibility for upgrades and the best income per TB on average. You can freely mix HDDs of different sizes and upgrade them one by one when all connections (SATA/USB) are in use. Or just add a SATA controller to expand.
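For reference, the one-node-per-HDD setup boils down to one `docker run` per disk. A rough sketch of how those commands line up, in Python for illustration (the image name, internal port 28967, and mount destinations follow the usual storagenode pattern, but the helper function, disk paths, and port numbering here are made up; check the current official docs before using any of it):

```python
def node_run_command(index: int, mountpoint: str, external_port: int) -> str:
    """Build a docker run command for one storagenode container per disk.

    Hypothetical helper: identity and data are assumed to live in
    sub-folders of each disk's mountpoint.
    """
    return (
        f"docker run -d --restart unless-stopped "
        f"--name storagenode{index} "
        f"-p {external_port}:28967 "
        f"--mount type=bind,source={mountpoint}/identity,destination=/app/identity "
        f"--mount type=bind,source={mountpoint}/data,destination=/app/config "
        f"storjlabs/storagenode:latest"
    )

# One command per USB disk, each with its own external port:
for i, disk in enumerate(["/mnt/disk1", "/mnt/disk2"], start=1):
    print(node_run_command(i, disk, 28966 + i))
```

Each node needs its own identity and its own forwarded port; other than that, the containers are completely independent, which is the point of this setup.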

But that would mean… vetting 4 nodes.
Waiting a long time.

Also… if one drive fails, I lose that node's held amount.

Isn't that counterproductive?

Which is another reason I would like to retire my old node: the large held amount I would like to receive back.

As for Docker on Windows: it'll be a Hyper-V VM with Ubuntu running Docker, not Docker straight on Windows.
With Hyper-V virtual disks on the Storage Spaces pool.

Can you tell me what kind of problems arise when virtualizing on Windows?

Adding a SATA controller is not really possible:

There are many discussions on the forum already about RAID vs non-RAID setups. On average you should be making more even if you lose a drive once in a while, because you can use all available space instead of spending it on redundancy. Yes, you would lose your held amount if a drive fails, but you'll be making more money every month, which more than compensates for that.

As for vetting: once at least one node is vetted, you get all possible traffic you can get on your IP, so it doesn't matter if the others take long to vet. I'd spin up at least 2, maybe 3, one after the other. That way, if you're unlucky and a node dies early on, you still have other vetted nodes getting 100% of the ingress traffic. I should state that many people disagree with me on that approach, so search the forums for other opinions as well; we seem to be fairly evenly split across both sides.

There is one additional complication, though. Since you're planning on using HDDs over USB, which is far less reliable than internal connections, building a RAID-like system on top of that sounds like an especially bad idea: the likelihood of a connection failing is a lot higher on such a setup, and that would instantly degrade the RAID or even kill it, losing all nodes at the same time.

The issues with Docker on Windows are mostly related to the use of a network protocol for sharing the disks. I think you might have the same issue with a Hyper-V VM, since I'm pretty sure that's what Docker uses in the background as well. SQLite is not compatible with these network protocols, and this can lead to database corruption. Linux just provides a more reliable setup.
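As an aside, SQLite ships a built-in way to spot exactly this kind of corruption. A minimal sketch (the database path is an example; the node keeps several `.db` files in its data directory):

```python
import sqlite3

def db_ok(path: str) -> bool:
    """Run SQLite's built-in integrity check.

    Returns True if the database reports 'ok'. A database corrupted by
    unsafe access (e.g. over SMB) returns error rows instead.
    """
    con = sqlite3.connect(path)
    try:
        (result,) = con.execute("PRAGMA integrity_check").fetchone()
        return result == "ok"
    finally:
        con.close()

# e.g. db_ok("/mnt/storj1/data/storage/bandwidth.db")
```

Checking this after any suspicious disconnect is cheap insurance, whatever OS you end up on.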

I hear you…

I think I am more on the conservative side of things.
I think it is better to earn a bit less per month, but be more sure I'll still have the nodes after a couple of months.
I do not think earning 40% more with separate disks beats the peace of mind I have when RAIDing/pooling the disks together.
It is also about me not having to worry too much.
When a disk goes bad, I can swap it and go back to sleep.
If a second disk fails while rebuilding… tough luck. (When the array gets big enough, there will be extra safety built in.)

HDD over USB 3 is sadly the only way I can do this.
The device has only one SATA port, which is still being used by the boot SSD until I can figure out how to boot from NVMe.
Also, I already have some USB 3 disks from v2 and a previous project, and my current node runs on one.

I think I could run some software to warn me about loss of connection to the disks, so I could investigate and hopefully fix it before the node is DQ'ed.
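That kind of watchdog can be very simple. A hypothetical sketch (the mountpoint paths are examples; in practice you would run this from cron or a systemd timer and hook the result up to whatever alerting you like):

```python
import os

def missing_mounts(expected: list[str]) -> list[str]:
    """Return the expected mountpoints that are no longer mounted.

    A USB disk that dropped off the bus leaves its mountpoint as a plain
    (usually empty) directory, which os.path.ismount detects.
    """
    return [m for m in expected if not os.path.ismount(m)]

# Example check; alert whenever the list is non-empty:
# missing_mounts(["/mnt/storj1", "/mnt/storj2"])
```

This won't prevent a disconnect, but it shrinks the window between a drive dropping and you noticing.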

I'll search the forums for whether and how Hyper-V works with Storj.
Because these will be "local" disks, not disks over iSCSI or SMB/NFS, I think one would be fine virtualizing.

Thanks a bunch for your time and explaining everything!

Keep in mind that a disconnect while the system is online would instantly require a rebuild in most RAID-like setups, and a second disconnect is the end of the line. USB drivers aren't the most stable and could cause multiple ports to fail at the same time. You say you're conservative with these things, and I get that, but I honestly feel that in your case running single disks is the safer solution, regardless of which side of the fence you normally fall on in the RAID discussions. It's possible Storage Spaces deals better with connection losses, though; I have never used it personally.

As for SMB: with the Docker setup on Windows, SMB is used to share the local disks with the VM, and that's what causes the problems. I'm not sure if there are options in Hyper-V to avoid that, though. Actually, in your suggested setup you can probably just give the VM direct access to the entire USB device. I think that would work.

Therefore you will either need to:

  1. use a Samsung 950 Pro, which comes with its own boot ROM, or
  2. use a modded BIOS (which won't be available for that machine, I'd say), or
  3. mod the machine yourself.

Also keep in mind that if you try to use an M.2 SATA SSD as a workaround, there is a possibility of the board disabling its 2.5-inch SATA bay.

I thought of storing the VHDX files (like VMDKs, but for Hyper-V) on the local disks,
then making a virtual machine in Hyper-V with those disks,
then installing Ubuntu (or any other Linux OS) and installing Docker there.

I am most certainly NOT going to install Docker on Windows directly.
I've read the horror stories from people doing that (and Docker uses a Hyper-V machine anyway).
I think I'm better off making the VM myself so I can change the settings as I see fit.

I do struggle a bit to believe there is a network driver somewhere in the chain: physical disks > host OS > Hyper-V > virtual disks > virtualized Linux VM > Docker.
I do not think Hyper-V uses SMB to let the VM access its virtual disks stored on physical hard disks plugged into the host's USB 3 ports.

Hyper-V USB passthrough is not something I want to try.
I'd rather just create a big VHDX on a disk that is local to the host.

I am not sure how Storage Spaces handles disconnects of the disks.
ZFS handles it perfectly; it'll just write back the changes made since the disconnect.

I'll test Storage Spaces this evening by randomly unplugging drives while the VM writes to the virtual disks.
I'll post back the results sometime this week.
If Storage Spaces cannot handle this, I could also buy more RAM to run FreeNAS, run Docker in a VM there, and virtualize the current OS.
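For that unplug test, a simple write-and-verify loop makes corruption obvious. A rough sketch (the path, file count, and sizes are arbitrary; this is a stress probe, not a real test suite):

```python
import hashlib
import os

def write_and_verify(path: str, count: int = 100, size: int = 4096) -> bool:
    """Write files with known content to `path` and read them back.

    Run this inside the VM while pulling drives from the pool; any
    IOError or checksum mismatch means the pool did not survive the
    disconnect cleanly.
    """
    for i in range(count):
        data = os.urandom(size)
        digest = hashlib.sha256(data).hexdigest()
        fname = os.path.join(path, f"probe_{i}.bin")
        with open(fname, "wb") as f:
            f.write(data)
            f.flush()
            os.fsync(f.fileno())  # force the write past the page cache
        with open(fname, "rb") as f:
            if hashlib.sha256(f.read()).hexdigest() != digest:
                return False
    return True
```

The `fsync` matters: without it, writes can sit in RAM and the test would pass even on a pool that silently dropped data.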

Thanks for thinking along with me.
I really appreciate it.

Mmm, sounds like too much trouble… and it could make the machine extra unstable.

I have an M.2 NVMe and an M.2 SATA SSD lying on my desk at work.
I'll give it a try.

Guess I won't spend much on this.

My two cents:

Bare-metal Linux (Ubuntu 18.04 or whatever flavor you like)
1 disk/node

Make sure to place the data directory in a sub-folder of each mount (to avoid disqualification if a mount fails).

(Euh… guess it’s pretty much a +1 for @BrightSilence)
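That sub-folder trick can also be enforced at startup. A hypothetical sketch (function name, sub-folder name, and paths are made up for illustration):

```python
import os

def storage_dir(mountpoint: str, subdir: str = "storagenode") -> str:
    """Return the node's data directory only if the disk is really mounted.

    Because the data lives in a sub-folder of the mount, a failed mount
    leaves the path missing, so the node refuses to start instead of
    running against an empty directory and getting disqualified.
    """
    path = os.path.join(mountpoint, subdir)
    if not os.path.ismount(mountpoint) or not os.path.isdir(path):
        raise RuntimeError(f"{path} unavailable; is {mountpoint} mounted?")
    return path

# e.g. storage_dir("/mnt/storj1") -> "/mnt/storj1/storagenode"
```

Docker's bind mount will refuse to start the container if the source path is missing, which gives the same protection for free when the data sits in a sub-folder.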

I've been running multiple nodes on single USB disks for a long time… no real issues. At one location I've got a couple of SMR disks, so I've just spun up extra nodes to divide the load.

Anyhow… let us know how the setup ends up being designed :+1::slightly_smiling_face:

I think the SMB thing is only used when you share an existing volume/path with the VM, not if you're using USB passthrough or virtual disks like you mentioned. Virtual disks may introduce their own overhead, though I don't expect any of the issues SMB has there.

That said, I really think you should go the much simpler route of just Docker containers on a Linux OS with one node per HDD. There is a lot less that could go wrong there.



A lot less could break.

But do not forget that this machine is already in use, and that its load should be transferred to the new install.
Be it a VM, be it installed locally.

I have to think about it… and will test both ways.

Thanks for the suggestions!

And… the planned machine running Windows 10 is already at version 2004.
Soooooo… Ubuntu native it is.
I do not want to risk the disks getting corrupted because Microsoft keeps butchering the updates.

If you use one disk per node it will not affect you, even though I agree that Windows updates can corrupt your system.


I guess the Storj network is its own redundancy, so any local RAID is overkill.