Combining multiple mixed-size hard disks into a striped volume with Windows Storage Spaces?


I’ve been running a node for about 6 months, on a single 3TB HDD.
Current setup: Win10 Pro, Core i3 (4 GHz), 4GB RAM, SSD for the OS, and a 3TB HDD used exclusively for the node.
ISP/location: Europe, 1 Gbit/s downstream, 100 Mbit/s upstream.

Because of (or thanks to) various NAS upgrades and migrations, I now have a bunch of 3TB (7 or 8) and 4TB (2 or 3) HDDs, as well as an empty Sharkoon 5-bay RAID enclosure.

(so actually zero investment cost for the node)

I also have a Dawicontrol DC-624e RAID controller (4 internal SATA ports) and a DeLOCK PCI Express x2 card with 10 internal SATA 6 Gb/s ports (non-RAID, but 10 connections).

With all that, I thought I'd reuse it to up my stake in the node and migrate from the 3TB disk to a bigger volume.

Going for the max possible space, I was thinking of using the 10-port controller to hook up 10 HDDs and using Windows Storage Spaces to create a striped volume with parity of at least 14TB (Microsoft's way of mimicking RAID 5), so I can survive a disk failure and have time to replace it. (And maybe slowly grow the volume later?)
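As a sanity check on the capacity, a classic RAID-5-style calculation (usable space limited by the smallest disk) gives a lower bound; Storage Spaces parity allocates in slabs, so mixed disk sizes can actually yield a bit more. The disk counts below are just my own example mix, not a recommendation:

```python
def parity_usable_tb(disk_sizes_tb):
    """Lower-bound usable capacity of a single-parity stripe,
    RAID-5 style: (number of disks - 1) * smallest disk."""
    n = len(disk_sizes_tb)
    if n < 3:
        raise ValueError("single parity needs at least 3 disks")
    return min(disk_sizes_tb) * (n - 1)

# 8x 3TB + 2x 4TB on the 10-port controller:
print(parity_usable_tb([3] * 8 + [4] * 2))  # 27 (TB, raw lower bound)
```

So even the conservative estimate is well above the 14TB target (real formatted capacity will be somewhat lower, since 3TB drives show up as roughly 2.7TiB).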

The other option is to use the Sharkoon in RAID 5, but that would limit me to 5x 3TB disks (about 10TB usable). Also, I've heard those enclosures are not always stable? (Random disconnects.)

Using the PCIe RAID controller would limit me to just 4 disks.

Does anyone have experience using Windows Storage Spaces as the volume layer for a storage node?

Any feedback is welcome


I did that with an HP MicroServer with 4x 3TB enterprise HDDs totaling 12TB… until one of the disks went kaput!
Result? Lost my oldest node.


Did you create a spanned volume or a striped volume with parity?
The first would be bad; the second should have tolerated one disk failure.

I will run a test to see how Storage Spaces reacts when I unplug a disk before moving my node over.

Single-disk parity is still a minefield, and you're setting yourself up for an all-or-nothing approach. I would highly recommend running one node per disk instead, so if one disk fails, you don't lose everything.


Also, there's no need to go big to begin with. Ingress is kinda slow, so you'd be better off starting with one node on one disk, and then, when it's nearly full, starting a second node on a second disk, and so on.


running a node per disk instead

That would mean I'd have to set up (and maintain) 10 to 14 nodes on one machine.

Start with 2 or 3. They’ll take a while to fill up anyway. Expand as needed.


As far as I can tell, my current 3TB disk (2.6TB assigned to the node) is full, which is why I was thinking of expanding. The stack of 3TB drives I have lying around would not be used otherwise, so…

OK, but that means I'd have to rethink my SNO strategy, since as I understand it, running multiple nodes on Win10 is also not stable.

How can I set up 2 different nodes on the same Win8 machine (using two different HDDs)?

You can use docker or the community-managed toolbox.

I would personally pick the toolbox, as docker comes with some complications, but keep in mind that it's not an officially supported tool.

You can also switch to Linux, which I would recommend if you're not using this system for other stuff.
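For the docker route, running two nodes on one host mainly comes down to giving each container its own identity, storage path, and external port. A rough sketch (wallet, address, and paths below are placeholders; check the official docker run command in the Storj docs for the exact flags on your setup):

```shell
# Node 1: default ports, first disk
docker run -d --restart unless-stopped --name storagenode1 \
  -p 28967:28967/tcp -p 28967:28967/udp -p 14002:14002 \
  -e WALLET="0xYOURWALLET" -e EMAIL="you@example.com" \
  -e ADDRESS="your.ddns.example:28967" -e STORAGE="2.6TB" \
  --mount type=bind,source=/mnt/disk1/identity,destination=/app/identity \
  --mount type=bind,source=/mnt/disk1/storagenode,destination=/app/config \
  storjlabs/storagenode:latest

# Node 2: shifted external ports, second disk, its own identity
docker run -d --restart unless-stopped --name storagenode2 \
  -p 28968:28967/tcp -p 28968:28967/udp -p 14003:14002 \
  -e WALLET="0xYOURWALLET" -e EMAIL="you@example.com" \
  -e ADDRESS="your.ddns.example:28968" -e STORAGE="2.6TB" \
  --mount type=bind,source=/mnt/disk2/identity,destination=/app/identity \
  --mount type=bind,source=/mnt/disk2/storagenode,destination=/app/config \
  storjlabs/storagenode:latest
```

On Windows, the bind-mount sources would be drive paths (e.g. a folder per disk), and both shifted ports need to be forwarded on your router to the right container.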


I am not sure if Win8 is supported. As far as I know, only Win10 is.

I run a node on Win8 64-bit now without any problem, so it is supported. 🙂

I ran 40 nodes on 6 PCs with the toolbox.

I'll have a look at that. As long as I can minimize the time I have to spend on the box (which is why I thought of expanding to one bigger volume), I'm fine.
However, if I'm right, additional nodes need to go through the vetting process from scratch?

Correct, so it's advisable to start a new node before the others are completely full. That way you always get the full ingress.


I do that on one of my nodes, running 8 HDDs of varying sizes, and there have been zero problems. I'm using an array with parity.

Thanks, all.
I will probably combine some of the solutions from above:

  • set up an internal RAID with my internal RAID controller and move my current node to that
  • keep my external RAID separate and create a new node on it
  • put the remaining unused drives in a Windows Storage Space

That way I have 3 nodes, and a failure is limited to 1 node.