Feeling saucy: SATA HDD -> NVMe SSD Spanned NTFS Volume

Having successfully recovered my corrupted ReFS SATA drive to an NTFS drive, the nonstop 90% utilization of my SATA HDD had me hankering for NVMe SSD speeds.
I have a bunch of spare NVMe drives lying around, and after trying the AMD RAIDXpert2 garbage, then Microsoft Storage Spaces, I finally settled on a plain old dynamic spanned volume.

Wish me luck since I’m too dumb to just stick with a stable NTFS SATA HDD setup. I’m sure one of these will go corrupt and finally kill my node. Until then, I can confirm I’m having fun. Curious if anyone else took this route, since it seems the least sensible option to use lol.
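In case it helps anyone reproduce this, here's a rough sketch of building a spanned dynamic NTFS volume with diskpart. The disk numbers and drive letter below are placeholders (run `list disk` to find yours), and note that converting to dynamic is effectively one-way without wiping the disks:

```
DISKPART> list disk
DISKPART> select disk 1
DISKPART> convert dynamic
DISKPART> select disk 2
DISKPART> convert dynamic
DISKPART> create volume simple disk=1
DISKPART> extend disk=2
DISKPART> format fs=ntfs quick
DISKPART> assign letter=D
```

The `extend disk=2` step is what turns the simple volume into a spanned one across both disks. Disk Management can do the same thing through the GUI.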


Man if I had the IO of all those NVMe drives I’d be trying to find a way to run dozens of nodes on them (then as each grew large enough… move them to their own vanilla SATA HDD). But sounds like a cool setup: good luck!


I’m fortunate to have a mobo that supports 3 M.2 drives and x4x4x4x4 bifurcation on the PCIe slot. This also adds to the questionable stability.


Some of those consumer MoBos with multiple M.2 slots actually share bandwidth between the slots, making it impossible to reach the NVMe drives’ full potential.

Guess how I learned this fact…


Most of these drives I’ve got lying around are from cheap Amazon deals. They do the same kind of sleight-of-hand marketing with the actual drives. They’ll advertise the drive’s full potential, but neglect to mention things like HMB (using off-drive system memory for buffering), or that you only get 2 PCIe lanes if you use a Gen5 M.2 slot (a Gen4 slot gives you the full 4 lanes). You have to hunt around in the fine print for those details. Fortunately, anything is faster than a spinny drive, so at least for my Storj node I can live with the surprise limitations.
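On Linux you can see whether a drive actually negotiated its full lane count with `lspci -vv`: the `LnkCap` line shows what the device supports and `LnkSta` shows what it got. A small sketch that parses that output (the sample text here is illustrative, not from a real drive):

```python
import re

# Sample lines in the format printed by `lspci -vv` for an NVMe device.
# LnkCap = what the drive is capable of; LnkSta = what was negotiated.
LSPCI_OUTPUT = """\
LnkCap: Port #0, Speed 16GT/s, Width x4, ASPM L1, Exit Latency L1 <64us
LnkSta: Speed 16GT/s (ok), Width x2 (downgraded)
"""

def link_widths(lspci_text: str) -> tuple[int, int]:
    """Return (capable_width, negotiated_width) parsed from lspci -vv output."""
    cap = re.search(r"LnkCap:.*Width x(\d+)", lspci_text)
    sta = re.search(r"LnkSta:.*Width x(\d+)", lspci_text)
    if not cap or not sta:
        raise ValueError("no LnkCap/LnkSta lines found")
    return int(cap.group(1)), int(sta.group(1))

capable, negotiated = link_widths(LSPCI_OUTPUT)
print(f"capable x{capable}, running at x{negotiated}")
if negotiated < capable:
    print("link is downgraded -- check slot sharing / bifurcation")
```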


It’s a mess, especially with Intel-based motherboards that want to provide 3781 onboard devices while the CPU only provides 16 PCIe lanes. So they either have to use a PCIe switch (part of the chipset, the next-largest chip with a heatsink) and share bandwidth, or do some walking on eggshells with muxing (if you want this PCIe port to run at x4, then your eSATA1, SATA5, and M2-1 get disabled, etc.).

AMD MoBos don’t usually have to jump through these hoops because their processors have a massive number of PCIe lanes.

The situation is most egregious on consumer boards, especially “gaming” boards, where marketing prevailed and they just had to have a feature list that wraps around the box twice, never mind that only 10% of those features can be used concurrently.


With Storj? NVMe has been a huge boost in IO performance for me, but the raw throughput has never been higher than around 1 Gbit across multiple nodes on the same machine.

Come to think of it, I would be fine with 8x NVMe on a single x16 bus, with only two lanes to each drive. I don’t need high throughput; I want high IOPS.

I love the heatsinks. Are they aftermarket? And what card are you running? :slight_smile:

Not with Storj, but I did learn this empirically. My workstation has one of these MoBos; I hoped for a software RAID1 setup with two NVMe drives. It turned out I was getting maybe 1 GB/s total after RAID, while I expected ~3 GB/s from PCIe Gen3 drives.
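For context on why ~1 GB/s is suspicious: a back-of-envelope PCIe bandwidth calculation (raw transfer rate times lanes times 128b/130b encoding efficiency, ignoring protocol overhead) puts a Gen3 x4 link at nearly 4 GB/s, while an x2 link lands right around 2 GB/s. Seeing only 1 GB/s suggests the slots were sharing or dropping lanes:

```python
# Approximate usable PCIe bandwidth per direction:
# gigatransfers/s * lanes * encoding efficiency / 8 bits per byte.
def pcie_gbps(gt_per_s: float, lanes: int, enc: float) -> float:
    """Rough bandwidth in GB/s; ignores TLP/protocol overhead."""
    return gt_per_s * lanes * enc / 8

gen3_x4 = pcie_gbps(8.0, 4, 128 / 130)   # Gen3 uses 128b/130b encoding
gen3_x2 = pcie_gbps(8.0, 2, 128 / 130)
print(f"Gen3 x4: {gen3_x4:.2f} GB/s, Gen3 x2: {gen3_x2:.2f} GB/s")
```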


The heatsinks are great, but I don’t think they came with any bands to secure them to the drives.

As for the card: don’t even click this link until you verify your CPU/mobo combo offers to bifurcate your PCIe slot at x4x4x4x4, x2x2x2x2, or at least x1x1x1x1. Otherwise you’ll only be able to use 2 of the 4 NVMe slots on it. Hence why it’s only $30.

Surely the current cost per GB of an NVMe SSD means it’ll never be profitable to run Storj on them?
(I do have a couple of nodes running on NVMe drives but that’s mostly because I like geeking out but I know I’ll lose money with that setup)

An 8GB SATA SSD is close to $500, which is absurd, but in the last year they were almost giving away 2TB and 1TB off-brand NVMe drives ($20/drive sometimes), so my Frankenstein setup only set me back about $250 and gives me 10 TB of SSD storage. I don’t plan to make that back anytime soon, but it’s fun to get it all working, and it uses up all the impulse-buy NVMe drives I have lying around lol


If you had a large number of nodes and were continuing to expand (like @Th3Van ), it seems like SSDs could be a good place to continuously grow new pre-audited/withholding-period-complete nodes. Like a 4TB SSD would fit about 7 nodes (each capped at 550GB) with no IO problems. As you brought new normal HDDs online you could grab one of those ready-to-go identities for them… and let the SSD start growing a fresh one.

I’m not saying you buy SSD space for this: just that if you had SSD space it seems like their effectively-infinite IO would work well with bulk-tiny-nodes.
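The node count above checks out if you take decimal TB and the 550GB cap per node used earlier (both figures are from this thread, not official limits):

```python
# Rough check of how many capped "incubator" nodes fit on one SSD,
# assuming decimal TB and a 550 GB cap per node (figures from the thread).
def nodes_per_ssd(ssd_tb: float, cap_gb: float = 550) -> int:
    return int(ssd_tb * 1000 // cap_gb)

print(nodes_per_ssd(4))   # a 4 TB drive fits 7 capped nodes (7 * 550 = 3850 GB)
print(nodes_per_ssd(2))   # a 2 TB drive fits 3
```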

Nobody has a setup like @Th3Van :sweat_smile:


Cheers, thanks for that. My current motherboard does not support bifurcation, and I’m not going to get one of the more expensive offerings right now. If I ever do an “old enterprise gear loaded with NVMe” build, it’s most def going to be on my list :slight_smile:

Yeah, you’re right. As @AtomicInternet puts it, the higher-capacity drives are almost never worth it, but for small nodes it could be. I’ve considered “incubating” nodes to ~1TB in size on NVMe and then moving them to HDD afterwards. Since I’m running on NVMe-cached HDDs and have no performance problems right now, I won’t. Maybe for the next build :slight_smile:

You know that your suggestion will break Supplier Terms & Conditions, yes?

:hushed: is it possible that you mean 8TB?


There have been many nuanced discussions in this forum about the differences between functional configs and officially-supported ones. Don’t worry, I won’t beat that dead horse here :wink:

I so so so wish I had a reason to own more SSDs. The smaller drives still have the best $/TB… but even 8TBs are coming down. And some 16TBs can look reasonable?

Ah… I wish it was spring 2023 again… flash was so cheap