What's the better hardware configuration, HDD/NVMe?

I plan to reduce the load on my first node by starting a second one on a different PC in the same network.
Hardware will be a 20TB Toshiba 24/7 drive with persistent write cache, plus a 500GB NVMe (max. overprovisioned).
A: Should I use StoreMI to utilize the NVMe as cache?
B: Just install on the NVMe and move the databases in some weeks?
C: A+B on different partitions?
(I know about the pay cuts and test data/satellite deletion.)

What OS and how much RAM is on the system?

You definitely want to use an SSD to cache random IO; but StoreMI (it's just Enmotus FuzeDrive, i.e. tiered storage) probably won't be of much help for accelerating metadata access. Caching solutions (e.g. Primo Cache) will be more effective here.

If you can have ZFS, then using the SSD as a special device to store metadata and small files, while the HDD deals with large files, should provide the best performance and scalability as the node grows, without the RAM requirement increasing as the number of stored files grows.
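A minimal sketch of that layout, assuming a pool named `tank` and placeholder device names (in practice the special vdev should be mirrored, since losing it loses the whole pool):

```shell
# HDD holds the bulk data; an NVMe partition becomes the special vdev
# that stores the pool's metadata (device names are placeholders).
zpool create tank /dev/sdb special /dev/nvme0n1p2

# Also send small files (blocks <= 64K) to the special vdev, so the
# HDD only ever sees large pieces.
zfs set special_small_blocks=64K tank

# Confirm the vdev layout.
zpool status tank
```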


Windows 11 Pro and 32GB (3200MHz) 2x16 dual channel, tested for errors. (15GB free)
Rest of the system:
GA X570 UD, R9 3900 (watercooled), 2TB SSD (839GB free) system drive, RX 5500 XT GPU.
I don't want to use Linux or Docker or Primo Cache.
So I think B is it.

But you don't have a free SSD; it is the OS SSD. And there is one BIG rule: never cache data to the OS SSD, just never. It will slow down the whole system. So you only put the DBs on the SSD, that's all.


Not sure how well NTFS manages caches, but with 32GB I feel you shall be fine.

However, do get a UPS. Otherwise you will lose data.

Re "B": It does not matter where you install. You want to minimize IO pressure on the disks. So, move the databases to your system disk, and use the naked HDD for data. You don't need another SSD. If you find that after the node grows the HDD cannot keep up with IO, slap Primo Cache with a second SSD on top. (StoreMI might help, but it's not designed for this use case. Primo Cache is.) But this is not going to happen until after about a year.
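For a Windows GUI node, moving the databases is a config.yaml change rather than a reinstall. A sketch with placeholder paths, assuming the `storage2.database-dir` option current storagenode builds expose for relocated databases:

```yaml
# config.yaml (paths are placeholders for illustration)
# piece data stays on the bare HDD:
storage.path: D:\storagenode-data
# databases go to the SSD to take the random IO off the HDD:
storage2.database-dir: C:\storagenode-databases
```

The node has to be stopped, the existing .db files copied to the new directory, and then restarted.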


Assuming this is not an SMR drive, you really don't need to cache the data. Just move the DBs to the SSD if you see IO issues. There's no need to overengineer a storagenode, especially if you'll be splitting the write load across 2 nodes.


Will losing the databases kill a lot of the node, if noticed? Or do they get rebuilt?

If not, it's probably safer to only use the HDD with persistent cache.
On the other side, I don't want traffic on my system SSD (esp. LOGs).

Power is really stable here in Germany; I'd rather go with just the HDD and no UPS (another 120€ wasted). Also no place to put some batteries.

The NVMe is at 32€ and would be a "nice to have, probably making the HDD less noisy in my little working room". Also easier to scan for errors "if they happen".

The first one will be full "soon".

They will be recreated. You only lose historic stats on the dashboard. But all of the data is non-critical.
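On that note: if a database is merely suspected of being damaged rather than deleted, each storagenode .db file is plain SQLite, so it can be checked before throwing it away. A small sketch (the file name is a placeholder; a corrupted file reports a list of problems instead of "ok"):

```python
import sqlite3

def check_db(path: str) -> bool:
    """Run SQLite's built-in integrity check on a storagenode database."""
    con = sqlite3.connect(path)
    try:
        # PRAGMA integrity_check returns the single row ("ok",)
        # when the file is healthy.
        result = con.execute("PRAGMA integrity_check;").fetchone()[0]
        return result == "ok"
    finally:
        con.close()

print(check_db("bandwidth.db"))  # True for an intact database
```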

Yeah, you should avoid that. There is really no need.

That's what I thought (neighbor to the west, in the Netherlands)… until I had people working in my house for renovations and they caused a short. I've been dealing with file system issues on my Synology ever since, and having tons of trouble fixing it, because I have a large 100TB volume and run into memory issues due to the high amount of inodes when running fsck on that volume. Needless to say, I have a UPS now. Should have bought one a long time ago.

Though, fixing file system issues is a lot simpler if you don't have such large volumes to deal with (nor Synology's own crappy implementation, which doesn't work at all for me). I'm on the fence on whether you should bother with it for smaller systems. But there are very cheap UPSs too, which may be worth considering.


The cheap ones won't do it in that case.
It's a powerful PC; I also use it for gaming.
A 100TB Synology is an entirely different level. So it's good to have a UPS there.

Are you sure that will even be profitable then leaving it on 24/7? Storj really doesn’t need all that powerful hardware. Usually an energy efficient NAS or even an RPi is enough to run your node. You’d earn the cost of an RPi back in no time with energy savings.

Yes, because it's running 24/7 anyway.
Minecraft server + other stuff.
I considered the 10W power draw for the HDD, and a one-time cost of 345€. CPU load won't make a difference.
Also, 800W solar panels are planned this year.
I want the second node vetted when the first is full.


You misunderstood. The 2TB SSD is the system drive; no touching that. A 500GB NVMe will be bought for the DBs.

I count on the persistent write cache of the disk in case of a power outage, and take my chances with the databases.

I personally insist on not using the very cheap option. :tophat:

Thanks to all for working it out :smiling_face:

I'm hopping onto this topic, maybe a little out of line.

I would also like to scold my hard drive and advise everyone against it. It is an 8TB IronWolf NAS disk, on which 3 nodes run and together fill the 8TB. The disk can't handle the workload and the nodes go offline every few days, which has eaten up my online score.
I also run a 16TB Seagate Exos that is also full and has no workload issues.
According to the spec sheet, the Exos can also do almost twice as many read/write operations as the IronWolf.

This is against the Node Operator Terms & Conditions: you must not use one disk for several nodes. And that's not even accounting for the fact that they will interfere with each other, especially on restart, and will lose more races than separate nodes with their own disks. So, not a hardware problem.
You should use no more than one node per disk.


This is classic penny wise pound foolish.

  • you built a system from allegedly expensive parts, but refuse to spend $50 on conditioning power delivery
  • this forces you to make all writes synchronous, decimating the performance of your capable system to that of a cheap turd
  • no OS is designed to handle abrupt power failure, so you would need further performance-sacrificing workarounds, and yet you can't eliminate data loss completely

Go buy the cheapest UPS you can find. Literally: sort by price, pick the first. The cheapest, crappiest UPS would be a massive improvement over no UPS, regardless of how stable you think your power is.

There is no reason to spend mental power coming up with justifications, weighing pros and cons. Just do it.


First: the system was not bought in one go. It's from Jan 2021.
I did not even know about crypto then.

The OS 2TB SSD I had before that. The old PC with the old OS SSD did not have NVMe. [From 2012.]
It's in my basement right now, completely functional.
(I keep it for my kids.)

Second: the bottleneck is the 2TB SSD I had first. Still no difference to my wife's PC with NVMe is noticeable.
The GPU suits the monitor I've been using since 2015.

Or is what you are thinking of like a cheap turd?

The HDD, not bought yet, has persistent write cache. So no pieces are lost if the power goes out.
As long as they are in the cache of the drive, they will be written after power is restored.

What workaround are you thinking of? I see none.
Since the databases will be on an NVMe SSD with its own cache, that should be enough.

I saw a test once, and it's not likely that in case of a power-off crucial data gets corrupted.

Last and final: my gaming room is too small to fit a box of whatever size near the PC or in the room.

With great respect for you and your opinion.

Sent from my Galaxy

I have the exact same disks, also 18TB ones. I run the filesystems with noatime, nodiratime, even nobarrier, because of the persistent cache they have. I would not worry even without a UPS. You will be fine; they see like 20 R+W IOPS.
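For anyone copying this setup, those options go into /etc/fstab; an illustrative entry with placeholder device and mount point (on ext4 the nobarrier spelling is barrier=0, and newer XFS kernels removed nobarrier entirely; either way it trades crash safety for speed and leans on the drive's persistent cache):

```
# /etc/fstab - illustrative entry for an ext4 storagenode data disk
/dev/disk/by-id/ata-EXAMPLE-part1  /mnt/storj  ext4  noatime,nodiratime,barrier=0  0  2
```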


I will order tomorrow. Chose the Samsung 970 EVO Plus 1TB NVMe SSD for the databases.

Thanks for all the valuable Feedback !

P.S.: I like to spend A LOT of mental power on my PC builds (I also do it for my wife and friends); mine usually take ca. 3 months of planning with a (given) budget and reading technical papers and tests.
I plan to buy a new PC/GPU at 8+ years.
Considering existing hardware/peripherals/noise/ventilation and the use case. I've been doing this for 23 years as a hobby; no serious complaints so far.
(In 2021 I ordered the wrong watercooling with a case for a friend; the budget was like 2000€ and an ITX case. It turned out a 2x140 radiator does not fit in a 140x280 space, so he used the boxed cooler, and instead of returning the watercooler (ca. 80€) he gave it to me for my work on the build.)
In some weeks the "node" will be assembled.
Identity created and port forwarding already done.