Server configuration

Hello,
I already have 18 nodes, and some of them go down from time to time. Basically, I suppose, this is because I use either weak hardware, or good hardware that is also used for other purposes, not just for the nodes.
To increase reliability, I am thinking of buying some server hardware and starting to move part of the nodes there. So I have a few questions:

  1. Do you think this strategy is good? What pluses or minuses do you see? I have disks of different sizes: several 1 TB, several 2 TB, a 6 TB and an 8 TB, 3x18 TB, and 3x20 TB. The maximum node size is about 7 TB at the moment. I see that the 1-2 TB nodes work very reliably, but those over 4 TB go down from time to time for unclear reasons, which is a pity.

  2. Which motherboards do you use (I am going to buy second-hand, several years old)? A desktop case would be better, because of less noise and less space required, but a server case is not excluded; it just has to fit some 12-24 HDDs.

  3. Should I use SATA port-multiplier adapters, or look for motherboards with many SATA ports?

  4. Right now all nodes run under Win10 + Docker, and I am quite comfortable with setting that up and keeping it working. I tried to migrate to Ubuntu, but failed; I am not an IT guy, and it was too complicated for me. Can I keep using the same Win10 OS, or should I switch to the Server version? Is there a big difference in the setup?


I would begin by debugging the reason behind the node stops. From that, you will know what the weakest link in your chain is. Maybe it is not HW related.


+1! SNOs have reliable RPi setups, so I doubt it’s raw horsepower that’s the problem. Also make sure they’re not just restarting and re-running the filewalker due to upgrades. I see 1.96.6 is spreading now: that’s like the fourth version in three months?

And for the OP: depending on how many HDDs you’ll have, you may be much better off with a SAS HBA with 8-24 ports… instead of, say, a dodgy SATA port multiplier from AliExpress. But if it’s only a few drives, then the motherboard’s onboard SATA ports are perfectly fine.


From my personal experience:
Storj has higher requirements on the hard disk (if the hard disk is unstable, the node will stop). Other hardware does not seem to matter much. I run it on a dual-CPU server, and occasionally a node stops on its own or the dashboard stops working.

Currently I am running 13 nodes, and I check their health status every two or three days.


Don’t buy anything, unless you must.

The most likely reason is fragmentation. Check that, and defragment if necessary; it can be run while the node is running. Set the node to full temporarily, so the defrag doesn’t keep running for weeks.
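As a sketch, fragmentation can be checked and fixed from an elevated Command Prompt on Win10 (the drive letter D: is an assumption; use your node’s data drive):

```shell
:: Analyze only first -- read-only, safe while the node is running
defrag D: /A /U

:: If the analysis reports heavy fragmentation, run the actual pass
defrag D: /U /V
```

/A analyzes without changing anything, /U prints progress, and /V gives verbose statistics.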

Also check the filesystem: whether it’s NTFS with a 4k or 8k cluster size.
In the shell:
fsutil fsinfo ntfsinfo [your drive]
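For example, to pull just the cluster size out of that output (D: is an assumed drive letter):

```shell
fsutil fsinfo ntfsinfo D: | findstr /c:"Bytes Per Cluster"
:: 4096 = 4k clusters, 8192 = 8k clusters
```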

Check the logs, or redirect them to a file, if you haven’t already.

Then search for the “fatal” error.
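A sketch for a Docker node on Windows, assuming the container is named storagenode as in the default run command:

```shell
:: Dump the container log to a file (docker logs writes to stderr too)
docker logs storagenode > node.log 2>&1

:: Then search it, case-insensitively, for fatal errors
findstr /i "fatal unrecoverable" node.log
```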


That sounds like a nice setup! I see a few nodes that are about 8-9 TB full: how long did they take? Maybe two years?

Hi, just as a general idea: I have 4 nodes in one network that I started bit by bit. As they are all behind the same IP, they share traffic between them:

Node 1 (2022-12-24): 5TB
Node 2 (2023-03-05): 2.75 TB
Node 3 (2023-04-13): 2.21 TB
Node 4 (2023-04-16): 2.17 TB

So overall, in a bit more than a year, I got ~12 TB.


We do not recommend buying anything specifically for Storj; it’s better to use what you have now and what will be online anyway, with or without Storj. The ROI is not guaranteed, because usage depends on the customers, not on the hardware.

From the HW requirements perspective, the HW can be very weak (we have nodes even on an OpenWRT router), and plenty of SNOs run on a Raspberry Pi, even model 2.
Do not use a complicated setup like RAID of any kind (unless you already have it), or a filesystem that is not native to your OS (like exFAT on any OS, or NTFS under Linux).


Node running time is between 1 year and 1 year and 3 months.
It seems that the data fills up significantly faster since the traffic unit price was adjusted.

For me it is more like a hobby; I like the Storj idea, so I do not care too much about fast ROI.
Probably the HW requirements are not high for other OSes, except Win + Docker.

First I used a Gigabyte GA-J1800N-D2H; it more or less works under the GUI and under Docker with 2-3 small nodes. After the volume of data grows past 1 TB, it starts to make errors, which stop the node, lead to database errors, and bring the node down. I no longer use that HW.

The other nodes, 2-4 per machine, run on these motherboards, Win10 + Docker:

  1. Asus_prime_b250m + i3 6100 + 16Gb
  2. Asus_prime_b250m + i5 7400 + 16Gb
  3. Asus maximus viii hero + i7 6700k + 16Gb
  4. msi 460m-a pro + i5-10600KF + 32Gb
  5. Gigabyte GA-H61M-DS2V + i3 3100 + 16Gb

The most stable nodes run on computers 1-3; they are more or less powerful and not used much for other purposes. Just normal home desktops.

Computer 4 is my working machine, and I use it every day for heavy software like AutoCAD, Revit, Navisworks, etc. 2 nodes went down in the last 3 months, but both incidents happened at night, when there were no operations from my side.

Computer 5 is permanently used as a video surveillance server and as the automatic electricity metering server for our village. It was also fine, until one of its nodes collapsed a month ago; I didn’t find the reason, it is described here: PowerShell SQL command doesn't work.
I thought the i3 3100 was too weak and replaced it with an i7 3770. It works much faster now.

I would like to prevent those node crashes, so what kinds of checks and actions do you recommend?


I have several desktops, and as far as I see, the problems are mainly related to the workload on the computer. But what kind of debugging do you mean? What actions could that involve?


Do you mean this kind of device?

Will it work with ordinary motherboards, or only with a server version?


It looks like the drives are not fragmented:


The cluster size is this, I suppose:

Logs - as I understand, I should do this for each node? How often should I do it?


What kind of disks do you use? Do you have Seagate Exos? Does Storj have a list of “reliable” HDDs?


That would work fine, and it works with any motherboard. It looks like the combo you linked has the 2x4 SATA connectors you’ll need too! Anything based on the LSI 920x-8i cards should be cheap and plentiful.


No. Here are some external statistics

Just make sure they use CMR technology; SMR drives need performance adaptations, and they mostly fail faster.
Especially the 2.5" ones.

I go with Western Digital and Toshiba.


I read a lot of warnings on Reddit that these cards from China are very likely fake.

I’ve seen many warnings too… from people who’ve never bought them: or even know what official ones look like. And even more comments from those who’ve bought them from the dodgiest of sellers… and they work fine.

If it takes the same IT firmware, and uses the same drivers, and works the same, do you care if they were built after-hours and/or unlicensed and/or use reclaimed chips… for $20?

It’s possible to get a broken or flakey card. It’s just not probable. They’re cheap primarily because millions of them were made and they’re ewaste today…

@Alexey Could you comment?


Each node has a storagenode.log file. Open it with a suitable text editor,
or upload the file for us to see. (If it’s too big: stop the node, rename storagenode.log to your liking, and restart the node.)
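If you keep several nodes on one machine, a PowerShell sketch to scan them all at once (the C:\nodes folder layout is hypothetical; adjust the path to your setup):

```shell
# PowerShell: find every storagenode.log and show its last few FATAL lines
Get-ChildItem -Path C:\nodes -Recurse -Filter storagenode.log |
  ForEach-Object {
    Write-Host "== $($_.FullName)"
    Select-String -Path $_.FullName -Pattern 'FATAL' |
      Select-Object -Last 5
  }
```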