Advice on initial setup

Hi all, I have the following setup and am thinking of adding Storj to make better use of my kit. I wondered what advice you have?

100/100 fibre Ethernet line (proper business grade, not home fibre)
Battery backup for around 1 hour
Synology DS918+ with 30TB RAID5 of which 26TB is free
13th Gen Intel i7 Proxmox host with 1TB NVMe and 64GB RAM, hardly used

I wanted to use maybe 16TB of the storage on the Synology, but I’m not a fan of screwing around with the Synology’s OS or Docker, as I reckon it could get overwhelmed. My thought was to carve out 16TB as an iSCSI target and attach an Ubuntu guest on Proxmox to it. I see people saying that this could introduce latency, but versus the much slower DS918+ hardware I’d imagine the Proxmox host would more than make up for it?
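For reference, attaching an iSCSI LUN exported by a Synology to an Ubuntu guest is only a few commands with open-iscsi. This is a sketch: the target IP and IQN below are placeholders, so substitute your own values from the Synology’s SAN Manager.

```shell
# Install the iSCSI initiator on the Ubuntu guest
sudo apt install open-iscsi

# Discover targets exported by the Synology (placeholder IP)
sudo iscsiadm -m discovery -t sendtargets -p 192.168.1.10

# Log in to the discovered target (placeholder IQN)
sudo iscsiadm -m node -T iqn.2000-01.com.synology:DS918.Target-1 \
    -p 192.168.1.10 --login

# Make the session reconnect automatically at boot
sudo iscsiadm -m node -T iqn.2000-01.com.synology:DS918.Target-1 \
    -p 192.168.1.10 --op update -n node.startup -v automatic
```

After login, the LUN appears as a regular block device (e.g. `/dev/sdb`) that you can partition, format, and mount like a local disk.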

Also, if things go well, I have an 8-bay expansion unit for the Synology that I could spin up. I assume I would have to run a second node for this?

Any and all comments welcome :slight_smile:

Thanks all


Welcome to the forum @Stubblemonster

I’d start by just installing Container Manager on your Synology host and running a single Docker container with 1TB.

When that fills up in a couple of months, you can add additional containers, or just extend the one you’re currently rocking.
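For context, a single small node on the Synology boils down to one `docker run`. This is a sketch based on the standard command from the Storj docs; the wallet, email, DDNS address, and bind-mount paths below are placeholders you’d replace with your own.

```shell
docker run -d --restart unless-stopped --stop-timeout 300 \
    -p 28967:28967/tcp -p 28967:28967/udp \
    -p 127.0.0.1:14002:14002 \
    -e WALLET="0xYourWalletAddress" \
    -e EMAIL="you@example.com" \
    -e ADDRESS="your.ddns.hostname:28967" \
    -e STORAGE="1TB" \
    --mount type=bind,source=/volume1/docker/storj/identity,destination=/app/identity \
    --mount type=bind,source=/volume1/docker/storj/data,destination=/app/config \
    --name storagenode storjlabs/storagenode:latest
```

Growing the node later is then just a matter of stopping the container and re-running it with a larger `STORAGE` value, assuming the underlying volume has the space.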

My biggest mistake with my setup was making it too complicated too soon. I’ve spent countless hours getting everything perfect - and I think it is now - but it could have been much easier.


Hey @Ottetal thanks for the help. I’m an experienced network admin so apart from storage metrics which may take me a while to learn, setting everything up and running it wouldn’t be so much of an issue.

The NAS performance does concern me though; the NAS is already a target for several backups overnight, so I didn’t want to add to its load by sticking a container on it.

If I were to start with your suggestion is it possible to migrate later or do you have to retire the node and start again?

What’s the reasoning behind starting with 1TB?

You can migrate it (almost online) at any time using this guide:

And it would take a lot of time to fill even 1TB, so you have plenty of time to decide later.

I am an experienced VMware admin as well, and do nothing else at my dayjob. My current setup is as follows:

Storage side:
Synology RS3617xs acting as SAN

  • 7x 20TB disks in RAID5, backed by two RAID1 2TB SSDs as read/write cache
  • 2x1TB SSDs RAID1 as database only disks
  • 10Gbit fiber uplink to VMhosts
  • 1Gbit uplink to network/management
  • 5x iSCSI shares, presented to VMware as datastores


Compute side:

  • 2x identical hosts, each consisting of:
  • Intel 12400, 64GB of RAM,
  • 2x onboard 1TB NVMe disks, RAID 1 for non-storj VM data (C:\ drive, root partition of Linux hosts)
  • 10Gbit fiber to SAN
  • 2.5Gbit VMnetwork/management


Windows VMs:

  • Windows 10
  • Custom keep-alive scripts, to periodically check connectivity/version/status
  • Custom disk management script, auto expanding node disks as they grow.
  • Everything unnecessary is pruned from the OS; current RAM usage for a 4TB Storj VM is ~2GB
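A rough sketch of what such a keep-alive check can look like: the node exposes a local dashboard API on port 14002 at `/api/sno` that returns JSON. The field names used here (`diskSpace.used`/`available`, `upToDate`, `version`) are assumptions about the response shape - verify them against your own node’s output.

```python
import json

# Local dashboard API endpoint (default storagenode dashboard port;
# verify against your own node)
DASHBOARD_URL = "http://127.0.0.1:14002/api/sno"

def node_status(raw: str) -> dict:
    """Reduce a dashboard JSON payload to the fields a keep-alive cares about.

    Field names (diskSpace.used/available, upToDate, version) are assumptions
    about the current API response shape - check your node's actual output.
    """
    sno = json.loads(raw)
    disk = sno.get("diskSpace", {})
    used = disk.get("used", 0)
    available = disk.get("available", 0)
    return {
        "version": sno.get("version"),
        "up_to_date": sno.get("upToDate", False),
        "used_pct": 100 * used / available if available else 0.0,
    }

def is_healthy(status: dict, max_used_pct: float = 95.0) -> bool:
    """'Healthy' here means: on a current version and not nearly full."""
    return status["up_to_date"] and status["used_pct"] < max_used_pct

# A real keep-alive would fetch DASHBOARD_URL on a schedule (e.g. with
# urllib.request.urlopen) and alert via mail/webhook when is_healthy()
# flips to False.
```

The point is only to show the shape of the check; the actual scripts in the post also handle versions and disk expansion, which this sketch leaves out.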


Linux VMs:

  • Ubuntu
  • Docker
  • Shit just works

… which is all fine and dandy at the size I’m at now, with around ~50TB stored on ~10 nodes, but it’s massive overkill for a first node.

A small node (<3TB) will put almost no extra load on your NAS, and is easy and local to manage through Docker. But most importantly, you don’t have to worry about the database health of your nodes running on an external VM host when your Synology suddenly decides to reboot during an OS update, or the connectivity to the disk dies because your connecting switch is updating/congested/whatever.

To answer your question:

What’s the reasoning behind starting with 1TB?

It will take some time for the 1TB to fill up. In the meantime, you’re not using the space on your Synology for nothing, and you can see if it all works out well. The guide that @Alexey posted is great for migrating the node, but like I said, it’s also possible both to expand an existing node and to create an additional one.

Sounds like overkill for running a storage node.

Especially this part.


I don’t see plain overkill, I see:

Forward planning - he may be planning to grow this much bigger
Putting an existing platform to good use - he could be running this stuff already and have excess capacity, like me
Overkill - it could be a “fun” project, an excuse to play and learn while mitigating some of the costs by earning something back

As for Windows 10, I’d expect he’s set up to monitor that OS and platform better than others and is more comfortable with it. I’d rather throw more money at an OS I’m set up to keep a close eye on than save money and potentially have more downtime.

And it would take a lot of time to fill even 1TB, so you have plenty of time to decide later.

Thanks, this makes sense. I wasn’t sure what to expect from a capacity point of view TBH. If it’s a really slow burn then at least I get some experience on the way.

This might also explain why people on the forums seem to have several small nodes all pointing at the same storage, is this a way to grow your storage twice as fast or is the Storj logic clever enough not to serve data this way?

Filling one TB may take 2-4 months. It’s not predictable.

It violates the ToS if there isn’t at least 1 CPU core and one HDD larger than 500GB per node.
(Buying hardware just for Storj won’t always turn a profit.)
So they risk being DQed (disqualified).
All nodes on the same /24 subnet are treated as one, and should be.
Circumventing this, or “code manipulation”, also risks being DQed.

Reasonable reasons for running a second 500GB node are:
moving it elsewhere to a different IP subnet later, and just letting it vet before that.
a “backup node” - fine as long as it has 500GB of space, but it rarely makes sense.
reducing load on the first node, or starting another because the first is full (I did that, because I’m stupid, and will later move the second node to my powerful PC).
RAID with more drives (already built; the wrong filesystem or cluster size may throttle I/O speed), spreading risk between nodes - fine, as long as it’s not the same drive.

Mostly, such multi-node servers should be machines already built for other purposes: big RAM/SSD-cached NAS/ZFS pools, etc.

More nodes will not give you more ingress; instead it will be distributed between them, so each node gets only part of the common ingress. So 2 nodes of 1TB, or one node of 2TB, will receive the same total amount of ingress.
Running multiple nodes makes sense only if they run on their own drives, not sharing the same one (which is a violation of the ToS). This way you increase durability - if one node dies, you still have a second one, losing only half of the common data, not all of it as in the case of a single node.
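A toy model of that point: whether the subnet’s ingress is split across N nodes or kept on one, the total stored behind the IP is the same. The numbers below are made up purely for illustration.

```python
def ingress_per_node(subnet_ingress_gb: float, node_count: int) -> float:
    """Ingress for a /24 subnet is split roughly evenly among its nodes."""
    return subnet_ingress_gb / node_count

# Hypothetical 300 GB of monthly ingress for the whole /24 subnet:
one_node = ingress_per_node(300, 1)        # a single 2TB node gets all 300 GB
two_nodes = 2 * ingress_per_node(300, 2)   # two 1TB nodes get 150 GB each

assert one_node == two_nodes  # total stored is identical either way
```

So extra nodes don’t grow your storage faster; they only change how the same ingress is divided.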

Of course, if you have RAID already for something else, it’s OK to run a node on this RAID, otherwise it’s better to run a node per drive.
See also RAID vs No RAID choice and RAID0 vs. RAID1 vs. NORAID for 2x 1TB Drives?.

Thanks Alexey,

I have an SSD-accelerated read/write cache on the array; it should be plenty fast enough for what I need. I actually have 3 separate arrays on different devices and two different IP subnets, but since it’s the same line I expect that’s against the ToS.

I could spread the load across those arrays, but I just see that introducing more potential failure points. My array can’t be down for long given its existing usage, so I’ll make sure it stays up.

You may run several nodes even if they are in the same /24 subnet of public IPs, but they will be treated as one big node for uploads, and as separate ones for downloads, repair and audit traffic, and uptime checks: we want to be as decentralized as possible.
Each new node must be vetted; while vetting, the node can receive only 5% of the customers’ traffic until it gets vetted. To be vetted on a satellite, the node should pass 100 audits from it. For one node in a /24 subnet of public IPs this should take at least a month.
Several simultaneously vetting nodes may have a longer vetting period, because they would share the ingress, so each would have less data and fewer audits.
Thus it’s recommended to start a second node only when the previous one is almost full, or at least vetted.
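A back-of-the-envelope model of why parallel vetting is slower: audits arrive roughly in proportion to stored data, and unvetted nodes in one /24 subnet share the vetting ingress, so N nodes vetting at once each vet roughly N times slower. The audit rate below is illustrative, chosen to match the “at least a month for one node” figure.

```python
AUDITS_REQUIRED = 100  # audits needed from a satellite before a node is vetted

def vetting_months(node_count: int, audits_per_month_single: float = 100.0) -> float:
    """Rough model: vetting ingress (and thus audits) is split evenly,
    so each of N simultaneously vetting nodes accumulates audits ~N times
    slower. audits_per_month_single is an illustrative rate for a lone node."""
    per_node_rate = audits_per_month_single / node_count
    return AUDITS_REQUIRED / per_node_rate

assert vetting_months(1) == 1.0  # one node: ~a month, matching the post
assert vetting_months(2) == 2.0  # two parallel vetting nodes: ~two months each
```

Which is exactly why starting the second node only after the first is vetted (or nearly full) costs nothing.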

Thanks Alexey, I’ll get set up today and get started. Thanks for the tips
