Storj node - configuration

I would not do this for multiple reasons.

  • 2TB is too small to be effective. It’s better to turn it off and recycle.
  • The entire filesystem living in that memory-constrained VM means there is no way there will be enough room for metadata to be cached, so when usage picks up your drive will choke. The ~200 IOPS it can provide is not much to play with (see the quick check after this list).
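
If you want to verify this rather than take my word for it, watch the disk while the node is busy. A minimal check, assuming the node’s disk shows up as sdb:

```sh
# Sustained r/s + w/s near the drive's ~200 IOPS ceiling, %util pinned
# near 100, and a growing await column all mean metadata is not being
# cached and the disk is choking.
iostat -x sdb 5
```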

A better approach is to allocate space for the node on your main storage array and let it benefit from access to a massive amount of RAM for metadata, plus any other performance improvements you may have made for your main array, such as SSD caching, tiering, or pooling. (You would not spend money improving node performance alone, but since you have likely already done that for your other requirements, letting the storagenode benefit from it is free.)

As an example, one may have a home NAS with 60TB of extra space that is projected to fill up over the next 10 years, but right now is a prime candidate to be shared with Storj. It can be a ZFS array of multiple vdevs plus a special device, so the home workloads the user actually runs the server for stay fast. Coincidentally, the same performance improvements benefit the storagenode, and therefore the 3 nodes one may run on that server generate completely negligible load: metadata is entirely in RAM and on the special device, as are the databases. The disks see 10-20 IOPS even when the filewalker runs; there is massive headroom.
No new hardware is added just for Storj, let alone inferior USB contraptions. The entire server could be bought as parts from a datacenter recycler. True story, personal anecdote.
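
For the curious, the layout above looks roughly like this; pool name, disk names, and the dataset are assumptions for illustration, not my exact setup:

```sh
# Bulk data on spinning disks, metadata on a mirrored SSD special vdev.
zpool create tank \
    raidz2 sda sdb sdc sdd sde sdf \
    special mirror nvme0n1 nvme1n1

# Route small blocks (which covers the node's SQLite databases and most
# tiny files) to the special vdev as well:
zfs create -o special_small_blocks=16K tank/storj
```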

So perhaps the right thing to do here is to get rid of the zoo of mishmashed parts and devices, offload it all on eBay, and buy one old enterprise server with a bunch of disk bays, over 100GB of RAM, and dozens of CPU cores; configure one array and run everything there. Fewer parts means easier maintenance, fewer kWh to pay for, fewer things to break, and extra money in your pocket after selling all that ancient crap.

I don’t see the point in selling the servers I have.

All three have the same configuration:
CPU: AMD Ryzen 9 5900X
RAM: 128GB DDR4 ECC
MOBO: Gigabyte MC12-LE0
NIC: 2x 1Gbps onboard, 2x SFP+
SSD: 2x 256GB SSD (RAID1)
The 4-bay servers have 2x 2TB SSD (RAID1) for VMs
The 8-bay server has 6x 512GB SSD (RAID5) for VMs

My NAS is a Synology RS1221RP+ with 8GB RAM and a 2x SFP+ NIC
HDD: 6x Seagate EXOS 4TB, 2x Seagate IronWolf 3TB

The VM with the Storj node has a 2TB disk added via iSCSI.
True, 2TB is a bit too small, but I ran this node as part of my tests.

Ok, great. The biggest issue with this setup is the lack of sufficient RAM to cache metadata. I don’t remember whether Synology can cache block access. Otherwise, keeping the node data on the main array (over NFS) would be better. Just don’t keep the databases on the NFS share. I would also max out the RAM on the DiskStation; 8GB is not enough.
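
Moving the databases off the share is a one-line change in the node’s config.yaml (the path below is just an example; create the directory and restart the node afterwards):

```yaml
# Blobs stay on the NFS-backed storage path; the SQLite databases move
# to fast local storage.
storage2.database-dir: /mnt/local-ssd/storj-db
```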

You can of course give a massive amount of RAM to the VM instead and continue using iSCSI, but then you are wasting resources on the node, as nobody else will be able to take advantage of that memory.

Another approach is to use OS-level virtualization; there is no need to abstract hardware for Storj, nor for the vast majority of other services people tend to run.
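
The container route, for example, is a single command. This is roughly the invocation from Storj’s own documentation; the wallet, email, address, and paths are placeholders to substitute:

```sh
docker run -d --restart unless-stopped --stop-timeout 300 \
    -p 28967:28967/tcp -p 28967:28967/udp -p 14002:14002 \
    -e WALLET="0x..." \
    -e EMAIL="you@example.com" \
    -e ADDRESS="your.ddns.hostname:28967" \
    -e STORAGE="2TB" \
    --mount type=bind,source=/srv/storj/identity,destination=/app/identity \
    --mount type=bind,source=/srv/storj/data,destination=/app/config \
    --name storagenode storjlabs/storagenode:latest
```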

These are not servers; they are workstations/desktops repurposed as servers, and as a result they require tons of workarounds, including for the lack of PCIe slots. Imagine that: an AMD CPU limited by the motherboard! The benefit of AMD processors is their massive count of PCIe lanes, so you get the worst of both worlds: horrible thermals and crippled I/O (and Synology on top to choke everything still breathing). I would scrap everything and start over, but I realize it’s none of my business, and way beyond the scope of this forum, so I’ll show myself out.

Don’t sell anything or buy anything 🙂 - use what you own. Set up just one node… or at most one node per IP. Then wait. Check on it if you get an email from Storj saying it’s offline. Wait some more. Maybe look at the UI near the end of each month to see if you’re close to a payout. But in general, ignore it and live your life.

Eventually… a HDD will fill. It will take longer than you think. Then you can make a more informed decision about expanding (or not). Good luck!


This only works out if time is worthless. In my mind, time is the most expensive resource, so if I can throw money at a problem to save time, I’d do that in a heartbeat, not even a question. In this context, rather than waste time figuring out how to work around the lack of a PCIe slot, I’d just get something that has all the ports I need and save a massive amount of time.

When optimizing for time + hardware, you’ll find it’s often much cheaper to get rid of an ill-fitting setup and get the right one for the job. And by that I mean the right one for the home server, not for Storj. Then, coincidentally, it’s also going to be great for Storj.

I do agree with the rest.

I started the node on this N3060 today.

The 60GB SSD connected to the first SATA port is only for the system, and I connected another 8TB drive to the second SATA port.

The node has been working for a few minutes.

I’ll slowly be collecting parts to put together a server.

What I’ll be looking for:

  • 3U or 4U case for 16 drives or more
  • Supermicro motherboard, probably something from the X10 or X11 series
  • CPU: some Intel Xeon E5, an L version with low power consumption
  • minimum 64GB DDR4 ECC RAM
  • disk controller in HBA mode; I’ll probably also need an expander or a second controller
  • drives no smaller than 10TB

We’ll see what comes of it.

How many public IPs do you have, again?
Just to get a perspective of the past…

Old node - gross income in 46 months:
AP1 - $48.60 (~7.5%)
US1 - $379 (~58.5%)
EU1 - $221 (~34%)

Young node - gross income in 8 months:
AP1 - $0.40 (~1%)
US1 - $34.30 (~79.5%)
EU1 - $8.50 (~19.5%)

Each one is alone on a different /24 subnet.
The old one currently stores 16.7TB for the production satellites.

Good point. Please do not run physically collocated nodes on different subnets.


I currently have 28 IP addresses from two different subnets.
Plus a dynamic IP address for my main internet connection.

What the group is getting at is: Storj data fills slowly, and if the nodes are at one location (one /24 IP address block), then the data gets split between all the nodes.

There’s a stealth “maximum size” that can be supported at one given location, because after a while data gets deleted in equal amounts to the ingress.
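
Put as a back-of-the-envelope balance: if a /24 receives ingress of I TB/month and the network deletes a fraction d of stored data each month, the stored amount grows only until deletes catch up with ingress:

```latex
I = d \cdot S^{*} \quad\Longrightarrow\quad S^{*} = \frac{I}{d}
```

With illustrative (assumed, not measured) numbers of 1TB/month ingress and 5%/month deletes, that levels off around 20TB, shared across every node behind that /24.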

In other words… you don’t need 16 drives.

I have 5 drives/nodes at home and the growth in data is currently pretty anemic.
