How many nodes do you have?

On this Sunday morning I am curious about our community, and Storj does not give us these statistics.
So I am curious to know how many nodes each of us now manages.
For me:
5 nodes - total size: 16TB (not full)

And you? And do you know if there is a limit on the number of nodes we can have? :slight_smile:

1 node, 24TB, and not even half full yet… getting close tho…

and yes it sure is fun trying to move it around when there are underlying hardware issues… :smiley:


Wow, is it a NAS in RAID 5?

old rack server with 12x 3.5" hotswap bays - 11 drives hooked up to one main zfs pool.

main storage pool of 3x 3-HDD raidz1 vdevs (sort of like RAID 5), with 48GB RAM, a dedicated 600GB L2ARC SSD, and a SLOG SSD shared with the OS.

so yeah it works as a NAS, but the storagenode runs directly on the machine… there are issues with the databases if they are run over a network share… so…

but it would be nice to be able to run it over the network… the storagenode is just pretty sensitive about that.
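
For anyone curious, a pool layout like the one described above could be provisioned roughly like this — just a sketch; the pool name `tank` and all device names are placeholders, not the actual setup:

```shell
# Sketch: 3 raidz1 vdevs of 3 HDDs each, striped together in one pool,
# plus an L2ARC cache SSD and a SLOG on a spare partition of the OS SSD.
# Pool name and device paths are placeholders.
zpool create tank \
  raidz1 /dev/sda /dev/sdb /dev/sdc \
  raidz1 /dev/sdd /dev/sde /dev/sdf \
  raidz1 /dev/sdg /dev/sdh /dev/sdi

zpool add tank cache /dev/nvme0n1     # dedicated 600GB L2ARC SSD
zpool add tank log   /dev/nvme1n1p2   # SLOG partition shared with the OS disk

zpool status tank                     # verify the vdev layout
```

With this layout each raidz1 vdev survives one disk failure, and writes are striped across all three vdevs, which is where the ~3x IOPS mentioned later in the thread comes from.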

1 Like

There are a few statistics here


I know, but nothing about node sizes.

Unfortunately yes. Statistics like Sia's would be nice.
I have 3 nodes with 8TB each.

1 Like

9 nodes, 24TB total, 3 different ISPs

1 Like

3 nodes, 28TB total, just over 13TB used.

Started off just using spare space on a Synology DS3617xs. But I’ve since expanded that array with Storj earnings.

The other 2 nodes are a Drobo 2nd gen that I use for extra backups but which had 2TB of spare space, and a 2TB HDD in a USB enclosure that I had lying around doing nothing. They’re both connected to the Synology.

I’m currently expanding the SHR2 array on my Synology after which I will upgrade that node to 24TB (it’s 18TB now).

So 24+2+2=28TB
Should be enough for a while to come. Bays on the DS3617xs are now all filled, but with future expansion I can replace smaller HDDs which I can then use in the Drobo, increasing the size of both nodes. I might also add more USB enclosures if necessary.

The Synology array is accelerated by a RAID 1 of SSDs, and the databases for the Drobo and USB nodes have been moved to that array. Mostly because the Drobo is notoriously slow, but it seems to be holding up with this setup.
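
Moving the databases off the slow volume can be done with the node's `storage2.database-dir` setting; a rough sketch for a docker-based node follows — all paths here are examples, not the actual setup:

```shell
# Sketch: relocate the storagenode databases to a faster SSD volume.
# Assumes a docker-based node; every path below is a placeholder.
docker stop -t 300 storagenode              # give the node time to shut down cleanly

mkdir -p /volume1/ssd/storj-dbs
mv /volume1/storj/storage/*.db /volume1/ssd/storj-dbs/

# In config.yaml (inside the config mount), point the node at the new dir:
#   storage2.database-dir: /app/dbs
# then recreate the container with an extra bind mount, e.g.:
#   --mount type=bind,source=/volume1/ssd/storj-dbs,destination=/app/dbs
```

This keeps the bulk piece data on the big slow array while the latency-sensitive SQLite databases live on the SSDs.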

1 Like

Thanks everybody for sharing with us.

2 nodes, 1TB, and 30GB used :rofl:

1 Like

slow MiB/s doesn’t mean low IOPS… :smiley:
the IOPS demand of the storagenode also seems to have gotten a lot better…

ofc i have also spent a lot of time trying to mitigate the problem, and with my new setup it doesn’t stand a chance… tho i now have not only 3 times the raw disk IOPS but also 3 times the failure rate of a single raidz1… ofc i can resilver… well, fast xD

it’s all fun and games until someone loses an array lol

1 Like

I have two full nodes of 8TB each running since June 2019, and two new nodes of 8TB each launched 5 days ago.
But honestly I don’t find this project that good, and I don’t think it will be viable even later on. I’ll wait a bit, and I may well stop, as many will, because between the price of electricity and wear on the hardware it’s really not worth it. I think what pushed many people to launch new nodes is the false revenue estimate from the Storj estimator, which is really unrealistic. There is also the risk of a disk failure and losing the money held back by Storj for the first 15 months, the critical lack of customers, and so on.
Honestly, apart from those who already have a NAS, who leaves a machine running 24/7? Nobody, except for their nodes, while praying not to have any failure (internet, hard drive, power supply, etc.).
That’s just my own personal opinion, that’s all.


One node on an 8-CPU ARM board with 8TB of storage (5TB allocated for the time being; I’ll raise it when needed according to the free space left on the device).

Disk is a SATA 3.5" 8TB 7.2k RPM CMR.

1 Like

1 test node so far

1 x 4GB Pi 4
1 x 1TB USB attached drive

I’m still trying to work out the numbers (a bit like graphtek). So far it seems that to generate a return you have to deploy the most basic, lowest-power configuration you can, which a Pi is great for. You then just take a probability risk on your node being wiped out by a hardware or ISP fault over its life.


4 nodes, 14TB total hdd

1 Like

I could tell you but then …

1 Like

… I would admit that I spent way too much time and money on STORJ :smiley:


i’m certainly with you there.

Latest node is a synology that i bought pretty cheap off craigslist (i know everyone will gasp) because I have some other purposes for it too (or at least i’m trying to think some up).

This will take the data from my raspi nodes, as they are just more trouble than they are worth at this point and have too much escrow.


yeah same here… i will say tho that i’m not unhappy about picking a cheap old server, even tho it does eat a bit too much power… hopefully it will make up for that in bandwidth sometime in the near future…

and the more i get into the nitty gritty of hdd storage, the more i realize that one cannot expect to run a storagenode on a hdd for more than a couple of years max without issues popping up…
the drives don’t have to break to return bad data… so it’s really just a matter of how much bad data is acceptable… and how much the node will endure if it’s in the wrong spot…

i’m never going to store my own data on, or recommend to anyone, non-redundant storage solutions.

i’ve been copying around many many terabytes, if not into the hundreds, these last couple of months… and you would be horrified at just how often errors try to creep in… and now with zfs i catch them… and can always fix them…
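
For reference, catching (and repairing) those silent errors on zfs comes down to routine scrubs — something like this, with the pool name as a placeholder:

```shell
# Kick off a scrub: zfs re-reads every block in the pool, verifies its
# checksum, and repairs any bad copy from parity/mirror redundancy.
zpool scrub tank

# Check the result later; non-zero READ/WRITE/CKSUM counters mean a device
# returned bad data that zfs detected (and, with redundancy, fixed).
zpool status -v tank
```

Scheduling that scrub monthly (cron or a systemd timer) is the usual way to make sure bit rot is found while the redundancy to fix it still exists.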