Starting a new node, worth it in 2024? Also some setup questions

Hi! I am about to upgrade my home NAS and backup NAS with a lot more storage and my old drives are going to be of little use. I am thinking of using them for a Storj node. The node would start out at around 9TB but will likely grow as I have more old drives in storage (pun intended). My questions:

  1. Is it worth investing a bit into the hardware to get this up and running? I would need something to hook the drives up to, and it would likely cost me $100 or so.

  2. How memory and CPU intensive are the nodes? How about network bandwidth?

  3. Obviously having more storage capacity is better, but creating something like a raidz would be more reliable. If a drive in a node fails, is it a big deal? Do you get penalized for it, or is it routine?

  4. I have seen the payout calculators and know this is not a huge money-making scheme, but is it likely that I'll just lose a ton of money running a node with no return at all?

I have been running various storage systems for a couple of decades and am a software developer, so I am very comfortable setting something like this up, but I have zero experience with Storj, so I'm asking y'all for your experiences and advice. Thank you!

It all depends upon customer demand on the network. I am making about $10/month with three nodes, totaling 35 TiB capacity.

2 Likes

Personally, I would say to do your own research and draw your own conclusions. Some people see it as worth investing in, while others discourage it.

Hosting a node is very light on the CPU. For memory, usually the more the better. Historically, having low amounts of memory could be a major bottleneck, but with the latest developments I believe it can run fine on fairly memory-light systems.

Regarding network bandwidth, the usage for a single /24 range (which is how data is distributed among nodes) is rather low; in my experience it can range from a few megabits/s to a few tens of megabits/s. This value will be affected by a lot of factors: the number of nodes in the /24 subnet, customer activity, your location relative to the customers, and probably more.
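
To put those figures in perspective, here is a quick back-of-the-envelope conversion of a sustained rate into monthly volume (the rates are just illustrative; real ingress fluctuates a lot):

```python
# Rough conversion of a sustained ingress rate into monthly volume.
# The example rates are illustrative only; actual traffic varies.

def mbps_to_tb_per_month(mbps: float, days: float = 30) -> float:
    """Convert a sustained rate in megabits/s to terabytes per month."""
    bytes_per_second = mbps * 1_000_000 / 8
    return bytes_per_second * 86_400 * days / 1e12

for rate in (5, 20, 50):  # Mbit/s
    print(f"{rate:>2} Mbit/s sustained ≈ {mbps_to_tb_per_month(rate):.1f} TB/month")
```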

It is not recommended to run RAID for Storj, as the network already has redundancy built in. In the case of a node failure, you will lose whatever held amount is left in the node; other than that, you are not penalized in any other way. As a SNO, it's better to host 2 nodes for the full capacity (and therefore potentially be paid for all the space) than 1 node on a RAID mirror for half the capacity (limiting your payout to half the space).

The amount of money you can lose depends on the investment you make. The payout calculators are mostly based on expected storage growth, which can be hard to predict.

Because of this, I would not recommend hosting a node if you are not expecting to keep it online for a long time, at least 15 months for example (when you get 50% of the held amount back).

At the beginning, your payout will be close to nothing as your drives are mostly empty. With time, they will start to fill up. How much they will fill up and how long that will take depends on customer activity, which, once again, is hard to predict. For example, I have an 8TB node that is 9 months old and has a bit over 4TB used.
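
As a very rough sketch of how that could extrapolate (purely linear, which real ingress is not, and ignoring deletes and the slower vetting period at the start):

```python
# Ballpark fill-time projection from the numbers above.
# Treat it as a rough guide only; ingress varies month to month.

months_so_far = 9
tb_filled_so_far = 4.0     # "a bit over 4TB used"
node_capacity_tb = 9.0     # e.g. the ~9TB the original poster mentioned

avg_fill_rate = tb_filled_so_far / months_so_far      # TB per month
months_to_fill = node_capacity_tb / avg_fill_rate

print(f"average fill rate ≈ {avg_fill_rate:.2f} TB/month")
print(f"time to fill {node_capacity_tb:.0f} TB at that rate ≈ {months_to_fill:.0f} months")
```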

What I personally recommend is to spin up a single node with whatever hardware you have available; that way you get some exposure to how the system works, learn its caveats, and get a better grasp of Storj overall.

1 Like

Amazing info, thank you! I do like the concept of Storj a lot and will probably use it to back up my NAS as a customer (offsite backups are important).

My plan is to keep the node online long term but I do wonder about single drives failing vs the whole node going offline.

Also, just to be clear: is the advice to have one node with all the storage, or to split it up one node per drive or some such, given that they'll be on the same IP address/at the same location?

Again thank you both for answering my questions!

Generally, always run one node per drive. Unless you have very good reasons to do otherwise, this is the best approach. Using redundancy will reduce your total available space, while pooling multiple drives together will increase the risk of losing all the data.

You also should not run multiple nodes per drive, as this can be a major bottleneck (especially for HDDs, which are the usual medium for nodes), because hosting a node is an IOPS-intensive workload.

The total data ingress will be the same regardless of whether it's a single big node or multiple smaller ones; the data will just be distributed among them. This also brings other advantages, such as being more competitive within the same subnet: e.g., if there are already 2 nodes operating in your /24 subnet and you add one node, you would get 1/3 of the data; with 2 of your own nodes, you would get 2/4 of the data.
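
In other words, a quick sketch of the arithmetic (assuming ingress is split evenly between all nodes within a /24):

```python
# Fraction of a /24 subnet's ingress that lands on your own nodes,
# assuming an even split between all nodes in the subnet.

def my_share(my_nodes: int, other_nodes_in_subnet: int) -> float:
    return my_nodes / (my_nodes + other_nodes_in_subnet)

print(my_share(1, 2))  # 1 of your nodes next to 2 others -> 1/3
print(my_share(2, 2))  # 2 of your nodes next to 2 others -> 2/4
```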

As you should be running a single node per drive, single drive failing = whole node going offline (only the one node).

By doing this, you ensure that if a drive is lost, you will only be losing that drive's worth of capacity, and no more.

The remaining nodes would still be fully operational. Even if you own multiple nodes, a failure has no repercussions on the other nodes, because the network has no concept of a “group of nodes” or an “owner of nodes”; every single node is its own independent entity.

1 Like

Hosting Storj started off as a hobby of mine.
I was working as a VMware administrator at the time, and I wanted/needed a home environment that could mimic not only the servers I had at work, but also the workload induced by guest OSes that were actually doing something.

Storj for me was the perfect candidate: it had network activity, it had disk activity, and I was free to choose the OS to work on.

I used my platform to learn a lot about VMware automation as well as Windows automation.

Gradually, I got all the certifications I wanted from VMware, and as I got my environment more and more right for what I wanted, my focus shifted to spending more time on the Storj side.

I run a few hundred nodes today, and have ~100TB hosted across all the locations I'm in. Before running storage nodes, I tried Chia, Filecoin and Burst, but none of them seemed real. It was all lottery tickets, without any real-world customers. That's what I like so much about Storj: it's solving a real problem for real customers today.
Does this mean that node growth is slower than on some other platforms? Yes. But I also think it means that Storj won't implode like Chia did.

4 Likes

less than 1TB per node though?

Regarding the initial question, you tend to make only a few bucks per disk, and they take forever to fill, so in terms of ROI you shouldn't buy a system for Storj, and arguably shouldn't even buy a hard drive for it.

1 Like

The disks I'm thinking of using are 1-2 TB, as I'm going to upgrade my NASes to something like 12TB drives. The thing I'd potentially need to buy is something to connect the disks to. I suppose I should list some options I have and get opinions:

  1. Put the disks into USB enclosures and connect them via USB to my main NAS, then run containers, one per disk, with each container running a Storj node (a sketch of what that could look like follows this list). Upside is that I don't need anything new except some really cheap USB3 enclosures. Speed would be plenty, and the NAS has plenty of computing power for this. Downside is that if my NAS goes down, so does every single one of my nodes.

  2. Get something like a Raspberry Pi 4 (or could an RPi Zero 2 W work?) and attach one per disk? I'd still need a USB enclosure or at least a connector. Realistically it could be a fun project to 3D print a tiny rack for these. The upside is that my cost per node would be something like $25 each with the Zeros and $45 with the 4s. Not terrible, but it would definitely take a while to pay off.

  3. Same idea as #1 but with a Raspberry Pi 5 + SATA HAT; I could do 4 drives per RPi. Same downside as #1 (one host going down takes several nodes with it), with the upside that it's a setup separate from the NAS.

  4. I do have RPi 1s lying around…
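
For option 1, here is a sketch of what the per-disk containers might look like. The image name, environment variables and mount points are how I understand the official docker instructions, so I'd double-check them against the current docs; the wallet, email, DDNS hostname, paths and sizes are placeholders.

```python
# Prints one "docker run" command per disk, each starting its own
# storage node. Values below are placeholders; verify the exact flags
# against the official Storj documentation before using.

disks = [
    # (path where the disk is mounted, external port for that node)
    ("/mnt/storj/disk1", 28967),
    ("/mnt/storj/disk2", 28968),
]

for i, (mount, port) in enumerate(disks, start=1):
    cmd = " ".join([
        "docker run -d --restart unless-stopped --stop-timeout 300",
        f"-p {port}:28967/tcp -p {port}:28967/udp",
        f"-p {14001 + i}:14002",               # per-node web dashboard
        "-e WALLET=0xYOUR_WALLET_ADDRESS",
        "-e EMAIL=you@example.com",
        f"-e ADDRESS=your.ddns.example.com:{port}",
        "-e STORAGE=1.7TB",                    # leave ~10% headroom on a 2TB disk
        f"--mount type=bind,source={mount}/identity,destination=/app/identity",
        f"--mount type=bind,source={mount}/data,destination=/app/config",
        f"--name storagenode{i}",
        "storjlabs/storagenode:latest",
    ])
    print(cmd)
```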

I have a 500 Mbps fiber connection and a solid home network, so bandwidth will be good. If I start getting some reasonable payouts, I could eventually connect more drives and justify the cost of upgrading to 1Gbps fiber, but I know that'll take a few years at least.

1 Like

Very short answer… definitely no.
Here's why: I spun up a new node over 7 months ago, passed the 50% vetting phase, and still…

You do your own math :wink:
It doesn't pay for the electricity it's running on (Raspberry Pi 5).
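
For anyone who wants to run the numbers themselves, here is the kind of back-of-the-envelope calculation I mean. Every figure below is an assumption: the payout rates are the ones I'm aware of ($1.50 per TB-month stored, $2.00 per TB of egress), so check the current official rates, and adjust the power draw and electricity price to your own setup.

```python
# Rough monthly payout vs. electricity cost for a young, mostly empty
# node on a Raspberry Pi 5. All numbers are assumptions; adjust them.

stored_tb    = 0.8     # average data stored this month
egress_tb    = 0.05    # customer downloads, typically small
storage_rate = 1.50    # USD per TB-month (assumed; check current rates)
egress_rate  = 2.00    # USD per TB of egress (assumed; check current rates)

payout = stored_tb * storage_rate + egress_tb * egress_rate

watts     = 10         # Pi 5 plus one mostly idle HDD, rough guess
kwh_price = 0.30       # USD per kWh, depends heavily on where you live
power_cost = watts / 1000 * 24 * 30 * kwh_price

print(f"estimated payout     ≈ ${payout:.2f}/month")
print(f"estimated power cost ≈ ${power_cost:.2f}/month")
```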

1 Like

Well, I can confirm that this November could not be more frustrating. Progress = zero. Average used total change = zero. Increase in total stored bytes = zero. It is literally almost the same values as when the month started.

2 Likes

Let’s face it:

Out of 5 nodes, 2 of them have been Gracefully Exited (after over 3 years), taking 32TB with them.
They were running on an HP Microserver N36L with 4x4TB, consuming 80W and earning around $9/month… nope, not happening.
Ever since the HUGE slash in payments, profitability has been going down until it reached the point of being pointless; the token also had a huge jump in price, so we only get a few coins.
Back in the day, I used to get 300-ish STORJ monthly… when it was worth pennies :slight_smile: Those were the days :smiley:

1 Like

Either you misunderstand the point of this project, or you don't want free money, or you don't want new people to join so you get a larger share of the stored data.

Hint: your server is already running anyway. If you don't run a node, you pay 100% of the operating costs; if you do run a node, Storj offsets some of them.

Buying hardware to run a node was never encouraged and cannot be worth it: otherwise Storj could rent or buy hardware directly, so why would they need you as a middleman? The point of the project is to utilize currently wasted, unused capacity. Bringing new capacity online directly contradicts that core principle.

4 Likes

No. If you happen to have unused online capacity, use it for Storj. If not, don't.

Very little CPU usage and fairly modest RAM requirements. However, nodes store millions of files, and to work properly all the metadata should ideally fit in RAM. I would recommend 8-16GB of available free RAM (since you seem to be using ZFS).

Use what you already have. If a node loses enough data, it is disqualified; the network will be fine either way. Since you plan this for a home NAS, I'd expect you'll have a number of small raidz1 vdevs and a special device. Create a new dataset for the Storj data and a separate (child) dataset for its databases, with the small-block threshold cranked up so all the databases go to the SSD.
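
A minimal sketch of that layout, assuming a pool named tank that already has an SSD special vdev; dataset names and property values are just examples to adjust:

```python
# Creates the datasets described above and routes the database dataset's
# blocks to the special (SSD) vdev. On recent OpenZFS, blocks up to
# special_small_blocks in size go to the special vdev, so setting it
# equal to the dataset's recordsize sends effectively everything there.

import subprocess

def zfs(*args: str) -> None:
    """Run a zfs command and fail loudly on errors."""
    subprocess.run(["zfs", *args], check=True)

zfs("create", "tank/storj")                  # node's blob storage (HDDs)
zfs("create", "tank/storj/databases")        # node's sqlite databases (SSD)

zfs("set", "recordsize=1M", "tank/storj")    # large blobs stay on the HDDs

zfs("set", "recordsize=64K", "tank/storj/databases")
zfs("set", "special_small_blocks=64K", "tank/storj/databases")
```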

Anything the node pays is 100% profit. Your server is running anyway; by having spare capacity and not running a node, you just waste it. Storj will help offset the costs. It won't pay for new hardware.

1 Like

What are those spikes starting in March with the stored customer data?

And why did the capacity drop so sharply in November?

So just hook up the drives to an existing NAS, run Storj nodes in a container, and don’t overthink it?

Don't hook up anything extra that you don't already need running. Run one or more nodes on your main array if you have extra space. When you need to reclaim the space for yourself, remove a sufficient number of nodes.

Or directly. Depends on the OS on the NAS.

Yes, that’s the key! :slight_smile:

1 Like

Back then there were lots of glitches; they might have been only display issues. Around that time they announced that they would increase trial-account deletions or move those accounts to another satellite. Maybe that was part of the cause.

I don't know; it could be anything. Glitches again, or node operators actively reducing capacity.

1 Like

This can’t be right. Or do you mean you have 35TB assigned, but only about 6TB filled?

1 Like

35 TiB total available across 3 nodes.
About 7 TiB data stored now.

1 Like