One Petabyte lying around

But with RAID6, the equivalent of two drives never stored any useful data to begin with. It would be better to put the disk space to use rather than waste it on parity.

Even if (when!) a drive later fails, the total amount stored afterwards is still higher than it would have been with parity. This of course assumes enough /24s to actively fill all the nodes.

This is true; I had a negative experience when building a RAID array like this.

Sorry - how many nodes can you run on one IP? I’m currently running 2 on the same IP and it works fine. Want to make sure I’m not breaking any rules.

If you go by the ToS, then no.

Without limiting the generality of the foregoing, you will not:
Operate more than one (1) Storage Node behind the same IP address

but if you host more storage nodes, all of them behind the same /24 subnet are treated as one node

So theoretically, I can run as many nodes as I want, and it’s not making me earn less?

Another question: how can I check the progress of the vetting of my nodes? My main idea is to get all my drives running on one RPi (I currently have 2 drives, both very new) so that I don’t need multiple IPs. Is this possible?

Yes, but you also won’t earn more.

Yes, you can run as many nodes as you want without earning less, BUT your vetting takes longer.
Example:
If 1 node takes 1 month to run through the vetting process, 3 nodes take 3 months to finish it.

I would recommend setting up one node, getting that vetted and then starting another node once the first one reaches 100%.

You won’t get more storage filled if you have 5 nodes running at the same time. As long as they are in the same /24 subnet, they all share the traffic of a single node (see the sketch at the end of this post), which just results in higher power bills while none of the nodes is used to its full potential.

The vetting-progress checker you asked for:

edit:

The idea itself isn’t wrong, but if your Pi runs into an error, all nodes go down. Depending on the amount of storage you have, it’s probably better to split up the risk, but there are Pis out there running such setups.
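To make the /24 point above concrete: two IPv4 addresses count as the same /24 when their first three octets match. A minimal Python sketch (the example addresses are documentation ranges, not real nodes):

```python
import ipaddress

def same_slash24(ip_a: str, ip_b: str) -> bool:
    """Return True if both IPv4 addresses fall into the same /24 network."""
    net_a = ipaddress.ip_network(f"{ip_a}/24", strict=False)
    net_b = ipaddress.ip_network(f"{ip_b}/24", strict=False)
    return net_a == net_b

# These two addresses share the 203.0.113.0/24 network, so nodes behind them
# would be treated as one node for ingress purposes.
print(same_slash24("203.0.113.10", "203.0.113.200"))  # True
print(same_slash24("203.0.113.10", "198.51.100.7"))   # False
```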


Thank you for the information! I’ve got 2 running right now, and before I add any more I will wait until they’re vetted.

Yeah, that’s a good idea. I just like keeping everything in one place.

Is there a good way to get notifications if my node goes down? Maybe via email?

I personally run a docker container with Uptime Kuma.
It’s an open-source uptime tool that checks whether everything you connect to it is still running and, if not, gives you the option to get notifications via email, push notifications, a Telegram bot or even Discord.

I bet there are better Storj-related options you’ll find in the forum; I’ll have a quick look and post them here :slight_smile:

edit: Storage Node uptime script for notifying you when your node is down (Linux)

edit 2:
maybe even better (don’t know)
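For illustration, a minimal Python sketch of the kind of check such a script or tool performs. It assumes the node’s local web dashboard (port 14002 by default in the docker setup) is usable as a health endpoint; the SMTP host and addresses are placeholders you would have to fill in:

```python
#!/usr/bin/env python3
"""Sketch: poll the storage node's local dashboard and email an alert if it's down.

Assumptions: dashboard reachable at http://localhost:14002; SMTP host and
addresses below are placeholders. Run from cron every few minutes.
"""
import smtplib
import urllib.request
from email.message import EmailMessage

DASHBOARD_URL = "http://localhost:14002"   # default storagenode dashboard port
SMTP_HOST = "smtp.example.com"             # placeholder
MAIL_FROM = "node@example.com"             # placeholder
MAIL_TO = "you@example.com"                # placeholder

def node_is_up() -> bool:
    """Consider the node up if the dashboard answers with HTTP 200."""
    try:
        with urllib.request.urlopen(DASHBOARD_URL, timeout=10) as resp:
            return resp.status == 200
    except OSError:
        return False

def send_alert() -> None:
    """Send a plain-text email saying the dashboard did not respond."""
    msg = EmailMessage()
    msg["Subject"] = "Storj node appears to be down"
    msg["From"] = MAIL_FROM
    msg["To"] = MAIL_TO
    msg.set_content("The storage node dashboard did not respond.")
    with smtplib.SMTP(SMTP_HOST) as smtp:
        smtp.send_message(msg)

if __name__ == "__main__":
    if not node_is_up():
        send_alert()
```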


Thank you! I just had some other questions (sorry!)

I presume running multiple nodes distributes the traffic evenly among the nodes. Just another thing: how long does one audit take to happen? I’ve been running my nodes for 2 days and neither has any audits on any satellites.

The number of audits is a linear function of how much data you store, with some tuning for new nodes so that vetting takes about a month.

Leave it for a few weeks to accumulate more data, and audits will start appearing.


Vetting takes 100 audits per satellite, and I have seen very different numbers per satellite:
in the beginning, one of my satellites had 4 audits while the others had 0-1.
From what I read on the forum, vetting takes around 1-2 months.
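As a rough back-of-envelope: if vetting needs 100 audits per satellite, you can estimate how long is left from the audit counts you observe. The counts and per-day rates below are made up for illustration, and the real rate grows as the node stores more data, so this overestimates:

```python
# Estimate remaining vetting days per satellite, assuming the 100-audit
# threshold mentioned above. All numbers here are illustrative placeholders.
AUDITS_NEEDED = 100

observed = {
    "us1": {"audits_so_far": 4, "audits_per_day": 1.5},
    "eu1": {"audits_so_far": 1, "audits_per_day": 0.8},
    "ap1": {"audits_so_far": 0, "audits_per_day": 0.5},
}

for satellite, stats in observed.items():
    remaining = max(AUDITS_NEEDED - stats["audits_so_far"], 0)
    days_left = remaining / stats["audits_per_day"]
    print(f"{satellite}: ~{days_left:.0f} days of vetting left")
```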


Thanks for the information. I’ll just leave it and check up on it every now and then.


In the last 2 months, I started 3 nodes and all got vetted in 3 weeks, on different /24 subnets. But only on the main 3 sats: us1, eu1 and ap1.
I don’t care about the other 3, because they carry test data; the main 3 carry customer data. Use the earnings.py script mentioned above, made by BS. It will show you all your progress.

If you have too many HDDs and not enough IPs, don’t waste them on RAID. You will pay twice the energy needed for a node. If a drive fails, you lose that node’s progress, but by watching them from time to time you can anticipate the EoL of a drive and clone it to a new one. And ingress has increased from year to year: now it’s around 6TB/year, last year it was 3TB/year. The trend is very promising and will fill your new drives quickly (rough math at the end of this post).
And you have Exos drives, enterprise-grade drives. They are just the best drives you can use for Storj. Don’t worry so much.
Their minimum guaranteed life is 5 years. I have seen many failed nodes on this forum, but none of them because of an enterprise-grade drive failure. There is a dedicated thread somewhere…
The main reasons for failure: not using a UPS, human error when maintaining RAID arrays or moving nodes, Raspberry Pis’ bad USB connections to drives, and using general-purpose drives.
My advice is to try to avoid USB connections, reduce your power bill and use a UPS. This is a must! Spin up no more than 2 nodes per /24 subnet and wait for them to fill to 80-90%, then add more. Use as much RAM as you can; 16GB is best. It helps big time with buffering traffic and caching the filesystem.
Check the threads: “Tuning the filewalker” and “Synology Diskstation memory upgrade guide” to see what I discovered from using too little memory.
And you can always use those extra drives for something else, like Chia. I don’t know what projects are out there, and I’m not a fan of Chia; I consider it a waste of resources and won’t buy drives for that project, but since you already have them…
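To put those ingress figures in perspective, a rough fill-time calculation in Python, assuming the ~6TB/year per /24 quoted above, ignoring deletes and trash, and using a hypothetical 16TB drive size:

```python
# Back-of-envelope fill time at the ingress rate quoted above. Ingress is
# shared by all nodes in the same /24, and deletes/trash are ignored here.
drive_capacity_tb = 16          # hypothetical Exos size; adjust to your drive
ingress_tb_per_year = 6         # figure quoted above, per /24 subnet

years_to_fill = drive_capacity_tb / ingress_tb_per_year
print(f"~{years_to_fill:.1f} years to fill a {drive_capacity_tb} TB drive "
      f"at {ingress_tb_per_year} TB/year of ingress")
```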

They start from $165 on eBay.

I feel like you should check the following tool (if you haven’t already) to get a better idea of what to expect behind a /24 subnet:

Just so you know :slight_smile:
