Storage Node Operator on a 1 Gbps WAN link with a Raspberry Pi 4

Hello everyone…

I am from India. I have done a bit of reading about the project and I am super excited to join as a storage node operator to begin with.

I am planning to run an SNO on a Raspberry Pi 4 with 8 GB RAM, with storage on an externally connected 2 TB SSD over one of the USB 3.0 ports. My bandwidth can go up to 1 Gbps both upstream and downstream. I will connect the Pi through its Gigabit Ethernet port using a Cat 7 cable, and the router provided by my ISP supports Gigabit Ethernet on its LAN interfaces. I believe I should be good with regard to meeting the hardware requirements.

I would appreciate it if someone could let me know if I am good to go with this. If not, please let me know what should be done on my end.

Thanks
Jatin

2 Likes

Hello @jatindavey and welcome :slight_smile:

You’re more than fine with your setup; it’s even kind of overkill in certain aspects (bandwidth and the SSD).

You may want to copy the following estimator into your Google account to fill in your numbers and see what kind of bandwidth and storage space you can expect in the coming months:

Hello @jatindavey,
Welcome to the forum!

Almost any hardware is suitable for running a storagenode - it is not used that heavily when our customers use the network. We even have nodes running on routers: Running node on OpenWRT router?
You can find the recommended hardware requirements here: Prerequisites - Node Operator
The setup instructions for the Raspberry Pi are here: Install storagenode on Raspberry Pi3 or higher – Storj
Unlike mining, there is no predictable income or constant traffic; any customer behavior is normal.
Just make sure that you do not use SMR HDDs; they have proven to be slow and problematic: PSA: Beware of HDD manufacturers submarining SMR technology in HDD's without any public mention

All terms regarding traffic flow in our documentation and software are from the customer's point of view - an upload to the network is ingress to your node, and a download from the network is egress from your node.
By default our software uses SI (decimal) units for measurements, i.e. a TB is a terabyte in base 10 (unlike a TiB, which is base 2).
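For example (just a quick illustration in Python, not something from our software), this is why a "2 TB" drive shows up as roughly 1.82 TiB in many OS tools:

    # Illustration only: SI (base-10) terabytes vs. binary (base-2) tebibytes.
    TB = 10**12   # bytes in a terabyte (what the dashboard reports)
    TiB = 2**40   # bytes in a tebibyte (what many OS tools report)

    drive_bytes = 2 * TB          # a "2 TB" drive
    print(drive_bytes / TiB)      # ~1.82 TiB, which is why the numbers differ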

Each new storagenode must be vetted; while it is being vetted it can receive only 5% of the customers' traffic. To be vetted on one satellite, the node must pass 100 audits from that satellite. For a single node this should take at least a month.
You can expand your storage later by either replacing the disk with a larger one and migrating all data to it, or by adding a new node. All nodes behind the same /24 subnet of public IPs are treated as one node for uploads (ingress), and as separate ones for egress traffic and audits.
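To illustrate the /24 grouping (a small sketch only, not the satellite's actual node-selection code):

    import ipaddress

    # Nodes whose public IPs fall in the same /24 network share ingress selection.
    def same_ingress_group(ip_a: str, ip_b: str) -> bool:
        net_a = ipaddress.ip_network(f"{ip_a}/24", strict=False)
        net_b = ipaddress.ip_network(f"{ip_b}/24", strict=False)
        return net_a == net_b

    print(same_ingress_group("203.0.113.10", "203.0.113.200"))  # True  -> one node for ingress
    print(same_ingress_group("203.0.113.10", "198.51.100.7"))   # False -> independent for ingress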

You will be paid $1.50/TB of used space, $20/TB for egress traffic to customers, and $10/TB for audit and repair egress traffic. Ingress is not paid, because you are then paid for storing the data it brings.
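As a rough back-of-the-envelope (the usage numbers below are made-up placeholders, and I am reading the storage rate as per TB-month):

    # Rough monthly payout estimate using the rates quoted above.
    STORAGE_RATE = 1.5    # $ per TB-month of used space
    EGRESS_RATE = 20.0    # $ per TB of customer egress
    REPAIR_RATE = 10.0    # $ per TB of audit/repair egress

    stored_tb = 1.0       # average TB stored over the month (placeholder)
    egress_tb = 0.2       # TB downloaded by customers (placeholder)
    repair_tb = 0.05      # TB of audit/repair egress (placeholder)

    payout = stored_tb * STORAGE_RATE + egress_tb * EGRESS_RATE + repair_tb * REPAIR_RATE
    print(f"Estimated payout: ${payout:.2f}")   # $6.00 with these placeholder numbers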

2 Likes

Hey @Pac

Appreciate the quick response.

Will be on the network in a week or two :slight_smile:

Not looking at earnings for now. Just excited to be part of the network :slight_smile:

3 Likes

Got it! I will be part of the Storj network in a week or two :slight_smile:

Hi @Alexey

I did some reading on HDDs - CMR vs. SMR - after your post.
In addition to the SSDs, I have a spare Seagate HDD. You can find more details on the disk in the Amazon link below:

From Seagate:

I can see that the above HDD uses CMR.

Are these HDDs fine to use for storage on the storage node?

Thanks
Jatin

We cannot give you recommendations regarding purchases; on the contrary, we recommend using what you have now and not investing in anything solely for Storj - you likely will not see a ROI anytime soon.

I can only say that CMR should be OK.
Personally I do not like Seagate (I have had problems with them in the past), so maybe wait for other Community members to confirm.

The IronWolf is great for a node; I'm using a few IronWolfs myself. You're not going to be using the NAS-specific features, though. But if you have it lying around, you might as well put it to use.

@Alexey I think @jatindavey already had the drive and just added the Amazon link for info.

3 Likes

Thank you @Alexey, @BrightSilence

I intend to use Ubuntu 20.04 LTS (64-bit) on the storage node.
Is there any recommendation on the filesystem to use on the storage disks? I read in a few forums that ZFS is a good pick, hence I wanted to check with the community on which filesystem suits the Storj node workload.

Thanks
Jatin

I haven’t tried it myself, but ZFS seems to work well if you want to use specific ZFS features (many in the forums are using it). Otherwise ext4 is the optimal “basic” file system for use on Linux.
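If you want to double-check which filesystem a given path ends up on once it is mounted, a small Linux-only sketch like this works (it just reads /proc/mounts; the path is a placeholder):

    import os

    # Report the filesystem type of the mount that backs a path (Linux only).
    def fs_type(path: str) -> str:
        path = os.path.realpath(path)
        best_mount, best_type = "", "unknown"
        with open("/proc/mounts") as mounts:
            for line in mounts:
                _, mount_point, fstype, *_ = line.split()
                prefix = mount_point.rstrip("/") + "/"
                if (path == mount_point or path.startswith(prefix)) and len(mount_point) > len(best_mount):
                    best_mount, best_type = mount_point, fstype
        return best_type

    print(fs_type("/mnt/storagenode"))  # e.g. "ext4" (placeholder path)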

1 Like

Thanks @baker
Will go with ext4 for now.

1 Like

Hello and welcome. I just got started recently myself. You may find some of the answers that came in on my own first post helpful:

I finally managed to set things up. I have set up the node on a Raspberry Pi 4 and a Seagate IronWolf 4 TB HDD, running Ubuntu 20.04.2 LTS 64-bit.

I can see that in the last 48 hours, around 25 GB of bandwidth has been used. I have a few questions, though.

  1. I tried installing Watchtower, but it looks like it does not work on the ARM architecture. I got the warning below when starting the Docker container:

WARNING: The requested image’s platform (linux/amd64) does not match the detected host platform (linux/arm64/v8) and no specific platform was requested

So I will have to rely on manual updates, I guess. Is there any way I can get notified of a new version, either in the dashboard or via an email from Storj?

And am I all good with respect to becoming a Storage Node Operator? Let me know if I am missing anything.

  2. I understand that I am still in the vetting stage. Is there any way I can see the progress of my vetting - something like the percentage of vetting completed, the percentage remaining, or any other metric?

  3. The vetting seems to be using very low bandwidth (25 GB in 48 hours). Is this expected? Will the vetting start using more bandwidth as it progresses? I would appreciate some insight into how the vetting is done; I want to make sure things are fine on my end.

If the watchtower is running, it should update the storagenode anyway.

Please use this calculator to see your vetting progress: Earnings calculator (Update 2023-12-05: v13.1.0 - Now with support for different payouts per satellite - Detailed earnings info and health status of your node, including vetting progress)

While your node is in the vetting process, it can receive only 5% of the customers' traffic. To be vetted on one satellite, the node must pass 100 audits from it. For a single node this should take at least a month.

If you want to estimate how long it may take to fill up your disk and how much you may earn, you can use the Realistic earnings estimator.

Thanks @Alexey

Regarding your reply to my first question:

Yes, Watchtower does reach the running state after I deploy the container. So should I just ignore the warning and leave the Watchtower container running to receive updates for the storagenode container?

Thanks for the other replies. I will read through the links and get back.

Yes. We have reports from other users that it functions normally regardless of this warning.
Please report back if that is no longer the case.

Sure, let me run it and report back if there are any issues.

On that note, when there is a new version of the storagenode, do we get an email notification about the new version's availability?

No, we do not.
We do not use the provided email much, to avoid spamming our fellow Storage Node Operators. You would only receive warnings when your node is suspended and/or disqualified.

To monitor Docker Hub, you can use external tools or services.
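For example, a small external check could poll the public Docker Hub tags endpoint for storjlabs/storagenode (a sketch only; the endpoint and JSON layout are my understanding of Docker Hub's public v2 API, so please verify before relying on it):

    import json
    import urllib.request

    # Sketch: list the most recently updated tags of the storagenode image on Docker Hub.
    URL = "https://hub.docker.com/v2/repositories/storjlabs/storagenode/tags?page_size=10"

    with urllib.request.urlopen(URL, timeout=10) as resp:
        data = json.load(resp)

    for tag in data.get("results", []):
        print(tag.get("name"), tag.get("last_updated"))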

Our watchtower will check automatically at random intervals (between 12 and 72 hours), and will download and install a new version when one becomes available.
The random interval is a workaround until the regular updater is implemented for the docker version too.
The whole goal is to avoid shutting down the entire network if we accidentally introduce a bug, so that we have time to fix it and roll out a fix.
So please do not do manual updates; use the provided tool instead. There is no need to reinvent the wheel.

1 Like

Sure. Makes sense!

1 Like

It has been almost 147 hours since my node came up. I was reading about how to add multiple nodes on the same public IP address and realized that I had not added port forwarding for UDP port 28967; I had only added it for TCP port 28967. I have added the port forwarding rule for UDP now.
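A quick way to sanity-check the TCP side of the forwarding is a connect test like the one below (the public address is a placeholder; UDP cannot be confirmed this way since it is connectionless, and testing from inside the LAN may fail if the router does not do hairpin NAT):

    import socket

    # Quick TCP reachability check for the node port.
    PUBLIC_ADDRESS = "203.0.113.10"   # placeholder: your public IP or DDNS hostname
    PORT = 28967

    try:
        with socket.create_connection((PUBLIC_ADDRESS, PORT), timeout=5):
            print("TCP port 28967 reachable")
    except OSError as err:
        print(f"TCP connection failed: {err}")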

In addition to that, I have also completed the configuration mentioned here:

My Suspension, Audit and Online scores on all the satellites are 100% so far. After making the above changes for UDP, should I restart the storagenode Docker container?