1.6 PB Storj node(s) questions

I'm building a fairly scalable 60-bay home node setup using a kinda overkill system, and I have some questions.

NOTE: I haven't bought anything yet, so feel free to swap parts or suggest better stuff.

-Case: Chenbro RM43699 (60x 3.5" hot-swap bays, SAS/SATA)

-Cooling: stock fans replaced with 6x Noctua NF-A14 industrialPPC-3000 PWM + fan hub

-Motherboard: ASRock Rack ROME8D-2T (ECC, PCIe 4.0, dual 10GbE onboard)

-CPU: AMD EPYC 7302P

-RAM: 256GB ECC DDR4 RDIMM (not sure if 512GB is necessary, please let me know)

-Storage drives: Seagate Exos 40X 26TB (scaling to 1.6 PB with up to 60 total)

-Boot/cache SSDs: 2x Intel P4510 1.6TB NVMe + StarTech U.2-to-PCIe adapters (not sure if an SSD is required here; it is required for the Filecoin network)

-HBAs: 3x LSI 9300-8e or 9207-8e (IT mode) + SAS cables (SFF-8088/8644)

-Networking: Intel i350-T2 dual 1GbE NIC (capped at 1 Gbps because it costs $300 for 6 public IPs in the Middle East)

-PSU: redundant 80+ Platinum

Alright, so, questions:

1 - What's the typical required uptime? Do I need a UPS / lithium-ion batteries?

2 - How much electricity does the system draw, and what internet bandwidth is required (up/down)? I have fairly cheap electricity at $0.04/kWh, but internet costs about $300-400 for FTTH with 6 public IPs.

3 - Node count: I've read that you can only have one node per public IP. Is that correct? Can I get away with 2 per IP?

4 - Is it optimal to run 6x 26TB HDDs per node, or is it better to run smaller-capacity HDDs across multiple nodes?

5 - How fast does data fill up on the network? Also, about the payout schedule:

Months 1-3: 75% held, 25% paid.

Months 4-6: 50/50.

Months 7-9: 25% held, 75% paid.

Months 10-15: 100% paid to you for storage and traffic.

Month 16+: 50% of the held amount is returned to you; 50% is held permanently, as a guarantee for graceful exit that your node will remain functional.
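The schedule quoted above can be sketched as a small helper. This is just a sketch of the numbers as pasted in the post, not official Storj payout code:

```python
def payout_split(month: int) -> tuple[float, float]:
    """Return (held_fraction, paid_fraction) of earnings for a node
    of the given age in months, per the schedule quoted above."""
    if month <= 3:
        return 0.75, 0.25
    if month <= 6:
        return 0.50, 0.50
    if month <= 9:
        return 0.25, 0.75
    return 0.0, 1.0  # month 10+: everything is paid out

def held_returned(total_held: float) -> float:
    """At month 16, half of the accumulated held amount is returned."""
    return total_held * 0.5

print(payout_split(2))       # (0.75, 0.25)
print(payout_split(12))      # (0.0, 1.0)
print(held_returned(100.0))  # 50.0
```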

I saw this post earlier. So the holdback only applies to the first 9 months, and anything I earn after month 16 I still keep 100% of, right?

6 - What's the rate per TB/month, and how fast am I likely to fill up with the current setup?

7 - Are there any penalties for downtime?

8 - Is anyone here running a similar setup? If so, please share data.

9 - Is this region-dependent?

10 - Any advice you might want to give me?

11 - I intend to scale to 10 PB in the future. Is anyone in the partner program at the moment? Can I get in, and what are the requirements?

Fun fact: I had to write this twice because Reddit wouldn't let me post.

Thanks for reading :upside_down_face:

LoL. Are you trolling?

I’m glad you are building a server. How does it relate to storj? Or are you simply bragging?

About a terabyte a year, plateauing around 10 TB, when uploads balance out deletions.

Answers to all your other questions are on this forum and in the node operator ToS.

Understand the purpose of the project. Read the whitepaper, blogs, documentation, and forum.


Lol, we recently needed about 350 TB worth of storage and will likely need to expand further in the future, so I figured I might dedicate part of the capacity to the Storj network. The fill-up rate is pretty damn low according to you (even if I run 6 public IPs for nodes), so I think I'll probably try to apply to the partner program.

Please do link some if you have them

Trying to do so atm
Thanks for passing by

All the answers to your questions are here: Storj Forum Answers

Hello @Tokenomics,
Welcome to the forum!

Why not store your data on Storj? If you need filesystem-like access, you may use

Regarding hardware: it absolutely doesn't matter. The node can run on an OpenWRT router, so any server would be fine too: Running node on OpenWRT router?. Usage depends exclusively on customers, not on your hardware or software choices, except for some edge cases described here: Step 1. Understand Prerequisites - Storj Docs.


6 IPs... oh wow! Heh, heh... OK, well, you'd fill that in about 30,000 years (yes, that's not sarcasm), all the while violating the ToS, lol. Good luck.
What you should probably do is get SOC 2 certified (if you're not already) and then contact Storj Select; however, do note that primary demand is US-based.

2 cents,
Julio

Not exactly true. We also have Select in AP and EU.
But they need to be a DC or its representative, with a 24x7 NOC and so on. Definitely not individuals.

Are you aware of the total amount of customer data stored on the Storj network?
Hint: https://storjstats.info/d/storj/storj-network-statistics

Don’t do this setup just for Storj.


Isn’t the RM43699 the 100-bay model?

Definitely go for the 9300s (or newer), as you want SAS3/12G.

As others said, the network size has been flat for months now, but that's somewhat intentional. If your six IPs are all in different /24s, you can probably still get 1-3 TB/IP/year. Remember there are already almost 28,000 other nodes on the network competing to use their space too.

For short-term downtime: no, other than your system not being available to receive more data, or to be paid for delivering it. If it drops below 60% uptime (measured over 30 days) it will be suspended, and no further data will be sent to the node to store until uptime improves. If it's offline too long, it can be permanently disqualified.
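The 60%-over-30-days floor mentioned above implies a rough downtime budget. A quick back-of-the-envelope calculation (note: Storj's actual online scoring is more involved than a flat percentage, so treat this as an approximation):

```python
# Rough downtime budget implied by a 60% uptime floor over a 30-day window.
window_days = 30
uptime_floor = 0.60

max_offline_hours = window_days * 24 * (1 - uptime_floor)
print(max_offline_hours)        # 288.0 hours offline before suspension
print(max_offline_hours / 24)   # 12.0 days
```

In other words, even without a UPS you have a generous margin for ordinary power cuts, as long as outages don't stack up within a single 30-day window.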

Check out TheVan's setup; he has links describing his hardware. It looks like he has almost 650 TB filled now.

It's good that you have other reasons to keep all that space online, as Storj alone will not pay for running it. I think you probably won't even fill one HDD per year.

You should definitely look at the Select/Commercial Node program. If Storj needs capacity in your geography, it could help you fill faster (assuming there are paid customer uploads), since it avoids the /24 restrictions.

KINDA overkill. Cracks me up!

The main thing is that Storj is just never gonna fill up that system, let alone even one disk.

Storj limits ingress to each /24 IP range. So if you have 6 static IPs but they are all in the same /24, it will be pretty much the same as if you had only one IP address for all your nodes.
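You can check whether two of your addresses would be lumped together under that /24 rule with the standard library. The addresses below are hypothetical documentation ranges, just for illustration:

```python
import ipaddress

def same_slash24(a: str, b: str) -> bool:
    """True if two IPv4 addresses fall in the same /24 network,
    i.e. they would share one ingress allotment under the /24 rule above."""
    net_a = ipaddress.ip_network(f"{a}/24", strict=False)
    return ipaddress.ip_address(b) in net_a

print(same_slash24("203.0.113.10", "203.0.113.99"))  # True  -> shared ingress
print(same_slash24("203.0.113.10", "198.51.100.7"))  # False -> independent ingress
```

If your ISP hands out all 6 IPs from one block (which is common), they will almost certainly share a /24, so the extra IPs buy you nothing for ingress.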

The advice remains to run one node per drive. Each node in Docker takes under 1 GB of RAM in normal operation. RAM can sometimes balloon if the storage backend becomes unavailable or starts lagging.

The chassis you describe is meant to hold a motherboard, but not a regular ATX or E-ATX size like the ASRock Rack board. Also, I'm unsure why you're talking about external SAS HBAs when you'd probably want internal ones.