Post pictures of your storagenode rig(s)

BGP Is the keyword here :+1:

1 Like

BGP eh? So you have your own AS and can get away with things like that. Cheeky sucker.

2 Likes

@stefanbenten
wasn’t it the BGP that was down the other day… xD
i think, i got rack envy… lol

and 1 question… did you ever regret using 4U servers? i got a 2U one and i kinda hate that i didn’t get a 4U, for all the now obvious reasons i’m sure you are familiar with…

@sorry2xs you could also register a domain… not really very different, but it looks prettier and it’s a much more permanent solution if you want it to be… then how you handle the data after it hits the domain is up to you… you could even host it locally, and i do believe there is built-in failover in the domain thing.

if you want a bit more pedestrian solution… not really my most knowledgeable area, but i’m pretty sure something like that is what i would do…

@SGC, last month I used 3.7 TB of bandwidth for one node, and bandwidth is pricey, so for now I will stick to the whole manual docker stop, rm and change IP routine, as I don’t quite get BGP and from what I have read it doesn’t appear to be cheap. But thanks anyway :wink:
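For reference, that manual "stop, rm, restart with the new address" cycle can be wrapped in a small script. This is only a sketch: the container name, address, and image tag below are placeholders, and a real node also needs its usual identity/storage mounts and WALLET/EMAIL variables. It is written as a dry run (every command is echoed) so nothing happens until the `echo`s are removed:

```shell
# Hedged sketch of the manual node-restart cycle; NODE and NEW_ADDR are
# hypothetical values, not anyone's real setup.
NODE="storagenode"
NEW_ADDR="mynewname.example.net:28967"

# Dry run: print the commands instead of executing them.
echo docker stop -t 300 "$NODE"
echo docker rm "$NODE"
echo docker run -d --name "$NODE" -e ADDRESS="$NEW_ADDR" storjlabs/storagenode:latest
```

The `-t 300` gives the node time to finish in-flight transfers before the container is killed.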

BGP isn’t related to domains… domains are the fixed / owned part of internet dns.
domains are in many cases fairly affordable, if they don’t have to be something special; like, say, buying facebook.uk would be pretty expensive… most likely.

while akjsdkjashfakjhsafdbjhksf.net you can most likely register for… $1 or $5 a year or whatever fairly insignificant amount.

this will allow you to get rid of or bypass the ddns solutions, because you’ve got your own name in the registry… your current ddns provider has a domain of its own… call it ddnsnameforfree.net
and then you get a subdomain of theirs… like this:
sorry2xs.ddnsnameforfree.net, while if you leased your own domain you could make it sorry2xs.net

it just makes it a bit more permanent and you don’t have to rely on a free service’s uptime… because to be fair you cannot count on a free dns service… not a whole lot anyways.
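Under the hood both options boil down to the same thing: a DNS A record pointing a name at your IP. A sketch in zone-file notation, using the thread's example names and a documentation-range address (all values hypothetical):

```
; owned domain: you control the whole zone
sorry2xs.net.                  300  IN  A  203.0.113.42

; free DDNS: the provider owns the zone, you just get one subdomain in it
sorry2xs.ddnsnameforfree.net.  300  IN  A  203.0.113.42
```

The low TTL (300 seconds) is what makes either name usable with a dynamic IP: resolvers re-check the record every few minutes, so an IP change propagates quickly.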

i barely even know what BGP is… supposedly the backbone of the internet, which apparently can crash large parts of the internet when misconfigured… it’s mainly run by the big carriers, which would make google, amazon and facebook look tiny even if they were combined into one…

anyways, if you want to get around the ddns thing… just search the net for buying a domain or registering a domain… it’s pretty straightforward stuff these days.
that would make it a paid service and thus the odds of downtime would most likely greatly decrease… or so one should hope… lol, if you pick a good host. and really i’m not sure you even need that… pretty sure one can just self host, but the host you rent the name from will be the record keeper, and the dns record will propagate to other servers, so even if it does crash your domain should still be accessible… at least in most cases.

ofc you still might need some way to track your dynamic ip, but you should be able to use your isp’s dns name for your connection… if you can find it… it should be pretty straightforward in their dns tables.

i suppose you could just use your isp’s dns name for your connection instead… that would completely bypass the entire rigmarole… and reduce your points of failure.
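Finding that ISP-assigned name is a reverse DNS (PTR) lookup on your public IP. A hedged sketch, shown as a dry run with a documentation-range address standing in for the real lookup (ifconfig.me is one of several "what is my IP" services):

```shell
# Stand-in for: IP=$(curl -s https://ifconfig.me)
IP="203.0.113.42"   # example address from the documentation range

# The actual query would be this dig command (printed here, not run):
echo "dig +short -x $IP"
```

If your ISP publishes PTR records, the output of that `dig` would be something like `host-203-0-113-42.example-isp.net.`, which is the name you could point things at.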

BGP is a routing protocol used to move packets between networks and select the next hop properly. I think the comments were related to running an AS (Autonomous System), which uses BGP to determine which ISP to use (when you multihome to more than one ISP) and to influence other ISPs on the Internet to prefer certain paths.

I have run our corporate network, which is technically a ‘stub AS’, meaning we do not allow traffic from one ISP to traverse us to reach another ISP. In essence we are an endpoint using several ISPs to reach us. If one ISP goes down (say AT&T) then we are still reachable through the other directly connected ISPs thanks to BGP.
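As a concrete illustration of that stub-AS idea (not the poster's actual config): in FRRouting you would peer with both ISPs but announce only your own prefix, so neither ISP ever learns a route through you to the other. All ASNs and prefixes below are documentation/example values:

```
router bgp 64500
 ! two upstream ISPs, example ASNs
 neighbor 198.51.100.1 remote-as 64496
 neighbor 203.0.113.1 remote-as 64497
 !
 address-family ipv4 unicast
  network 192.0.2.0/24
  ! announce only our own prefix outbound -> no transit through us
  neighbor 198.51.100.1 prefix-list OWN-PREFIXES out
  neighbor 203.0.113.1 prefix-list OWN-PREFIXES out
 exit-address-family
!
ip prefix-list OWN-PREFIXES seq 10 permit 192.0.2.0/24
```

The outbound prefix-list is what makes it a stub: without it, routes learned from ISP A could be re-advertised to ISP B, accidentally offering them transit.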

6 Likes

Yup, it’s a routing protocol… the SatNav of a multi-router network.


My most profitable ghetto server. Old HP entry level server hardware in an old Antec case. Noctua L12 heatsink tho :wink:

It features this great invention I call the “floating front panel”:

7 Likes

Now I know how Apple thought of floating store

I am a huge Noctua fan. :nerd_face:

2 Likes

So am I, actually my other PC is a Porsche…uhm my other PC is full of Noctua fans :wink:

1 Like

That is cool.
If I have my way… boot with NVMe *(now so cheap) and use the 6 SATA ports + NVMe ports as one drive *(NVMe as fast cache)

I added a 2x NVMe x16 PCIe riser with a Marvel 9251 16-port SATA card into just one system… and yeah… the wires are a spidery mess.

I have dedicated static IPs here *(16 IPs, $500/mth, 1 Gbps GPON) and they serve me well…

What’s the bandwidth over there, and the pricing too?

1 Like

cool setup bro, I like these kinds of low-power setups. How’s the success rate?

Ha ha lol! Best front panel ever :rofl:

But… what does it do?! :slight_smile:

RockPro64 + 2x 8TB Seagate Ironwolf.

13 Likes

RPi 4B, with 16TB of storage scattered amongst 4 disks, all stuck under a piece of furniture…

What a mess you say? :sweat_smile:
Let me clarify this:

See? It’s not that bad! :wink:

6 Likes

Nice one!
What is the current filled capacity and what is the CPU usage running with 4 nodes on RPi4?

When writing these lines, the load average is around 0.4:

top - 07:24:18 up 6 days, 20:26,  1 user,  load average: 0.30, 0.43, 0.42
Tasks: 165 total,   1 running, 164 sleeping,   0 stopped,   0 zombie
%Cpu(s):  1.2 us,  0.4 sy,  0.0 ni, 94.4 id,  3.9 wa,  0.0 hi,  0.1 si,  0.0 st
MiB Mem :   3906.0 total,    423.2 free,    321.6 used,   3161.2 buff/cache
MiB Swap:      0.0 total,      0.0 free,      0.0 used.   3291.8 avail Mem

But things are pretty quiet these days; I’ve seen the load average go between 1 and 2 some days. I’m not charting this though, so it may have been higher at times.
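For anyone reading those `top` numbers: load average counts runnable tasks, so on a 4-core board like the RPi 4 a sustained load of 1.6 means roughly 40% of total CPU capacity in use. A quick sanity-check calculation (1.6 is just an example value inside the 1–2 range mentioned above):

```shell
# Load average as a fraction of total CPU capacity on a 4-core Pi.
CORES=4
LOAD=1.6
awk -v l="$LOAD" -v c="$CORES" 'BEGIN { printf "%.0f%% busy\n", 100 * l / c }'
# prints "40% busy"
```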

I added disk 3 roughly one month ago because disks 1 & 2 were full, and finally added disk 4 two weeks ago, even though I know it’s not recommended to add more than 1 node at a time. It’s simply going to take longer to vet them and fill them up; I’m aware of that :slight_smile: But as the power consumption is not horrendous on that setup, I thought “why not”… The whole setup has been averaging around 26.7 W (ISP box excluded) for the last 15 days, that is ~1.67 W/TB, which sounds fine to me.
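The watts-per-terabyte figure checks out against the numbers in the post (26.7 W average over 2+1+5+8 = 16 TB):

```shell
# 26.7 W average draw divided by 16 TB of raw capacity.
awk 'BEGIN { printf "%.2f W/TB\n", 26.7 / 16 }'
# prints "1.67 W/TB"
```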

So, disks 1 & 2 are full, and disks 3 & 4 are almost empty, as expected:

pi@raspberrypi:~ $ df -H
Filesystem      Size  Used Avail Use% Mounted on
/dev/root        16G  1.9G   13G  13% /
[...]
/dev/mmcblk0p1  265M   55M  210M  21% /boot
/dev/sda1       2.0T  1.9T  141G  93% /.../storj/mounts/disk_1
/dev/sdc1       984G  903G   81G  92% /.../storj/mounts/disk_2
/dev/sdb1       5.0T  423G  4.3T   9% /.../storj/mounts/disk_3
/dev/sdd1       8.0T   69G  7.9T   1% /.../storj/mounts/disk_4

Notes:

  • None of the newest nodes are vetted yet (they range from ~25% to ~75% vetted).
  • All disks are SMR (boooo), except for disk2 (yaaay), which holds the rotated logs for all other nodes.
  • All disks are 2.5", except for disk4 which is a standard 3.5".
  • There is another reason that pushed me to plug in disk4 early: I had issues with this disk in the past, so I switched it to a new enclosure, and the new node it holds is kind of a guinea pig to test the disk and make sure that, as I suspect, the problems I faced in the past were caused by the enclosure. It’s too early to tell, but so far so good, and all tests (CrystalDisk, bad sectors, …) passed.
    Initially, that’s the disk that lost 5% of the files of another of my nodes, currently running on disk3 (see topic Storage node to Satellite : I've lost this block, sorry - surprisingly, it seems like it will survive).
1 Like

@Pac yeah, speeds these days are low, but i believe/hope that the RPi4 is more than OK for 4 drives :slight_smile: unless we see very high speeds, and then the situation might change a bit, but currently your setup proves my thoughts.

p.s. what software did you use for the second picture, with the graphics on top?

Well, I don’t see why it couldn’t handle 4 or even more drives, as the more nodes you have, the less each one is going to get queried by the network. CPU and RAM have never been the bottleneck on my setup so far; it’s always been the SMR disks, when not configured properly.

Did all the graphics by hand, took me a bit of time ^^
Have a look at https://www.photopea.com for an online and free Photoshop-like alternative.

1 Like

Can you tell me more about said configuration concerning SMR disks?

I wanted to refrain from using SMR altogether, but I have an 8TB unit lying around that I would love to put to use.

1 Like