Bandwidth utilization comparison thread

A post was merged into an existing topic: Zfs discussions

Another pretty solid day:

It got to the point where I thought I might end up with egress more than double the running ingress for the month between my two nodes, but these past few days have very quickly made ingress catch up (excuse the void on the left, I just rebuilt my Prometheus):


Yeah, 40 GB added to my new node that finished vetting 5 or so days ago.
That's a 20% increase in stored data :smiley: Very good day indeed…

Sadly it's test data… not that it matters that much.
I'm just thinking it can't be sustainable for Storj Labs long term… to be paying out most of what we earn.

Ingress has been good the last few days.

I disagree. My ingress has been from asia-east, us-central and europe-west - none of which are test satellites.

Payouts per month are on average between 200-300k STORJ tokens (http://storjnet.info/). Currently Storj has over 214 million tokens under rolling timelock (STORJ Token Transfers); that's the equivalent of 59 years of current monthly payouts to SNOs. I think they could keep going ‘long term’ if they chose to.
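
As a rough back-of-envelope check of that 59-year figure (using the high end of the quoted 200-300k monthly range; these are just the numbers from storjnet.info, not an official projection):

```python
# Back-of-envelope runway estimate, assuming ~300k STORJ paid out per month
# (the upper end of the range quoted above) against the tokens under timelock.
tokens_held = 214_000_000      # STORJ under rolling timelock
payout_per_month = 300_000     # approx. monthly SNO payouts (high end)

months = tokens_held / payout_per_month
print(f"{months:.0f} months ≈ {months / 12:.0f} years of SNO payouts")
# -> 713 months ≈ 59 years
```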


Well, we have seen drops in data across all satellites when Storj was doing stuff that disrupted test data, and I would think one would need to test all the satellites… are we actually sure that there are dedicated test satellites and that the others don't carry any test data?
Seems like an odd time… of course, I guess if companies are migrating and such, this would be a good time to have large data transfers running.

Without customers nobody would want the tokens… but yeah, I suppose the main cost isn't SNOs but their other infrastructure. Still, cash flow is needed for any company; I think SNO expenses are something like 1/20th of their total expenses, so they could keep going for maybe 3 years… or a bit less.
60 years sounds very unrealistic, even if it's in some sense true.

Storj has said so. However, it looks to me like all satellites are used for some test data, while the “test” satellites (europe-north and saltlake) are used exclusively for tests; there is no customer data on those.


Currently on month 3 with my Storj node. This past week I've been getting tons of usage ingress, but I suspect it's largely test data, as I'm getting almost exactly 10-11 GB of usage ingress (occasionally more) from the same 3 satellites each day.

Internet speed: 250 Mbit/s up/down

Ingress dropped today:

I noticed the same.

Perfect timing, I'm in the middle of a huge data migration anyway, so lower load is better for the moment.

Things did kinda slow down over the past two days:

Yeah, things were going great (read: low but stable) until about 2000 EST on 2020-12-21, when the deletes started coming in:


Was looking good last night / this morning… and then pfSense gave out, again… again…

It seems to want to run for 1-2 days and then the routing just stops working.
So yet another broadside of improvements, hoping something will hit its mark; maybe I should check whether I still have IP access and it's just the DNS resolver or something…
I kinda figured it was the DHCP server, but that was a bust.

Not sure what's actually going on with that… and my paravirtualized NICs don't work,
but that's a driver issue, so that I can understand, even though they seem to load and work… sort of the same thing that happens here: everything seems to work from both sides, but pfSense just cannot route.

I suspect it has to do with me using vmbr VLANs and VLAN-tagged NICs from different layers… but one of the emulated NICs has to get a tag; for some reason I just cannot make a vmbr for that particular VLAN without the server getting its wires in a twist.

And then there is the pfSense initial assignment of interfaces from the boot-up console…
that is so bad, so, so bad… or maybe I'm just having more pfSense virtualization weirdness…


Just grasping at straws here, but does the instability that storjnet.info saw maybe have something to do with the amount of repair traffic over the past few days?


While I’m not complaining about all the repair egress that’s padding the bottom line of my nodes, I’m just hoping to put 2 & 2 together on this one as much as I can:

While I use virtualized pfSense at that single-node Ceph site I've referenced before, it does not run “great.” It routes for the VMs, and that's about it; Proxmox itself cannot even perform an apt-get update. So… it would greatly behoove you to get a separate appliance (pfSense-built or otherwise) running it. Your headaches will become significantly fewer.


I think I got it nailed down. I was going over some stats and noticed that memory usage was particularly high, like 99%, at the time when pfSense stopped working…

Then I went back and checked when I could see the bandwidth graphs drop during the last week, and those times also seemed to coincide with peak memory usage… so it seems likely that the system simply "ran out of memory", even though pfSense should have had free memory allocated… I don't know what happens exactly when the system runs totally out; it shouldn't really be possible, because the ZFS ARC should just drop cached data… but I guess that didn't work in this case.
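
For reference, here is a minimal sketch of how one could check whether the ARC actually gave memory back: it just parses /proc/spl/kstat/zfs/arcstats on the Proxmox host (field names as in OpenZFS on Linux; this is only an illustration, not my actual monitoring setup):

```python
# Minimal sketch: read /proc/spl/kstat/zfs/arcstats (OpenZFS on Linux) and
# print the current ARC size against its target and maximum, plus how much
# RAM the L2ARC headers occupy. Field names can differ between ZFS versions.

ARCSTATS = "/proc/spl/kstat/zfs/arcstats"
GIB = 1024 ** 3

def read_arcstats(path=ARCSTATS):
    stats = {}
    with open(path) as f:
        for line in f.readlines()[2:]:          # first two lines are kstat headers
            name, _kind, value = line.split()
            stats[name] = int(value)
    return stats

if __name__ == "__main__":
    s = read_arcstats()
    print(f"ARC size      : {s['size'] / GIB:.2f} GiB")
    print(f"ARC target (c): {s['c'] / GIB:.2f} GiB (c_max {s['c_max'] / GIB:.2f} GiB)")
    print(f"L2ARC data    : {s.get('l2_size', 0) / GIB:.2f} GiB")
    print(f"L2ARC headers : {s.get('l2_hdr_size', 0) / GIB:.2f} GiB of RAM")
```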

Kinda weird, but it sort of makes sense… the whole memory-flushing-around thing… I haven't really had to deal with Proxmox memory issues until now, so I guess I've been pretty lucky thus far…

My virtualized pfSense seems to run great otherwise; I didn't have high hopes for it and was kinda thinking I would end up with a dedicated system for it… but if it runs like it does now, and if memory is the only issue, then I don't see this changing any time soon…

But it has been painful to get working, lol.

Spun up my L2ARC again… and now with drivers that should keep working after the next kernel update :smiley: I didn't even know that was a thing… else maybe dial back the ARC max, try to make balloon RAM on the VMs work better, or simply buy more RAM or different RAM. 48 GB only goes so far :smiley:
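
If I do end up dialing back the ARC max, something like this sketch would do the math; the 50/50 split is just a placeholder, not a recommendation, and the modprobe path is the standard Debian/Proxmox location:

```python
# Sketch: pick a zfs_arc_max cap (here, half of installed RAM, purely as an
# example) and print the line for /etc/modprobe.d/zfs.conf on a Debian/Proxmox
# host. The cap can also be applied live via /sys/module/zfs/parameters/zfs_arc_max.
import os

total_ram = os.sysconf("SC_PAGE_SIZE") * os.sysconf("SC_PHYS_PAGES")   # bytes
arc_max = total_ram // 2                                               # example split

print(f"# total RAM {total_ram / 1024**3:.1f} GiB -> proposed ARC cap {arc_max / 1024**3:.1f} GiB")
print(f"options zfs zfs_arc_max={arc_max}")
# After editing /etc/modprobe.d/zfs.conf, run `update-initramfs -u` and reboot
# for the cap to apply at boot.
```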

The only problem with more RAM is that it will lower my RAM frequency, which I don't really want… I've only got 4 or 6 slots left, but the L2ARC is like 1 TB… so usually, after some extended uptime, stuff that isn't used long term ends up there and the RAM stabilizes around the 80% mark… but I guess without the L2ARC that doesn't really work, because the data would have to be loaded from the pool again… maybe that's why it crashed… I run no swap, and my pool had an L2ARC configured, but it had failed…

Because I've never seen memory issues before… and that was also one of my main reasons for getting such a massive L2ARC: it was meant to be a fill-in for memory, since I cannot cheaply or easily go past my current RAM level without heavy performance penalties.

It's been a while since I've seen a 1-to-1 repair-to-regular-ingress ratio.
Not since the 7th… interestingly enough, that might be around the time I really started tinkering with my new internet connection, so my numbers might not be that useful here…

High latency on the satellite could also be due to new workloads or enterprise backups/migrations being started as people went into lockdown, or whatever we are calling it.

I’ve been using proxmox for my personal projects for years now (We still use a lot of ESXi for important things).

I absolutely love pfsense and have been running it both on proxmox clusters as well as on old equipment for my home network for years. I haven’t had any issues with running it as a virtual machine (even with HA) within proxmox.

On my main clusters I have around 64 public IPs, so those are on vlan0. Then I create a virtual switch on vlan10 and set up a DHCP server on it in pfSense; when I create a new virtual machine on any of the Proxmox nodes in the cluster (being sure to tag it to vlan10), it automatically pulls a DHCP lease. If I need to, I can map ports from the public IPs (or even do a 1:1 public-to-private IP mapping if desired, though rarely). To be clear, the pfSense VM must have a minimum of 2 NICs: one on vlan0 and one on vlan10, connected to the switch.
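
Purely as an illustration of that layout (the VM id, bridge name, and VLAN numbers below are made-up placeholders, not my actual config), the NIC attachment comes down to a couple of `qm set` calls on the Proxmox node:

```python
# Sketch of attaching the two pfSense NICs described above via Proxmox's qm CLI.
# VM id 100 and bridge vmbr0 are placeholders; adapt to your own cluster.
import subprocess

VMID = "100"        # pfSense VM id (placeholder)
BRIDGE = "vmbr0"    # Proxmox bridge carrying both networks (placeholder)

# NIC 0: the public/WAN side (untagged here, i.e. "vlan0" in the setup above)
subprocess.run(["qm", "set", VMID, "--net0", f"virtio,bridge={BRIDGE}"], check=True)

# NIC 1: the LAN side, tagged onto VLAN 10, where the pfSense DHCP server lives
subprocess.run(["qm", "set", VMID, "--net1", f"virtio,bridge={BRIDGE},tag=10"], check=True)

# Guest VMs then get a single NIC tagged the same way, e.g.:
# qm set 101 --net0 virtio,bridge=vmbr0,tag=10
```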

I also use the same pfSense router to run HAProxy as a reverse proxy to handle SSL offloading for the various services that the virtual machines offer. This allows me to manage all of my SSL certs in one centralized place, reducing the installation/configuration required on each additional server.


I still have one of my VLAN tags done by pfSense… I've been trying to get rid of it, but haven't had much luck with that… I mean, I can easily move it over, but for some odd reason the VLAN switch will not see it…

The router I was working with before was terrible… trying to route ports in it was a gamble; I spent hours trying to do just basic port mappings, which it would continually fail to save without any real reason…
So I'm very happy to have finally moved to something like pfSense, giving me so much more control over routing behavior and configuration.

@joesmoe
My pfSense has two virtual NICs connected to the same vmbr but on different VLANs. I did find some guides to running it over one NIC… but that's most likely also just because mine uses two virtual NICs; it didn't seem too happy about me trying to use only one at one point :smiley:

I decided to start using the pfSense DHCP server to hand out some IP addresses based on MAC addresses, just because I've got some computers or VMs that may at times be reinstalled or offline for extended periods, and I kinda want them to keep the same IP address… I also ran into an issue with an Ubuntu server I had installed, where I just couldn't figure out how I had managed to make it keep one specific IP address… I could change it for a time, until a reboot, and then it would always go back… I tried like 10 different methods, even though after a while I realized there seemed to be only about 4 fundamentally different methods that just looked a bit different… it was weird…
I got really annoyed, so I just decided not to configure a static IP on any of them anymore and to keep them all listed in pfSense instead.

Not sure if that's a good approach… it seemed like a good idea at the time, and still does thus far :smiley:
And it's nice when it's all in one place.

Just use Linux as a router - while it doesn't have a web UI, it works great (as a VM or a physical server) and is very customizable :slight_smile:


This is precisely what I do with pfSense. Also note: your DHCP server can assign public IPs as well (though I rarely use this feature).

Way easier to configure the static maps in pfSense than having to do it in various OSes, potentially multiple times after reformats and such - completely agree.


If you want to be hardcore about it, my favorite router is OpenBSD. It can run on very minimal hardware, can handle an entire university's worth of traffic, and in general is very secure.

Where pfSense on the cluster comes in nicely is when you need multiple people managing the router (e.g. adding a DNS entry or an SSL cert). With OpenBSD and pf, I'm afraid someone will make a mistake.


Two notes-

L2ARC takes some memory too, so that's a factor you may need to account for (see the rough estimate after these notes).

I prefer to run my pfSense instances without any plugins aside from either Zabbix or something that exports to InfluxDB/Prometheus. BandwidthD, netdata, ntopng, etc… they all have to touch the disk to save things, and they sometimes build aggregations that can take RAM or CPU cycles away from routing, so I do all of those meta-aggregations elsewhere to keep from bogging down the device's primary function. I've seen SG-3100s reboot due to the watchdog, and it turned out it was just BandwidthD or Suricata chugging away and bottlenecking the system to the point that the watchdog timer expired and rebooted it.
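
On the first note, the RAM cost of L2ARC headers is easy to ballpark. A rough sketch, assuming the commonly quoted ~70 bytes of in-RAM header per L2ARC-resident record (the exact figure varies by OpenZFS version) and two hypothetical average record sizes:

```python
# Rough estimate of the RAM consumed by L2ARC headers for a ~1 TB cache device.
# The ~70 bytes/record header size and the record sizes are assumptions; the
# real overhead depends on the OpenZFS version and the workload's block sizes.

HEADER_BYTES = 70                              # approx. RAM per L2ARC-resident record

def l2arc_header_ram(l2arc_bytes, avg_record_bytes):
    records = l2arc_bytes / avg_record_bytes
    return records * HEADER_BYTES

ONE_TB = 1e12
for record_size in (128 * 1024, 16 * 1024):    # large vs. small average records
    ram = l2arc_header_ram(ONE_TB, record_size)
    print(f"avg record {record_size // 1024:>3} KiB -> ~{ram / 1e9:.2f} GB of RAM for headers")
# -> roughly 0.5 GB at 128 KiB records, ~4.3 GB at 16 KiB records
```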


Yeah, I completely agree that running other stuff in pfSense is most likely just a bad idea… at least in many cases… but I will assume most of the sensible built-in features it has are a safe bet…

And with regard to the memory: granted, the server hasn't been running for nearly as long, but just to kick it a bit extra, I spun up a couple of Windows VMs on top after I put the L2ARC back online.

Now it's at 91.70% and still dropping… yes, I know the L2ARC will require some RAM allocation to keep track of it, but from what I've seen thus far it's not a problem… I had it up to just shy of 1 TB and running for weeks… and of course it took like 6-8 weeks to get there…

But I really like it; it behaves like a fast SSD at many things… not perfect in any aspect, but good in all aspects… so when it's really been saturated over a few months, everything that has been run recently will still be there and be as snappy as on an SSD… sure, the odd small thing might still be needed from the HDDs, but 99% of the time, whether it's VMs, scrubs, programs, browsers, streaming content, ISOs… even the filewalker or the Storj databases end up in the L2ARC…

And if something needs memory, the ARC just drops the cached data immediately, because it is already in the L2ARC; and if you change what you are doing, it can immediately refill the entire allowed memory capacity from there…

Thus far it seems to work great… that was also why I kinda just left it broken for so long: to get a sense of what I was getting out of it… and I can say I did notice a lot of stuff being affected by my L2ARC being down.

Sure, it's not a perfect solution, but it's much cheaper than tons of RAM, and it seems to get kinda close…

Of course, much less would most likely do nearly as well… but I figured it would be a fun experiment.
And with such a huge L2ARC it will have a memory like an elephant:
if it's been used this year, it's most likely in there :smiley: at least on my tiny system.

And in a few months, persistent L2ARC… which should be a game changer.