Bandwidth utilization comparison thread

Yes, one node per row, without repair.

Those are very good stats; if you add it all together it will be several TB this month, and over 200 GB egress in one day.

These big speeds only started a few days ago. The best month was January: very big egress, and double payouts.

You should have seen January :slight_smile:

(in 2020-06 there are GE payouts)

So cool! So many nodes!
Is each node connected to a separate public IP?
@Krey could you post how much egress vs. stored data you have per node?

if something falls into my area of interest, I prefer to take it seriously

Mostly. There are 7 different locations.

OK, I replaced the picture… done.
I was stuck with most nodes in recent months, so they slept through most of the big ingress.

I’m even a little ashamed in front of my comrades; most of their nodes grew above 10 TiB.

i dunno how to read that… but if that is from one day, then you would have to use multiple IPs, at least from how I understand the numbers I’m seeing… using multiple subnets might not be allowed according to the Terms of Service or whatever it’s called… but i dunno… just saying…

but the ToS might be updated again soon… it’s currently unknown if it will allow or disallow the usage of multiple IPs, at least from the same location… i think… i dunno… so much stuff to learn about all of this.

with that many drives, how are you doing with disk failures?
do they tend to die before a certain age… or how old is the oldest storage node?

i kinda have this view that HDDs are too unreliable, so I run with redundancy myself…

I wrote about this many times. The /24 filter is a bad design decision. Decentralization can be achieved in many other ways. This decision makes my service lower quality than it could be if I reserved channels rather than running multiple tunnels.
This rule works for newbies, not for pro IT engineers, who are the ones actually keeping the core network running now.
Look at repair. It’s ugly.
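For context, the /24 filter being discussed groups nodes by their IPv4 /24 subnet so that pieces of one segment don't land on nodes sharing the same subnet. A rough illustration of that grouping using Python's standard `ipaddress` module (this is a sketch of the idea, not the satellite's actual selection code):

```python
import ipaddress
from collections import defaultdict

def subnet_key(ip: str) -> str:
    """Collapse an IPv4 address to its /24 network, e.g. 203.0.113.7 -> 203.0.113.0/24."""
    return str(ipaddress.ip_network(f"{ip}/24", strict=False))

def group_by_subnet(node_ips):
    """Bucket node IPs by /24; a /24-filtering satellite would pick at most one per bucket."""
    groups = defaultdict(list)
    for ip in node_ips:
        groups[subnet_key(ip)].append(ip)
    return dict(groups)

# hypothetical node IPs: two share a /24, one does not
nodes = ["203.0.113.7", "203.0.113.200", "198.51.100.4"]
print(group_by_subnet(nodes))
```

Two nodes in `203.0.113.0/24` would count as one "location" under such a rule, which is why running many nodes behind one subnet doesn't multiply ingress.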

sure ain’t an elegant solution… but maybe all one needs is a patch for a test setup until the time comes to build something better… also I’m sure lots of stuff changed along the way and didn’t get updated… they may not be well versed in network practice… even though it gives me shivers to say that lol

In my setup I don’t use RAID at all. Almost all my HDDs are 24/7 versions, like WD Purple, or datacenter editions for the newer HDDs. So far all the HDDs are alive. Also, they are only 6.4 W max.

I ran some numbers on a few of your nodes: the KB/s per TB stored is mostly consistent with ours.
You get more egress because of more nodes, more public IPs, or your tricks around the public IP rule :smiley:
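The "KB/s per TB stored" comparison is just a normalization so nodes of different sizes can be compared fairly. A minimal sketch of the arithmetic; the node figures below are made up for illustration:

```python
def egress_rate_per_tb(egress_gb_per_day: float, stored_tb: float) -> float:
    """Average egress in KB/s, normalized per TB of stored data."""
    kb_per_s = egress_gb_per_day * 1e6 / 86400  # 1 GB = 1e6 KB, 86400 s/day
    return kb_per_s / stored_tb

# hypothetical node: 20 GB egress per day while storing 4 TB
print(round(egress_rate_per_tb(20, 4), 1))  # about 57.9 KB/s per TB
```

Two nodes with very different totals but similar per-TB rates are seeing the same customer demand; the bigger absolute egress then comes from simply storing more (across more IPs), not from "better" traffic.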

Anyway, awesome to meet a guy/gal who has the power of “many nodes” :smiley:
image

4 drives grew reallocated sectors; I replaced them.
How many drives? I stopped counting; they get added or removed too frequently. More than 50.
The first node is 16 months old.

My numbers are blurred due to terabytes of node migrations.

I know how to cheat the system very well, but it is very EXPENSIVE and only works if there is big ingress, big egress, and a lot of free space. As we never know the traffic in advance, it can also be very unprofitable. Reconfiguration takes time. And it would overload my router; I would need a more powerful one then.

Today there are a lot of people writing about it being unprofitable, and showing nodes below 1 TB. Profit starts at 4 TB+ I think, and you also need to wait some months, at least until the 50/50 held amount. The biggest problem is that test traffic comes from the new satellite, and a node’s first 3 months on it are thin.
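The "50/50 held amount" refers to the payout schedule where part of a young node's earnings is withheld. A sketch, assuming the commonly documented schedule (75% held in months 1-3, 50% in months 4-6, 25% in months 7-9, nothing held from month 10); treat the exact percentages as an assumption to verify against current docs:

```python
def held_fraction(node_age_months: int) -> float:
    """Fraction of earnings withheld, by node age (assumed 75/50/25/0 schedule)."""
    if node_age_months <= 3:
        return 0.75
    if node_age_months <= 6:
        return 0.50
    if node_age_months <= 9:
        return 0.25
    return 0.0

# hypothetical: a 5-month-old node earning $10 keeps half of it
earned = 10.0
paid_out = earned * (1 - held_fraction(5))
print(paid_out)  # 5.0
```

This is why early-month payouts look so small even when traffic is decent: the node is both young (less data) and in the heaviest held-back brackets.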

Agree. I didn’t count egress from Storj v2.
Personally I can grow up to 200 TiB used by winter, and up to a petabyte in the next years. Only if Storj is alive, of course.

But we need more nodes. With the current profit for newbies that is hard to achieve. We need more investment, actions, ads, and more promo work for the token itself.
We need client-oriented software.

I am waiting for my client to change video servers soon, so I can buy cheap servers with lots of storage; then I will make expansions. Today I am only swapping some 500-1000 GB nodes for 8 TB ones.

yeah, 1 TB nodes are in almost all cases… aside from maybe on an RPi in a country with cheap electricity… basically unprofitable… or was it 500 GB… may have been 500 GB… anyways, not much profit at that level, especially when one needs to account for overhead…

i know my overhead is kinda horrible… but I decided I wanted a proper server, not a tinker toy.
but at least I can expand beyond reasonable limits for the next few years…

not sure if I’m going solar or buying more hard drives though… of course at present speeds it will still take ages for my current capacity to fill… I think shoofar did the math and got it to be 4 months…

and that’s to the 24 TB mark… then I will of course just expand again… though I would hate to add more overhead… I kinda also want to get a disk shelf… but I may start buying larger disks for the server’s HDD bays first…

it would make sense to stack up some reserve drives to be able to quickly respond to insane peak ingress, or just to have some spares… and I suppose that is sort of where one gets into the whole datacenter mindset… HOW MUCH DATA DENSITY CAN I GET… O:O
muahaha, I could have 6 more drives’ worth of capacity in the server… so that’s close to 100 TB… of course I would need like 150 TB worth of drives because of my redundancy… but at least it’s one mean pool that can do some tricks… xD

currently I’ve got 4x 3 TB left, the rest are 6 TB… got like 60 TB worth… but I ran into hardware issues because I mixed SAS and SATA :smiley: so 2x 6 TB I cannot use in the pool, and the 3 TB drives I’m phasing out.

not sure what I will upgrade to; most likely it will be max capacity next time, because all of a sudden I will be limited by my bays… and then to get past that I need to add a DAS, and that’s extra overhead… so it would be nice if I was actually in the plus when I did that… but that’s partly also why I like the solar idea… at least if Storj goes bust, I won’t have an insane amount of HDDs with little use for them… at least with solar I would just be able to use the power for something else…

but I’m sure there are other horribly paid storage projects to throw unused HDDs at lol

not saying Storj is horribly paid… just that it’s a long, long time to wait…
the pay is actually quite decent imo

I can’t afford big professional servers with big density simply because they make a lot of noise; today my servers run very quietly.

Wait, didn’t someone do the math and figure out that nodes won’t fill up beyond 40 TB?
If we assume that about 5% of the space used is deleted per month and that average ingress is 100 GB per day (which seems to be the case), that would mean that at the 60 TB mark you’ll get 3 TB of ingress per month but also 3 TB worth of deleted files.
Of course that theory doesn’t work if ingress rises above 100 GB per day, but as a rule of thumb nodes can’t grow bigger than 600x the average daily ingress (still assuming 5% of total used space is deleted per month).
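The equilibrium argument above can be sketched in a few lines: stored data stops growing when monthly ingress equals monthly deletes, i.e. at `ingress_per_month / delete_rate`. Using the thread's assumed figures (100 GB/day ingress, 5% monthly deletes):

```python
def equilibrium_tb(ingress_gb_per_day: float, monthly_delete_rate: float) -> float:
    """Stored size (TB) at which monthly ingress equals monthly deletes."""
    monthly_ingress_tb = ingress_gb_per_day * 30 / 1000  # GB/day -> TB/month
    return monthly_ingress_tb / monthly_delete_rate

# 100 GB/day and 5% monthly deletes -> the 60 TB ceiling mentioned above
print(round(equilibrium_tb(100, 0.05), 1))
```

With a 5% monthly delete rate the ceiling works out to 30 / 0.05 = 600 times the daily ingress, which is exactly the "600x" rule of thumb; a lower delete rate or higher ingress pushes the ceiling up proportionally.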

I don’t know if there is a way to figure this out, but it would be nice to get the average amount of files deleted per month (in GB). It might give us a deeper insight.