Bandwidth utilization comparison thread

well let's do the math on that then… seems to take like 260-270 watts atm… tho i did add a gfx card recently, and i also changed some power management… so it may have improved a bit and then gained the extra draw from the gfx card… but i never got the card to work correctly… so not sure what i'm going to do about that…

my fans are like 60-70 watts each, so 180-210 watts of my power is most likely from them running at about full speed, because of the way they're hooked up, which i haven't gotten around to fixing… might look at it yet again next time i reboot the server… would be nice to cut down my overhead… just a bit…

but still

i got 60 TB in the server… at $20-30 or so per TB, so that's $1200-1800
in harddrives, and the server uses 270 watts… roughly 9000 hours in a year, so 2430 kWh, and at $0.33 per kWh (about a third) that's $810 a year… granted it is a significant amount… but if i could just fix the fan issue… then i might drop that to close to half… and if i had a different case i could use larger fans and bring it down further, and maybe also reduce my noise level…
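a quick sketch of that math (same numbers as above, just rounded estimates):

```python
# Rough yearly electricity cost vs. drive cost, using the estimates above.
power_w = 270          # server draw in watts
hours_per_year = 9000  # rounded up from 8760
price_per_kwh = 0.33   # $ per kWh

energy_kwh = power_w * hours_per_year / 1000     # 2430 kWh per year
electricity_cost = energy_kwh * price_per_kwh    # ~$800/year (the post rounds to $810 using 1/3)

drive_cost_low = 60 * 20    # 60 TB at $20/TB -> $1200
drive_cost_high = 60 * 30   # 60 TB at $30/TB -> $1800

print(f"{energy_kwh:.0f} kWh/year -> ${electricity_cost:.0f}/year in power")
print(f"drives: ${drive_cost_low}-${drive_cost_high}")
```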

if i could halve the server's power cost, then it would take years for the cost of electricity to reach the price of the hdd's, ofc another option is simply to give the server more work :smiley:

but i suppose i should get around to looking at my overhead costs… ofc if i reduce my cooling i have less headroom to run more drives later on the same cpus… and my disks may overheat…

don't really have much knowledge on fan power usage or how to build efficient air-based cooling systems, but it might be something i should look into…
i doubt the data centers that used this gear in the past would have wanted anything inefficient…
ofc they'd be looking at vastly different numbers and scales…

You have a really power-hungry setup; my 115TB eats 320W, and that includes 2 UPS devices.


dual cpu, and still like 2/3 of it is just my crazy fans that don't want to adjust their damn speed…
ofc i doubt it helps that the mobo also has quad nic ports on dual controllers, 2x HBAs + a large onboard SATA controller with 6 ports and an additional 4 SAS ports onboard… i mean it's basically two computers' worth of gear in one box,

afaik the mobo has redundancy, so if it's configured correctly / aimed towards redundancy, then anything can break without impacting uptime…

pretty costly to run it like that tho… costs like half the computational power, 1/3 of the memory, and decreases overall cpu / pcie bandwidth… kinda cool tho… literally anything can break and it won't even stall or have to reboot… and then on top of that… it's OLD, and by computer standards i mean antique… 10 years now…


how do you even fill 115TB?
the ratio of deletes to stored data should make your storagenode stop growing way before that…
ofc nobody really knows yet what the max node size per IP/24 is… and it might also change as the traffic patterns change… but at like a 5% monthly deletion rate and 2-3 TB ingress a month, a maxed-out node would be 40-60TB
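roughly the reasoning behind that 40-60TB figure, as a minimal sketch (assuming ingress and the deletion rate stay constant; the function name is just for illustration):

```python
# Equilibrium node size: growth stops when monthly deletions
# (deletion_rate * stored) equal monthly ingress, i.e.
#   stored_max = ingress / deletion_rate
def max_node_size_tb(monthly_ingress_tb: float, monthly_deletion_rate: float) -> float:
    return monthly_ingress_tb / monthly_deletion_rate

for ingress in (2.0, 3.0):
    print(ingress, "TB/month ->", max_node_size_tb(ingress, 0.05), "TB")  # 40.0 and 60.0 TB
```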

ofc like my server has 60TB but my useful storj space is only 24tb, because of redundancy and drives used for other tasks… 12 TB ended up in a mirror because they were SAS drives and didn't want to play nice with the SATA drives in the same vdev/pool

I have 25 separate nodes and 2 ISP connections; right now they're at 63TB filled.


That amount of data sounds like it would need about 4 IP’s though, not 2.

well i've seen decent growth… barely 5 months now and i'm up to 13TB, so if that's an average then it's like 2.6TB per month; i forget what the deletion rate is but i think we usually use 5%
so that's a multiple of 20 to reach 100%, ergo a 52TB max size per node at present ingress…

I don't understand the whole IP thing… i mean how many are we even allowed to have…
we cannot even get a straight answer…
and if we have many IPs, what are the system requirements, since tardigrade storage is piece-based… and only a certain max number of pieces may be lost at any one time for recovery to still be possible…

then if we imagine 5% of all SNOs had 10 IP/24s each… those 5% would hold roughly 1/3 of all the data on the network…
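a rough sanity check of that fraction, assuming data is spread evenly per /24 subnet and everyone else runs a single subnet (all numbers hypothetical):

```python
# Share of data held by the multi-subnet operators, assuming data is
# distributed evenly per /24 subnet.
multi_fraction = 0.05   # 5% of SNOs
subnets_each = 10       # each of them runs 10 /24 subnets
single_fraction = 0.95  # everyone else on 1 subnet each

multi_subnets = multi_fraction * subnets_each     # 0.5
total_subnets = multi_subnets + single_fraction   # 1.45
print(multi_subnets / total_subnets)              # ~0.34, i.e. about 1/3
```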

and most of them wouldn't have separate systems, thus data integrity is compromised…
because the whole idea of distributed, redundant security depends on the data being in multiple locations…

The difference can also be in locations; also, how old is your node? I see some difference between nodes even on the same IP, let alone in different countries.

That hasn't been the average. In the earlier months after the last wipe, ingress was much lower. You started right when ingress started to get much higher. Nodes that have been online and vetted since before the last wipe will have just under 16TB stored atm, given that they don't have significant limitations that make them get much less data than average. This happens to be exactly 1/4th of what Vadim has, so that's why I said it would require 4 IP's. Technically 4 subnets.

Slightly higher is possible perhaps, but I'm very skeptical about double. Data seems fairly consistently spread across nodes that perform well. And my node has always been close to the testing sources, 99.9%+ online, and has been running since March last year, well before the last network wipe.

yeah there was a really good push of ingress shortly after i started my V3, and i had some downtime and instability issues at first which would put me slightly lower than peak… and ofc i joined after the wipe… not sure how long after exactly… i think my node was created on the 8th of march…

but yeah you are right… it does seem kinda high… ofc it also depends on whether the entire network was really wiped, i got no clue about that… i was just trying to point out that with 2 IP/24s the storage node farm wouldn't be able to grow in capacity, at least at current numbers…

can you remember what the average deletion % per month is?

@Vadim
my node is just about 5 months old… the individual node count doesn't really matter… the amount of data stored on each IP/24 will be highly consistent tho… from our testing, down to 1% deviation

My data can't be measured as normal; I have some nodes moved here from other locations at work, and at some point I was testing proxies (it's expensive in the end), so some of the data was collected in different ways.


My latest numbers suggest a 50TB theoretical maximum. But I wouldn't consider that a hard number at all. This is based on an average of 2.5TB ingress and about 5% of data being deleted every month. Those seem like fair amounts in recent months, but it fluctuates a lot.

@Vadim, that explains it. I’ve actually seen really consistent numbers when comparing to other reports. So it would be surprising if you got so much more on just 2 IP’s.

yeah that may make it difficult, no clue what happens in such situations, but if the deletion rate is 5% and you've got two IP/24s, then you should most likely expect the collective data stored to maybe shrink over time… but who knows what the numbers will be tomorrow…

Hrmm, I should actually account for the deletes as well. It’s probably closer to 3TB ingress and 5% deletes, so that would get you up to around 60TB per node.

So if you were to move multiple nodes with more than 60TB total over to a single subnet, yeah, you may see it decrease on average. But I don't think @Vadim's nodes are close to that limit just yet. And since his total space is 115TB, he likely would never run into that with 2 IP's. It's actually pretty ideally scaled for 2 IP's if traffic patterns remain similar. (Which they probably won't.)


Don't forget about repair traffic, it's also 3-5 GB a day and doesn't look like it's spread like ingress.
It's a good question for the devs: is repair ingress spread like usual ingress, or is there no /24 filter? To me it looks like there isn't one.

I have around 300GB repair ingress equally spread across 3 nodes this month. All nodes are on the same IP. That’ll give you a chance to compare. But my guess would be this is also about equally spread among nodes. Repair egress is of course a different story. The numbers I previously mentioned include repair btw.

@Mark
Mine will also make around $5… LOL, with almost two times more data stored :sleepy: and 75% in held amount :sob:

| Date | Ingress [GB] | Egress [GB] | Stored [GB] | Egress [‰ of stored] | Egress [kB/s] | Egress [kB/s per TB] |
|---|---|---|---|---|---|---|
| 25.07.2020 | 9.40 | 3.67 | 1 842 | 1.99 | 42.49 | 23.07 |
| 26.07.2020 | 14.43 | 4.00 | 1 850 | 2.16 | 46.35 | 25.05 |
| 27.07.2020 | 13.41 | 4.03 | 1 870 | 2.15 | 46.60 | 24.92 |
| 28.07.2020 | 10.02 | 4.12 | 1 880 | 2.19 | 47.71 | 25.38 |
| 29.07.2020 | 9.85 | 4.40 | 1 890 | 2.33 | 50.91 | 26.94 |
| 30.07.2020 | 11.60 | 4.36 | 1 910 | 2.28 | 50.41 | 26.40 |

Egress is still ~26 kB/s per TB
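how the derived columns above appear to be computed (my reading of the table; the exact column definitions are an assumption):

```python
# Derived egress figures for the 25.07.2020 row (column meanings assumed).
egress_gb = 3.67      # egress that day, in GB
stored_gb = 1842      # data stored, in GB

egress_permille = egress_gb / stored_gb * 1000          # ~1.99 ‰ of stored
egress_kbps = egress_gb * 1e6 / 86400                   # ~42.5 kB/s averaged over the day
egress_kbps_per_tb = egress_kbps / (stored_gb / 1000)   # ~23.1 kB/s per TB stored

print(round(egress_permille, 2), round(egress_kbps, 2), round(egress_kbps_per_tb, 2))
```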

I see. My node is about 1 year old and no longer has earnings held back. Just a matter of time for you.


@BrightSilence @Vadim we have been adding repair and normal ingress together to get accurate numbers… when we add both across all nodes' ingress in each subnet, we all usually get the same number, so long as everything is running smoothly ofc, isn't full, and what not… obviously :smiley:

that's how the numbers end up within 1% of each other, but only when repair is added into the mix… dunno why

Is that 5% a measured average across multiple IP/24s, or just what you have personally measured?
because the deletion number is a big factor in the result… like if it were just 4%, it would increase the max node size by like 25%
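a quick illustration of that sensitivity, reusing the same equilibrium idea from earlier (stored_max = ingress / deletion rate; the numbers are just examples):

```python
# How the assumed deletion rate shifts the equilibrium node size.
ingress_tb = 3.0                  # TB ingress per month (example value)
for rate in (0.05, 0.04):
    print(f"{rate:.0%} deletes -> {ingress_tb / rate:.0f} TB max")
# 5% deletes -> 60 TB max
# 4% deletes -> 75 TB max, i.e. about 25% larger
```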