Bandwidth utilization comparison thread

Which ones are you referring to?

You do know half the price for customers would mean half the profit for node operators per TB stored.
At $2 per TB it would be pretty difficult to make a profit, even though possible… half the price of Storj seems extremely low.

At $2 per stored TB, just earning back the per-TB price of a new HDD would take at least a year in my region. On top of that, if one is using, say, a 3 TB HDD, that's about 2 watts per TB, so roughly 1.5 kWh per TB per month. Add an RPi at 10 watts × 720 hours in a month, so 7.2 kWh, plus 3 TB × 1.5 kWh… let's be nice and say 10 kWh a month.

Let's be nice and say 20 cents per kWh… even though mine is significantly higher.
So running 1 × 3 TB HDD on an RPi is about $2 in monthly power costs…

Sure, you can use larger and more numerous HDDs to help offset the costs and lower the watts per TB of space offered.

So these are our recurring costs for the RPi-like example:
$2 per month. It can then earn $6 max, and will earn less for the first 6, if not 9, months… but let's imagine the node is maxed in age and capacity used.

Then you would have a profit on 3 TB of $4 per month, and out of that you will need to pay the one-time costs of the HDD and the RPi… let's say the HDD is used, so maybe $8 per TB ($24), and the RPi has a value of, say, $40. That means it takes 16 months, running at full blast, to make back what was spent on it…

However, that isn't the end of it of course… why would it be… we did ignore the first 9 months; in those first months you would almost without a doubt be working at a loss, but let's assume they make enough to at least cover the power costs.
So then we are 25 months into running a small storage node before we can even start earning towards the time spent on setting it up and maintaining it.
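
Here's the same back-of-envelope math as a minimal Python sketch (every figure is just the example assumption from above, not an actual payout rate):

```python
# Back-of-envelope ROI for the RPi + used 3 TB HDD example above.
# Every figure is the assumed example from the post, not a real payout rate.

capacity_tb = 3
power_kwh_month = 10           # ~7.2 kWh for the RPi + ~4.5 kWh for the HDD, rounded
price_per_kwh = 0.20           # $/kWh, assumed
max_income_month = 6.00        # $, node maxed in age and capacity
hdd_cost = 8 * capacity_tb     # used drive at ~$8/TB -> $24
rpi_cost = 40.00               # $, one-time

power_cost_month = power_kwh_month * price_per_kwh    # $2
profit_month = max_income_month - power_cost_month    # $4

hardware_cost = hdd_cost + rpi_cost                   # $64
payback_months = hardware_cost / profit_month         # 16 months at full blast

ramp_up_months = 9  # assumed to roughly break even on power while filling/vetting
print(f"monthly power cost:  ${power_cost_month:.2f}")
print(f"monthly profit:      ${profit_month:.2f}")
print(f"months to ROI:       {ramp_up_months} + {payback_months:.0f} = "
      f"{ramp_up_months + payback_months:.0f}")
```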

And then we are assuming that it's even viable long term on systems without ECC or redundancy; most likely these kinds of systems will create more trouble long term, and then you have more costs and an even longer road to ROI.

Not sure much lower earnings for storage node operators would make sense,
and I doubt many have the gear to reduce their overhead low enough to make that viable.

Also, we didn't include internet costs.

Of course we can make the math much more favorable and simple… but the fact is that it's rarely simple.

And for small node operators, an ROI that takes 3 years to reach isn't really very viable.
Sure, some company may want to give new customers an advantage… but don't fool yourself… they are in it to make money, not to give out free space. They may want to lock in a lot of customers before raising their prices, paying the extra expenses in the meantime.


This might be the current reason we are seeing a drop in ingress…
test satellite migrations, so no test data…

Could be mostly customer data we are seeing right now…


The way I see it, having the HDD full of data pays something, so it's better that I am running both Storj and Sia. If I ever fill up the 120 TB, I will then either expand or just cut back on the one that pays less. It's looking like an ROI of about 2 years on the hardware if the price stays the same, but given the gains Sia will make as progress is made, the earnings now will be worth much more. All that said, I would probably make more money if I just bought Sia and held for a while.


(prices are in GBP)

The math on mine is 35.50/TB storage cost (this is RAID 10), which factors in that redundancy (2.45/TB/month amortized).

My storage cost is set at 1.50/TB fixed; that is 24 months for the drives to pay for themselves (just counting the storage income). (It would be 12 months if not redundant.)

Electrical cost is 0.18/kWh.
A pair of drives will hold 14.5 TB and take 20 W; that is 2.43 total per month (0.17/TB/month).

So the overall cost per month per TB, if you're only counting a 2-year life span, is 2.62/TB/month with redundancy.
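
As a sketch, the same calculation in Python (the 2.45/TB/month hardware figure is taken directly from this post; the power term is recomputed from the stated inputs, so rounding lands it a penny or two off the quoted numbers):

```python
# Per-TB monthly cost for the redundant drive pair above (GBP).
# hw_per_tb_month is the post's quoted figure; power is recomputed.

HOURS_PER_MONTH = 720

def power_per_tb_month(watts: float, price_per_kwh: float, usable_tb: float) -> float:
    """Electricity cost per usable TB per month."""
    return watts / 1000 * HOURS_PER_MONTH * price_per_kwh / usable_tb

hw_per_tb_month = 2.45                      # quoted, amortized incl. RAID 10 redundancy
power = power_per_tb_month(20, 0.18, 14.5)  # ~0.18/TB/month (post rounds to 0.17)

print(f"power:  {power:.2f}/TB/month")
print(f"total:  {hw_per_tb_month + power:.2f}/TB/month")  # ~2.63 vs the quoted 2.62
```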

Obviously you then have bandwidth to factor into that cost.

Certainly won't make one rich lol

Running both and comparing them is a great idea, I think… with the current Sia prices I wouldn't have minded getting in on it, but back when I last checked, people were complaining about profits being basically nonexistent.

But yeah, branching out is always a good idea…
120 TB is also a lot of space.


I am starting to do the same thing as @zackclark70; since I recently acquired additional storage, I have the capacity. In addition, as I'm hosting a portion of my nodes in a DC, it would be nice to diversify should something happen to one platform. I take STORJ as the primary source of income for the DC and SIA as secondary. Once the STORJ node becomes larger and gets close to filling the HDD, I will scale back my SIA operations, should I decide to do so.

To add to that:

It is nice to see what the actual user data is. I know STORJ will always supply test data to check and improve the system. But it is nice to see how big the customer-data portion is, though I know it varies and depends on when businesses do backups and data dumps. Some days I can ingress 20-40 GB on a node and on others I only achieve 5 GB or less, so it depends.

The biggest concern for me is the rapid increase in the number of nodes. I know this isn't mining, but it occasionally can behave like it: the more empty nodes there are on the network, the more the traffic splits across them, comparable to the increasing difficulty of hitting a block when mining. (I used to mine, but I quit.) Technically it is good for the network, because more nodes increase capacity and other metrics. But this might eventually discourage new node operators and make it harder for others to run. We will just see what the future brings.
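
As a toy illustration of that dilution (the numbers are made up, and real selection actually works per /24 subnet rather than per node):

```python
# Toy model: if daily ingress were split evenly across eligible nodes,
# every new node would dilute everyone's share. Numbers are made up.

total_ingress_gb_per_day = 50_000  # assumed network-wide figure

for node_count in (5_000, 10_000, 20_000):
    per_node = total_ingress_gb_per_day / node_count
    print(f"{node_count:>6} nodes -> ~{per_node:.1f} GB/day each")
```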


It's my understanding though that Sia has contracts; of course, so long as it pays a profit on the storage it doesn't really matter, but I could imagine getting stuck with contracts one might not want. Of course with Sia one sets the terms… so I guess it's not really that big of a problem…

My knowledge of Sia is pretty limited; I've been wanting to start on it, but the prices seem to jump all over the place… last I saw it was $14 per TB, and the time before that people were fleeing because of low profits…
So I dunno… been wanting to, but haven't had the time; been working on an ADA stake pool lately,
and HA hardware upgrades.

Yeah, test ingress seems to be the majority; maybe Storj has difficulty seeing how the network behaves on current customer data and thus alternates between “high” test data and customer-only…

More nodes will make yields per node go down, which will make fewer people create new nodes, which in turn will give more data per node, and so on…
This is new and thus of course quite volatile; in time it will settle…
But yeah, I also suspect the small node operators will not end up being the majority of the network… however, taking into account the benefits of economies of scale… the advantages of distribution will also have limits.

I've been seeing $4 per stored TB average profit on older nodes… so can't complain about that.
Even if my setup still isn't really very profitable.


Egress is looking pretty good so far this month.

Summary:

Per node and per sat:


Mainly coming from Europe-North (purple) and Saltlake (yellow)


Yeah, most of it is just repair traffic :smiley: So traffic is not that good this month yet, but that was to be expected over the holidays.


I had been running a vetting node on my main node's IP address… removed that yesterday to see if that would improve my ingress and egress numbers a bit for this month.
I think it has been earning less lately, but I also haven't gotten full ingress for months at this point.
I know people say it shouldn't matter, which was why I placed an unvetted node there in the first place, but it seems like it consistently does reduce ingress: comparing with others' performance, I was getting about 10% less, consistently.

So it should be interesting to see. I wouldn't have thought it affected egress also… maybe because test data has an initially high download rate when it's just been uploaded.
I dunno… it's been like I can see the 10% drop in ingress as a direct and immediate 10% drop in earnings.

At first I thought it was just random flux… but it's been consistent now since Xmas.

But I just switched over last night, so it will need a day or two… the result was pretty clear on the Proxmox daily avg graph though.

Maybe I can finally get to 15 TB then… been wanting to pass that since before Xmas lol

I guess my neighbour snoring in the house down the street makes my lights flicker… of course it's just a theory, but it seems to correlate and consistently make my lights worse.


How else would you explain that I've been seeing 10% less ingress compared to other nodes for the last 4 months, i.e. since I started vetting nodes on my oldest node's IP,
while for at least the 6 months I was running before that, it would always equal the top of possible ingress when compared with others?

And obviously lacking 10% ingress will eventually lead to less egress… granted, the egress thing might be due to other factors I cannot exclude, such as the node also having shared an IP for a period…

And actually, a theory is proven; an unproven one is called a hypothesis…
you troll… :smiley:

I'm seeing less egress than I'm used to, and that's a fact… trying to figure out why…
Maybe I will, maybe I won't… maybe it's just random flux, or after-effects of my rough treatment of it.
Alas,
10% less ingress seemed like the obvious place to start.

Sorry, but trolling you sometimes feels like the only reasonable response :stuck_out_tongue_winking_eye:
There are so many reasons one of your nodes could get 10% less ingress… My biggest node on my home connection gets less ingress than my other nodes that are IP-forwarded from German VPS providers (all nodes run on the same host with the same hardware and the same drive models). How's that for logic? :smiley: Those nodes get more ingress even though their latency is higher and their HDDs are even more utilized.
So going by your logic, I would make the hypothesis that the bigger the node, the less ingress it gets??
Or take my 2.5TB node that gets more egress than my 3.6TB node, even though the 3.6TB one is older…??

In all seriousness though, one hypothesis for less ingress on that big node is that it is pushing all that repair egress and therefore gets less repair ingress (and we got lots of repair ingress lately).
But I just don’t know the exact reason.
However, I stick to facts. The split between vetted and unvetted nodes is fixed and satellite controlled. Speculating on its correctness based on your personal observations is just… pointless…


You forget that it was performing better before, and nothing has changed aside from me adding nodes being vetted on the same IP address…
In regard to forwarded nodes performing better, I have actually been seeing the same thing:
for some reason nodes routed through a datacenter get better ingress… dunno why… could be random chance of course… I have been pondering why I was seeing that though…

I haven't really done any measurements on it, but it looks like maybe 5%… from the nodes I'm comparing, they seem to be outperforming by 3-7%… maybe a bit more, because I calculated the % from their total ingress and back…
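
(Which total you treat as the 100% baseline changes the percentage a bit, which may be part of why the figure drifts; a quick illustration with made-up totals:)

```python
# The same absolute gap reads as a different percentage depending on
# which node is treated as the 100% baseline. Totals are made up.

home_gb, dc_gb = 40.0, 42.0

print(f"DC vs home baseline:  {(dc_gb - home_gb) / home_gb:.1%} more")  # 5.0% more
print(f"home vs DC baseline:  {(dc_gb - home_gb) / dc_gb:.1%} less")    # 4.8% less
```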

I checked the 1st to 3rd of April, and the two nodes I'm comparing with (DC-forwarded) vs home show a minimum of 2-3% when ingress was higher on the 3rd, and otherwise as high as 7%, but I think the average was somewhere between 3-5%.

The home one has beaten them on the 5th of April though… because I removed the vetting node from its IP, I assume. Of course it's very early to tell… but it's been behind the other two consistently, and they still share their IPs with vetting nodes… so that is just enough of an advantage to negate their 2-7% higher ingress…

Also, all ingress goes through the SLOG with sync=always, so ingress hardware limitations are exactly the same for all nodes.

My nodes get nearly the exact same ingress… they have slight deviations, but it's usually very low.
Not sure if it evens out over time… my two fully vetted DC-forwarded nodes have 42.56 and 42.79 GB ingress this month… I do think they deviate on a daily basis though.

Yeah, their daily flux is a bit larger, but their total ingress is just about as even as one could hope for.
I only have 3 nodes that are fully vetted to play with…

Well, it was why I initially put my vetting nodes on IPs with already-vetted nodes, but I've been noticing that mine always underperform in relation to others when people post their ingress…

So it really doesn't seem like much of a stretch to conclude that the vetting nodes are in some way affecting the ingress to the vetted nodes… I dunno why, but it's there… now I'm testing it by comparing with my nodes that still run vetting nodes…

Though you do make a good point: I should really use my 2 nodes that are both DC-forwarded and fully vetted so I can compare their ingress more accurately, because their deviation in ingress per month is usually less than 0.1%, apparently.

So weird that home-IP nodes underperform… will have to look into that… maybe the datacenters run a better network algorithm or something.

You say stuff is fact, but in truth you are just regurgitating what you have been told or read somewhere, and have done no practical testing to verify whether things actually behave like that.

Then it's easy to just say stuff isn't true…
I can see it's there… I'm very sure I know exactly why… and the numbers will prove it.

Egress is a lottery, we both know that… it does seem to average $4 per TB on well-performing nodes though.

I could make the argument that this is already true, because deletes go up, so the practical ingress that accumulates size continually goes down as node size goes up.
But we both know node ingress deviation is usually less than 1%.

I'm not saying anything about the split… I'm saying the unvetted node on the same IP as a vetted node seems to make the vetted node get less ingress, and not by a trivial amount.

I'm pretty sure I'm not speculating, and I have already seen the increase in ingress I would have expected today… but we will see how it looks in a few more days, when the deviation comes down and I'm comparing my two equal nodes.

Yeah, but that's the problem. You assume that the network is always completely stable, so when you change something on your end, it must be the reason for any change you see… That's not always a given.

Also don't forget unvetted nodes still vet on 6 satellites, so it's possible that node is already vetted on 3 satellites and “steals” some of your other node's ingress for those satellites, while the remaining 3 satellites will take a few more months to vet. During that time it might seem like your node gets 10% less ingress because some satellites split 50/50 now.
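
A small sketch of that effect, assuming (purely for illustration) that all six satellites send equal traffic and that vetted nodes behind the same /24 split a satellite's vetted traffic evenly:

```python
# How a fully vetted node's ingress changes as a second node on the same /24
# becomes vetted on k of 6 satellites. Assumes equal traffic per satellite and
# an even 50/50 split of vetted traffic between the two nodes; both assumptions.

SATELLITES = 6

def old_node_relative_ingress(vetted_on: int) -> float:
    """Old node's vetted ingress relative to running alone on the /24."""
    shared = vetted_on * 0.5            # satellites where both nodes are vetted
    alone = SATELLITES - vetted_on      # satellites where only the old node is vetted
    return (shared + alone) / SATELLITES

for k in range(SATELLITES + 1):
    print(f"vetted on {k}/6 satellites -> {old_node_relative_ingress(k):.0%} of baseline")
```

Under these assumptions, each satellite where both nodes are vetted shaves about 8% off the old node's ingress, so one or two shared-vetted satellites would already land in the ballpark of the 10% being discussed; real satellites don't send equal traffic, so the actual figure would differ.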

Well, some of my nodes are still not vetted; I had more unvetted nodes over the past 6 months and can't say I saw any difference.
But if you can prove it, please do. Just provide enough proof before you bring it up, because I feel like you are constantly bringing up new hypotheses in many topics (mostly without any tangible proof), even when already proven otherwise, and it can easily confuse newer people…
It's always good to confirm a statement or prove it wrong, and I'm always interested in reading about it, as long as it's more than just speculation and philosophical discussion.

I wasn't talking about “practical” ingress but real ingress, without deletes.

I'm using nodes that have had comparable ingress for months with very little deviation; as we have already tested in the past, ingress for nodes is basically the same, and that's also what I have been seeing with my fully vetted nodes.

The only weird deviation I've got is the one you mention: the datacenter-forwarded nodes seem to do better than the home-IP one, for whatever reason…

Nope, I already took that possibility into account when I started testing, though yeah, that was also my initial assumption.
The vetting nodes are only a couple of weeks old, and I checked their vetting using earnings.py just a few days ago; none of them is above about 30% on any satellite.

I wasn't expecting to see any difference, so I wasn't looking for it, but over an extended period I had started noticing that I would usually get less ingress than what people were posting…

Initially I figured it was other stuff; I had problems during Xmas with my ISP change, then last month my datacenter shut down my VPSes for a day, so that didn't exactly help the accuracy.
But as you know, daily ingress is usually very close on optimally performing nodes when comparing with others online.

I will try to prove it, and I'm going to approach it with practical testing which can easily be replicated.
But from the numbers I already have, it sure looks like it was the unvetted node on the same IP causing the reduction…

The world is a confusing place; just because I don't always bother proving the hypotheses I post doesn't mean they are all wrong, just that I see no benefit in actually locking down the knowledge, because it might not be very applicable to what I'm doing.
But I do go off on tangents and into irrelevant details way too often… I know :smiley:
I'll try to keep my speculation more clearly marked…

It's often very useful to debate topics and results to try to understand what they are saying; I'm not afraid of being wrong. I know some are very worried about such things…

But generally I find that when learning new things or doing experiments, one has to be wrong a lot more than one is right before one learns how stuff works, or at least sort of gets close to understanding something… because, in truth, can anything really be understood at all?

I like theory, but I find that only through verifiable experiments can one really be sure things behave as expected; stuff is usually too complex to account for everything with theory.

The universe is very good at throwing curveballs and keeping its cards close, and the topics are often far more complex than the simple facts we like to condense them down to.
But there I go again, getting all philosophical…

Yeah, I know… was just kidding around :smiley:

Can any of you please post an ingress graph of April's bandwidth usage? I've seen a very significant drop on my side since about a week ago.
Looking at the dashboard, I see that 60-80% of the ingress has been repair traffic recently. Is it the same for you?

Same here, 10 times more repair.


Yeah, April has been pretty much dead on arrival.
The 6th means about 20% of the month is gone, and we have gotten a little less than 10% of what we got last month.
Easter, network satellite software migrations, a new month and such… I guess.
Really nothing out of the usual; this happens from time to time.
