Bandwidth utilization comparison thread

Maybe it is from loss of that traffic before your update?
I don’t know… I have 100%, but it might be a rounding error? It’s 0.00005% or less. I know it’s not a perfect 100.
I have 100%, but I have errors due to closed-connection downloads (I would interpret those as fails) or pings to benten.de that fail around 80-100 times a day…

If it doesn’t creep up, I wouldn’t mind those.
Or maybe stop, rm, and recreate your node (docker “rm”, not deleting the files - those are too valuable :rofl:).
This will empty the logs, so you can check whether the recent logs have failures rather than looking at your whole history (I don’t know how you check your failure %).
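If you do that, a rough sketch of the recreate (assuming your container is named storagenode and you still have your original docker run command), plus one way to eyeball failures in the fresh log - the grepped phrases are an assumption about the log wording, so adjust them to what your logs actually say:

```bash
# Recreate the container so the log starts fresh (the data on disk is untouched).
docker stop -t 300 storagenode
docker rm storagenode
# ...then re-run your original "docker run -d --name storagenode ..." command.

# Count recent failures vs. successes in the new, short log
# (phrases assumed; check your actual log lines).
docker logs storagenode 2>&1 | grep -c "download failed"
docker logs storagenode 2>&1 | grep -c "downloaded"
```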

1 Like

Turned out it was the new update, which shows very small variations… these were failed audits from many months ago, which is why it’s almost recovered… not sure there were ever enough for me to get below 100% back then, though…

But 1.13.3 is what changed… so nothing to worry about. Also checked the logs 5 weeks back - no issues.

You will get fails from time to time… but if everything is running near flawlessly you shouldn’t see many of them… maybe a few a day. I get between 0 and 5, and maybe 40-120 on a bad day when I’m doing lots of other stuff on the system.

1 Like

0 bytes of ingress for two days for me. Anyone else?

1 Like

You are most likely running v1.12.3 or older.

In most cases, versions older than 1.13.3 will get zero ingress… nobody seems to know why yet, AFAIK.

So update your storagenode version.

You can do this manually by following the guide here:
https://documentation.storj.io/setup/cli/software-updates
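For the docker setup, the linked guide boils down to roughly this - a sketch only, since the image tag and your own run flags may differ:

```bash
# Pull the newest storagenode image, then recreate the container with your usual flags.
docker pull storjlabs/storagenode:latest
docker stop -t 300 storagenode
docker rm storagenode
# docker run -d --restart unless-stopped --name storagenode <your usual flags> storjlabs/storagenode:latest
```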

1 Like

@SGC we have measures in place to encourage updating nodes. Every node that’s not within the “recommended” minimum version range (found on version.storj.io, in the processes section) is not getting ingress data.
We do discourage manual updates this way, as it’s fundamental for our network to make progress and have a unified node base. If the spread of versions is too wide, then changes to the protocol won’t work together.

The minimum version on version.storj.io is the actual minimum allowed version; everything lower than that is entirely unsupported. The second one is a soft limit to encourage updating :slight_smile:
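For anyone who wants to check what the version server currently advertises, something along these lines works - the jq path is based on the processes section mentioned above, so adjust it if the JSON differs:

```bash
# Show the version information the network publishes for storagenodes.
curl -s https://version.storj.io | jq '.processes.storagenode'
```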

2 Likes

Well, if his auto-update didn’t update to 1.13.3 yet, then obviously, being without any ingress, he would want to update it…

I wasn’t trying to say a manual update is better than auto-update, just that the reason his storagenode wasn’t getting ingress was that it wasn’t on the latest version… maybe that’s because you guys just moved the soft limit… I dunno.

I made this one.

But it happened immediately after the update… so many nodes didn’t get the chance to actually be up to date yet, and thus got “cheated” out of their ingress…

Maybe the soft limit should have a delay so the auto-updater can complete its update cycle before it starts “encouraging” nodes to update… :smiley:

And I will be using the new version of auto-update when we figure out exactly how it should work.

1 Like

This. Spot on.

Watchtower didn’t work for me - it broke the nodes twice and simply didn’t update most of the time. I only have a few months under my belt, but several nodes, and if everyone else is using it I might want to give in and embrace auto-updating. I guess it’s time to fire up a monitoring system of some sort - maybe Elasticsearch/Elastic Stack.
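For reference, if I do give in, the Watchtower setup I’ve seen suggested for storagenodes is roughly this - a sketch only; the image, container names and stop timeout are the parts to adapt to your own setup:

```bash
# Let Watchtower watch only the storagenode container (and itself), with a long
# stop timeout so the node can shut down cleanly before being recreated.
docker run -d --restart=always --name watchtower \
  -v /var/run/docker.sock:/var/run/docker.sock \
  containrrr/watchtower storagenode watchtower --stop-timeout 300s
```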

2 Likes

I haven’t set up auto-update… it’s not really something I would do after working with software a good deal… I’ve seen too many bad updates being pushed, so I want to verify an update is good before I apply it to “critical” infrastructure.

1 Like

Almost everything on my server runs as a Docker container and I update every single one of them manually. I do that maybe once a week.
It might not be the best approach for critical updates, but at least it prevents bad updates from causing problems.
As for other products like Nextcloud, Home Assistant, etc., I actually never update to the next major version right away because it has too many bugs that break things. I always wait for the first or second minor update. Then it is safe to update :smiley:
(Much like older Windows versions: always wait until the 2nd service pack if you don’t want BSODs all the time :rofl:)

2 Likes

I guess I’ll take the counter side and say I let Watchtower update mine and have not had an issue yet (knocks on wood). Nextcloud, since it was mentioned - I can’t even get the containerized version of that to update without going nuke-and-pave each time.

1 Like

Ingress gets lower and lower :frowning:

4 Likes

what ingress lol

Yeah, I would have expected it to have gone up a long time ago…
I am still seeing significant egress, and repair egress has been dropping a lot,
so hopefully the task Storj set the network to perform is done soon…

In the meantime I’ve started migrating my main node to a new pool…
which is turning out to be a monumental task - copying an active 14 TB node multiple times,
because I lack enough disks to move it only once and get the final configuration I want…

Hopefully I can be done before ingress picks up again… but right now my hopes are not high; if all goes according to current estimates my migration will end up taking a month… O.o

3 Likes

I’m OK with this - you only get paid for egress anyway. As long as egress doesn’t stop, I’m fine, since my nodes are full anyway.

1 Like

There could be an argument made for limiting ingress and just turning up egress while Storj is testing the network… it doesn’t make sense to keep too many nodes filled for no reason, since it’s not exactly instant to delete it all again.

I won’t speak for Storj’s motivations behind the current ingress, but it might mean nothing at all, or they may be expecting real ingress to pick up soon and thus want to keep the network ready.

Or their testing of ingress is completed and for the next while they will just have lower ingress and larger egress… to test other behaviors of the current network…

Or they are still doing maintenance - that stuff is really nice to get done before things get busy… they may not get as good a chance to do it later as they have now… it may take years before they want to go through all the stuff they are doing now again.

2 Likes

Migration of my 2 TB node was monumental… (for my slow disk, haha), so migrating 14 TB can be a real challenge…
Maybe buy those new SSD drives? They are like 8 TB now :open_mouth: and have ~10x the write speed… too bad they are still kind of expensive.
Good luck, man!
I was using a sync tool and screen for uninterrupted transfers.
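Roughly what I did, in case it helps - rsync assumed, and the paths are just placeholders:

```bash
# Run inside a screen session so the copy survives SSH disconnects.
screen -S migrate

# First pass(es) while the node keeps running:
rsync -a --info=progress2 /old/storagenode/ /new/storagenode/

# Repeat until little changes, then stop the node and do a final pass
# that also removes files deleted in the meantime:
rsync -a --delete /old/storagenode/ /new/storagenode/
```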

1 Like

Took a little under 7 days for the first move…

With the storagenode running, ofc. I’m getting to the stuff that can be transferred sequentially now - fewer IOPS required for these big files… so I’ve been seeing speeds of up to 230 MB/s, and it seems to be that low because it doesn’t get time to ramp up, not sure why… it’s like it takes a brief while to go into high gear… 2-3 GB files get 200+ though, and I bet if they were larger I would get more…

Initially I thought something was wrong because it seemed slow… but I’ve done tests and I can get good speeds, even though there seem to be some bottlenecks when getting to 500+,
most likely because the raidz1 vdevs of the pool aren’t equally sized… so not all data will exist on all drives.

But now I’m soon ready to take apart the old pool, which gives me 10 free HDDs.

8 TB SSDs - well, as tempting as it sounds, and it would be great for many reasons… to be fair, the price isn’t as bad as one might think… I think it was 5x last time I checked… SSD cost compared to HDD cost.

Ofc that’s the simple purchase metric… the math gets much more advanced than that… and I really have been considering whether that’s the route I should take.
One of the strong points of SSDs is their highly variable power utilization…

My current 1.6 TB SSD uses 8 watts in light continual usage… and can pull up to 25 watts in my system because my old PCIe bus cannot give it more… but the SSD could pull 37 watts, which would boost performance an additional 50% over my current max at 25 watts… O.o

Found out after I bought it that some people have found the 6.4 TB versions for almost the same price I paid for mine… :frowning:
But I suppose those sold out real fast when people discovered they were up for grabs…
At those prices it would essentially be 1/4 the price of mine… which is about the 5x mark compared to HDD…

Then it isn’t really a question of what to use anymore… so yeah, I really have second thoughts every time I order HDDs.

But for now, once I’ve migrated, hopefully I won’t have to migrate again anytime soon.

1 Like

Many small files, like Storj’s, do not play well with being transferred disk-to-disk - there’s a lot of metadata to write for each file and a lot of atomic, synchronous writes to make. Speed might pick up if the disks can tolerate a slightly deeper queue depth (aka parallelization) when working with many small files.

Imagine dealing with OCR’d PDFs numbering in the millions - not fun.
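One way to get that deeper queue depth is simply to run several copies in parallel - a rough sketch with rsync and xargs; the directory layout and paths here are only an example:

```bash
# Copy the per-satellite blob directories four at a time instead of one
# sequential stream, to keep more I/O in flight (paths are placeholders).
mkdir -p /new/storagenode/storage/blobs
cd /old/storagenode/storage/blobs
ls -d */ | xargs -I{} -P4 rsync -a "{}" /new/storagenode/storage/blobs/"{}"
```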

1 Like

Create a VHDX and attach it, then copy one big file.

It’s 14 million files or so, which makes them pretty small, but ofc they can always be smaller…
I’ve got a program with like 1.2 million files that takes up only a few hundred MB…
OpenNMS, used for keeping an eye on my network gear and connections - but it’s rather enterprise-level software, so don’t expect it to be consumer friendly.

I don’t think anything gets worse with large queue depths, because they let the system predict where it needs to go, so it can start to optimize using NCQ and such.

I’m writing from 3x raidz1 of 3 disks each to 3 disks in one span… yes, I know… not recommended and it’s a little risky… but I’m sure it will be fine… as long as this doesn’t take, like… ages… I’m predicting 9 days and hopefully I’ll be fine for those 9 days.

Less than an hour until my scrub is done, then… oh, I just realized why I cannot go past 300-400 MB/s: the drives I’m writing to are only 3, so 3x the speed of one… which is maybe 300+ at best…

DOH

Because I’m reading from 3x raidz1 vdevs I get 3 times the IOPS,
even with the storagenode running… IOPS being the limitation for my migration.
Max transfer speed I saw was 42 MB/s and minimum was 18 MB/s.

@serger001
I like the idea, but wouldn’t that just make you pay the cost all the time instead of only during a migration?
Files are split up on storage for a reason, so the system can better manage them and access them more easily, saving on caching, RAM and loading times…

Sure, in theory it should go much faster with 1 big file… maybe I’ll do my next storagenode as one of those… could be very useful.
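For the record, on Linux the “one big file” idea could look something like this - purely a hypothetical sketch with made-up paths and sizes, not something I’ve tested for a production node:

```bash
# Create a sparse image file, put a filesystem in it, and mount it via a loop device.
truncate -s 4T /tank/storagenode.img
mkfs.ext4 -F /tank/storagenode.img
mkdir -p /mnt/storagenode
mount -o loop /tank/storagenode.img /mnt/storagenode
# A later migration then means copying /tank/storagenode.img as one big sequential file.
```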

In my new setup I will have a 6-drive pool of mirrors, which would give me 3 times the IOPS I currently have - that’s reads, ofc… mirrors basically get read like single drives, which is basically optimal, and then write is ofc ½ because one writes to both…
But that’s still like twice or more what even a moderately sized raidz gets, and in the case of a 6- or 8-drive raidz it’s 6-8 times the IOPS… ofc write is ONLY 4 times… but you get the idea…

That would mean the 6 drives I will use for my mirror pool have… let’s call it 300% of a single HDD’s write IOPS, while my 2x raidz1 of 4 drives each (8 drives total) has 200% because it’s 2x raidz1… so it’s 6 drives in mirrors vs 8 in raidz1s, both arranged to maximize the IOPS they can get… and the mirrors win by 50% better IOPS, which is a lot when one is talking about copy speeds…
when IOPS-bound.

So that should be interesting…
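For reference, the two layouts I’m comparing would be created roughly like this - device names are placeholders:

```bash
# 6 drives as 3 x 2-way mirrors: 3 vdevs worth of IOPS.
zpool create mirrorpool mirror sda sdb mirror sdc sdd mirror sde sdf

# 8 drives as 2 x 4-drive raidz1: only 2 vdevs worth of IOPS.
zpool create raidzpool raidz1 sdg sdh sdi sdj raidz1 sdk sdl sdm sdn
```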

1 Like

Interesting idea, but I doubt it could be copied while the virtualisation is running. And copying a 14 TB file could be time-consuming too, even when it is one file.
By my rough calculation, 14 TB at a speed of 300 MB/s would take around 13 hours.
So the node would have to be offline for that amount of time.
Doing it with sync software takes many days to copy, but the node is online the whole time until the final “switch” (which would probably take about 5-15 minutes if you carefully check that the new node is running).

1 Like