Bandwidth utilization comparison thread

From what I remember, the test satellites are stefan-benten, saltlake and europe-north.
But yes, we can’t know for sure where all this data comes from; one thing is sure, though: I’m glad it’s here !!

3 Likes

8.75 TB left of my migration… so about 33% already in barely a couple of days… and i just added the SLOG yesterday, which seemed to greatly speed it up…

then no more migrations for me anytime soon… ofc that’s not easy to know… but should be able to go to a 33TB node size then… maybe a bit more… not totally sure on the exact final size… will be interesting to see if the deleted % will eventually slow it down so much that it will stop growing…

when it gets to that size… well… next year maybe, with a bit of luck

haven’t really found the test satellite info very interesting, i don’t really see how it’s useful for anything…
so didn’t really pay any attention to it… but sounds familiar… and would sort of make sense… but no real clue here lol… and still don’t see how we can use it for anything…

i would say tardigrade media coverage is most likely to be able to predict how much data we will get… test data we cannot predict… tho storj labs might tell us from time to time… but really we can’t track it anyway; they could basically just throw near-random numbers at us and we wouldn’t be the wiser… because we cannot really know most of this stuff unless all basic node data is collected and tracked some place where we could see it.

2 Likes

The biggest repair did seem to come from europe-north. I also noticed in the last 2 days that a lot of data got downloaded from the europe-west satellite; let’s hope this trend continues.


2 Likes

“a lot” being less than 5% of the max we’ve seen this year…
oh ingress, how far you have fallen lol

2 Likes

man those graphs look nice

“a lot” seemed like an overstatement indeed; still, it’s a nice increase, and for me more than 40GB of upload a day is nice.
@TheMightyGreek those graphs are Grafana using data from Prometheus combined with node exporter; search the forums for more info.

1 Like

and look to windward!

(edit: Consider Phlebas)

What is “slog”? Is it something ZFS-related? Can you explain that, if you don’t mind?

NODE1(626GB/10TB) vnstat (tx is rx & rx is tx)


I hope this increase in ingress continues.
The biggest ingress comes from:
1. europe-north
2. us-central
3. saltlake

yeah, with just 600GB and plans to get to 10TB, it would take a lot of time with the ingress we had lately. Fortunately it picked up a little bit, so now the time to fill it up will be shorter.
Now ingress is, i think, about 25-30GB/day, so 10TB will take… ~400 days? :upside_down_face: or maybe ingress will pick up even more (we had some days with 100GB/day)
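to double check that number, a quick back-of-the-envelope in the shell (10TB target, the 626GB already stored, and the 25-30GB/day figure from above; plain integer division, so very rough):

```
echo $(( (10000 - 626) / 25 ))   # → 374 days at 25GB/day
echo $(( (10000 - 626) / 30 ))   # → 312 days at 30GB/day
```

so roughly a year either way, as long as ingress stays in that range.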
To all the great ingress days !! :rofl:
Cheers!

2 Likes

People shouldn’t forget that the more ingress we get, the more interesting it might seem to new SNOs. Luring in new SNOs means the ingress gets divided over more people, so ingress goes down per user. This means that we will probably never see things like 200GB/day. It’s a self-balancing system i guess. (this is purely my thought about it, i have nothing to back up this claim)

3 Likes

I know that storj is a waiting game, so patience is the key. I’m also using almost half of the drive for storing other non-critical stuff, so I can remove that once storj needs the space.
For now I’m happy with my node paying its own electricity cost :sweat_smile:

1 Like

Yes, SLOG is a ZFS component that is tied to hardware. Specifically, adding a “SLOG device” is the practice of adding a fast, power-loss-protected SSD to the system and dedicating it to caching writes from RAM before they’re written to slower media. This is useful if you have a lot of writes going on with high enough queue depths that grouping the writes together is more beneficial for the slower media, and/or the amount of writes is greater than can be held in RAM before the pool’s media can catch up. You need to have a use case for this though: SGC’s currently is to assist with his migration, but after that is finished he will see less of a purpose in having a dedicated SLOG device rather than the writes being co-located on the pool’s media.
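For anyone curious how that looks in practice, it’s a one-liner; just a sketch, the pool name `tank` and the device paths are placeholders, not a recommendation of a specific drive:

```
# attach a dedicated log device (SLOG) to an existing pool
zpool add tank log /dev/disk/by-id/nvme-EXAMPLE
# or mirror it across two SSDs if you want redundancy for in-flight sync writes
zpool add tank log mirror /dev/disk/by-id/nvme-EXAMPLE-1 /dev/disk/by-id/nvme-EXAMPLE-2
```

after that, `zpool status tank` lists the device under a separate `logs` section.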

Edit: yes, I “ZFS”. …maybe a bit too much.

1 Like

hmm, this is not entirely true. The SLOG is only used for synchronous writes, which have to be committed to stable storage before they can be acknowledged, instead of just being cached in RAM. Async writes don’t go through the SLOG and only live in RAM until they are flushed/written to the disk.

(Therefore a SLOG would not speed up a migration, as that should be an async operation, unless SGC, as so often, decided to make it a sync-only dataset, which increases reliability a bit but is typically unnecessary.)

In my case I don’t force sync operations on any dataset, so my SLOG is only used for operations that are sync by default, like databases. So my SLOG is being used for the node’s databases but nothing else.
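If you want to see where sync is actually being forced (the pool/dataset names here are only examples), something like this shows it per dataset:

```
# list the sync setting for every dataset in the pool and where it comes from
zfs get -r -o name,value,source sync tank
# keep the default: only honour explicit sync requests (e.g. database fsyncs)
zfs set sync=standard tank/storagenode
```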

1 Like

A SLOG is ZFS’s version of a dedicated write cache. ZFS per default will use what it calls a ZIL (ZFS Intent Log), which to my understanding is a dedicated part of the HDD or pool which ZFS will initially write data to, because it’s faster or something… then after that it will write the data again to the location it was supposed to write it to…

this basically doubles the workload required for writing to the drive or pool; a separate log (SLOG) device moves these ZIL writes to a dedicated device and thus improves speed greatly…
not really sure why it works like that in the first place… zfs is very advanced… and in truth there is a fair bit of extra speed gained from using the faster parts of a HDD.

[HDD mechanics explanation]
HDD speed changes from the outer parts of the platters to the inner parts because the RPM is fixed… and thus the closer the read head gets to the center of the platter, the less platter surface moves underneath it in the same time… because geometry :smiley:
Thus writing in the outer parts of the platter gives you like a 30-40% increase in write and read speeds, which is why people would under-partition HDDs for performance.
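rough numbers behind that, just to illustrate (the radii are assumed typical values for a 3.5" platter, not measurements):

```
linear speed under the head:  v = 2 * pi * r * RPM
outer vs inner track:         v_outer / v_inner = r_outer / r_inner ≈ 47mm / 23mm ≈ 2
```

so the outermost tracks pass roughly twice as much surface under the head per second as the innermost ones, which is why only using the outer zone keeps you well above the whole-disk average.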

i think that’s why they always use the ZIL, but there may be other reasons; it’s ZFS, there are most likely 10 different reasons why they made it do that…

with a SLOG the task is moved away from the HDD, and stuff like random writes can be made into sequential writes.

most of ZFS is focused on large-scale storage and data integrity; it can be quite unwieldy at times, but it sure is a great file system.

very advanced stuff, i don’t know 50% of all that it does… but something like that.
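if you want to see whether the log device is actually absorbing writes, the per-vdev stats show it (the pool name `tank` is just an example):

```
# per-vdev I/O statistics, refreshed every 5 seconds; the SLOG shows up under "logs"
zpool iostat -v tank 5
```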

1 Like

yeah that was why i didn’t add it the first time around… but 18 days copying stuff was too much for me… the SLOG took writes to the 4-drive raidz1 from 8MB/s to 42MB/s, maybe even more, but it seems to be stable around 42MB/s

hadn’t expected it to do anything really, because i was running sync=standard on everything

the point in using sync=always when the storagenode is active is to reduce fragmentation of the drive… i use the SLOG for that, but that doesn’t mean the data in the SLOG is used for anything… the system will use the data in memory; the SLOG is the backup, but without that backup it cannot send ACKs for writes that count as written to the HDD pool even tho the data is only on the SLOG SSD
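for reference, that’s just one property on the node’s dataset (the `tank/storagenode` name is an example, not my actual layout):

```
# force every write on this dataset through the ZIL/SLOG before it gets ACKed
zfs set sync=always tank/storagenode
# and back to the default when it's not needed
zfs set sync=standard tank/storagenode
```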

it even made the server sound very different :smiley: now it’s quiet… outside of fans ofc… and then it growls every 5 or so seconds… hehe

anyways… zfs is advanced… i duno why it worked… i was just trying anything i thought could even have a remote chance of helping…

i double checked… sync is standard, atime is off, xattr is off and compression is zle.
i could stop the transfer and remove the slog and see what speeds i get… but i’m 98% sure it will just slow to a crawl; right now i’m getting 48MB/s

maybe ill try it just to be 100% sure. the only other thing i changed was rsync -B 131072,
but that was after i had gotten the speed increase i think, and it didn’t seem to do anything really…
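for context, the kind of rsync run i’m talking about looks roughly like this (the source/destination paths are made up, not my actual mount points):

```
# archive-mode copy onto the new pool, 128KiB delta block size, with progress output
rsync -a --progress -B 131072 /mnt/oldpool/storagenode/ /mnt/tank/storagenode/
```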

yeah, checked it… i’m getting so good at this zfs thing :smiley:
without the slog, copying the orders.db runs at 12MB/s, and with the slog it finished at 140MB/s
i ofc stopped the rsync transfer and started it over again… have to do that many times anyways, so a couple more ain’t going to change much :smiley:

100% sure the slog makes my writes to my new raidz1 go much, much faster… in the case of orders.db 11-12 times or whatever, but on avg more like 4-5x, maybe 6 when it’s going good… the files are not uniform in size or whatever… some stuff goes slower and some faster… also maybe simply to do with where they are stored on the drives… since the drives are nearly filled… so the end will be a fair bit slower than the beginning…

i duno… it works… ill take it :smiley:

1 Like

weird… but if it works, it works… but why not use zfs snapshots? should be faster
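in case it helps, a snapshot-based migration would look roughly like this; only a sketch, the `oldpool`/`newpool` dataset names are placeholders:

```
# initial bulk transfer while the node keeps running
zfs snapshot oldpool/storagenode@migrate1
zfs send oldpool/storagenode@migrate1 | zfs recv newpool/storagenode
# stop the node, then send only what changed since the first snapshot
zfs snapshot oldpool/storagenode@migrate2
zfs send -i @migrate1 oldpool/storagenode@migrate2 | zfs recv -F newpool/storagenode
```

the trade-off vs rsync is that it copies the dataset exactly as it is, so there’s no chance to restructure anything on the way over.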

Just curious, have you given that server a break since you put it together? Cause it seems you are giving that server a run for its money.

3 Likes

servers don’t have unions, they don’t get breaks and they are only paid in electricity :smiley: However, working conditions for SGC’s server seem especially bad in comparison: high humidity, high temperature, old age… guess they don’t get retirement either, it’ll have to work until it dies :smiley:
And if they protest against their working conditions, you just replace them with a younger version :rofl:

However, HDDs don’t like bad working conditions that much and you might need to replace them with a younger version sooner than you like if you treat them badly :wink:

6 Likes

That was the perfect explanation. I had thought something similar, but that was very vivid. I have new hardware that doesn’t even work that hard; also, I’m always scared of hardware failing when using it 24/7, so I like to give it at least a week’s break every now and again and give them plenty of cooling so they don’t just quit on me.

1 Like