Which didn’t work for quite a while; rumor has it they had to create BFs to fix that over-run of a problem. (all conspiracy theory like in attitude n’ stuff) lol.
OK OK…- that’s so like last-year vibe n’ stuff.
.75 cents,
Julio
I was ABOUT to write that I hadn’t seen the bump in traffic. But lo and behold, I look at my dashboards and there was a big spike on May 8, and after that ingress has been very roughly double what it was previously. So never mind I guess
If the TTL is 7 days, then we should start seeing the deletions hit roughly today or tomorrow?
I started TTL monitoring only a few days ago, so I don’t know when the 7-day TTL uploads began. I was curious why nodes show very little growth while traffic is high.
However, there is also a significant part of non-TTL data deleted within one week or less. My guess is 80 to 90% of overall uploads have a lifespan of only a week or less.
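To put rough numbers on why growth stays flat despite high traffic (purely illustrative figures, not measurements from my nodes):

```go
package main

import "fmt"

func main() {
	dailyIngressGB := 1000.0 // hypothetical: 1 TB/day of ingress
	shortLivedShare := 0.85  // guess from above: 80-90% gone within a week

	// Once the 7-day deletes catch up with the uploads, net growth is
	// roughly the ingress times the share of data that outlives the TTL.
	netGrowthGB := dailyIngressGB * (1 - shortLivedShare)
	fmt.Printf("~%.0f GB/day net growth despite %.0f GB/day of ingress\n",
		netGrowthGB, dailyIngressGB)
}
```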
What stands out for me is very high repair egress - 12 times higher than repair ingress.
Somehow I have data that most people don’t
I meant to ask that for a long time. Why is repair egress so huge all the time? It’s from a third to a half of all egress traffic. That can’t be normal.
Is there a lot of node churn?
Regarding TTL, does anyone know why a TTL like this is needed? 9999-12-31_23.dat
it will never expire
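If those .dat files are hourly expiration buckets named after the expiry date and hour (my assumption, I haven’t checked the code), then a “no expiration” sentinel set to the maximum date lands in exactly that file name:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// A piece with "no expiration" gets the maximum representable date,
	// so it ends up in the last possible hourly bucket.
	noExpiry := time.Date(9999, time.December, 31, 23, 0, 0, 0, time.UTC)
	fmt.Println(noExpiry.Format("2006-01-02_15") + ".dat") // 9999-12-31_23.dat
}
```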
You’re confusing TTL objects with deleted objects - they behave differently. TTL objects will be removed automatically by the nodes and by the satellite’s database backend, without moving to the trash and without involving a BF and GC.
The deleted objects are collected by BF+GC. There is an unknown bug which periodically moves the wrong objects to the trash; the developers are investigating this problem and will eventually fix it.
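A rough sketch of the difference in made-up Go (not Storj’s actual code, just the two paths described above; the piece and filter types here are hypothetical stand-ins):

```go
package main

import (
	"fmt"
	"time"
)

// piece is a hypothetical stand-in for a stored piece.
type piece struct {
	id        string
	expiresAt time.Time // zero value means "no TTL"
}

// bloomFilter stands in for the satellite-provided filter; a real bloom
// filter can have false positives but never false negatives, so a plain
// map is enough for this sketch.
type bloomFilter struct{ known map[string]bool }

func (b bloomFilter) contains(id string) bool { return b.known[id] }

func main() {
	now := time.Now()
	pieces := []piece{
		{id: "ttl-piece", expiresAt: now.Add(-time.Hour)}, // expired TTL
		{id: "kept-piece"},                                // no TTL, still referenced
		{id: "deleted-piece"},                             // no TTL, deleted on the satellite
	}
	bf := bloomFilter{known: map[string]bool{"kept-piece": true}}

	for _, p := range pieces {
		switch {
		case !p.expiresAt.IsZero() && p.expiresAt.Before(now):
			// TTL collector: expired pieces are removed directly,
			// without the trash and without waiting for a bloom filter.
			fmt.Println("delete immediately:", p.id)
		case !bf.contains(p.id):
			// Garbage collection: pieces missing from the bloom filter
			// are moved to the trash first and purged later.
			fmt.Println("move to trash:", p.id)
		default:
			fmt.Println("keep:", p.id)
		}
	}
}
```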
I believe it’s part of the restore-from-trash problem. Repairs are trying to restore objects which were probably moved to the trash by the garbage collector.
Still rather disappointed that nobody seems to have confirmed the 24-hour cyclical 50 Gbit/s TTL data happening with this new client. It’s really hard to ignore stats like this. However, I am enjoying the vigorous on-again, off-again testing they’re doing, and the subsequent deletes. It’s nice to see the network get a workout that’s not necessarily test data (the precise nature of which I haven’t bothered to check into/verify as yet.)
On a network averaging 6-8 Gbps max throughput daily, you’d think others would have quantified this personally and provided more feedback. Are my nodes so good that I’m affected every time they turn their 50 Gbps spigot/link on and off (as far as I can see they’re adding 40+ Gbps to Storj’s network at times), and then subsequently delete/TTL all their stuff?
Hurrmmm, I am probably the one who’s insane.
Maybe it’s Vivint/Storj regurgitating their overflow of Select based non-essential recovery shards - that’d be cool. On the upside, they have finally been pulling some substantive and sustained egress/download data back again today.
4 cents,
Julio
Something something… gift horse… something something… mouth.
I see rather large spikes as well. I peaked at 800 Mbps ingress a few weeks ago, around May 8 when this thread was created. Am I disappointed? No: this is what I signed up for. Do I have performance issues? No: they were all ironed out during the stress testing last spring. I am pleasantly surprised that I found out about these incredible spikes - not from burning equipment at home, but from startled users in here, and then confirmed what I read on the forum by reading the logs from my machines.
Would I like all of the ingested data to be indefinitely sticky? Sure, I would love that, but I see these performance tests as yet another pair of steps on the endless stair towards Storj’s continued success. Maybe this will produce a whitepaper in the end, showcasing an exotic use case enabled by a robust network? Maybe the performance findings from these ingests can be used by Storj behind closed doors to aid in onboarding new customers? Who knows, but the fact alone that nothing is burning and that the network is taking a beating without breaking a sweat is a huge success to me.
8 cents
From time to time it also gives good egress, so someone is downloading this data; it looks more like a file-sharing service with a TTL on the data.
Maybe it’s an AI machine dreaming…
I’m not seeing large spikes on a daily basis, at least not that shows up on my router. So I guess I’m… left out.
They are not for everyone, only for The Chosen Ones!
Personally I’m sharing 24 TB of storage over 5 different nodes on an RPi-4 (4GB), and my memory usage never peaks to the point of making it crash.
I have 1x WD Elements 16 TB, 3x 2 TB Seagate, and 1x 4 TB Seagate Surveillance attached. The max RAM usage barely reaches 2 GB out of the 4 available during the most aggressive I/O peaks. The HDDs are attached through SATA-to-USB 3.0 cables but, even so, still provide a satisfactory read/write rate, and I’m really happy with my current setup.
I’ll be adding the last 4 TB HDD to this Pi within the next week to complete the setup; then I will let this baby do its job and purchase a new RPi-5 to start over with new nodes
Still plenty of potatoes to fry here, letsgoooo
Okay, I have seen the occasional ingress spikes now: bursts where, aggregated across all nodes, we’re talking about 80 Mb/s of ingress. So that’s interesting.
Last summer’s stress tests took down my nearly identical setup on a daily basis.
If this new traffic has vibes of the big ingress and the 7-day TTL deletes, then it could feel like the stress tests, but I think we’ll be better off and have less crashing because:
And my personal rig should do better because I have a working L2ARC cache on all my drives and am no longer attempting to rsync nodes between disks.