Major Bump to Ingress Bandwidth

AI is starting to get better with hands… but still…

1 Like

Nom nom … break noted as well. Very active on deletes & ingress these last 12 hours … 24hr TTL? Ingress here is up to about 8-9x normal. All the while, disk space used, after deletes, is staying relatively the same. An interesting workout.
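
For anyone curious why used space can stay flat while both ingress and deletes run hot: with a fixed TTL, once the first batch of uploads starts expiring, daily deletes roughly match daily ingress and used space plateaus around rate × TTL. A minimal Go sketch with purely hypothetical numbers (the 100 Mbit/s rate is an assumption, not a measurement from this thread):

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Hypothetical numbers, not measurements from this thread.
	const ingressMbps = 100.0  // assumed sustained ingress in Mbit/s
	const ttl = 24 * time.Hour // the 24h TTL guessed at above

	bytesPerSec := ingressMbps * 1e6 / 8
	plateauTB := bytesPerSec * ttl.Seconds() / 1e12

	// Once data older than the TTL starts expiring, deletes ≈ ingress,
	// so used space levels off around rate * TTL.
	fmt.Printf("steady-state used space ≈ %.2f TB\n", plateauTB)
}
```

So at a steady 100 Mbit/s, a 24h TTL would level off around ~1.1 TB of constantly churning data.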

Edit: I might add that bandwidth is swerving all over the road, haha, that’s experimentally satisfying.

2 cents,
Julio

1 Like

My nodes are full…

2 Likes

I can’t tell. Disk space calculations with dedicated disk are still not working in ver. 1.123.4. This node has around 5.8 TB.

1 Like

My trash is full

2 Likes

Wow, that is quite an amount of ingress :exploding_head:

1 Like

What testing is going on at this time? I don’t think it’s customer data.

1 Like

Trash can’t be full. It doesn’t have limits. :nerd_face:

8 Likes

That messed with my brain for a second. At the OS level I saw all that Mar-1 retrash data getting deleted (so more space)… but the node said a firehose of uploads was coming in (so less space). What the? :stuck_out_tongue_closed_eyes:

1 Like

Funny, the big ingress spikes started at 6:00 your local German time, and that’s 0:00 in Washington, USA. So a new day: Sunday across the whole East Coast, New York, etc. Some backup schedule must have been set up, I guess. Maybe some D.O.G.E. worker found us as a cheaper alternative for some government docs backups :stuck_out_tongue:
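
For anyone double-checking the timezone math, here is a minimal Go sketch; the exact date (Sunday, March 2, 2025) is an assumption just to illustrate the CET → EST offset before US daylight saving starts:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	berlin, err := time.LoadLocation("Europe/Berlin")
	if err != nil {
		panic(err)
	}
	newYork, err := time.LoadLocation("America/New_York")
	if err != nil {
		panic(err)
	}

	// Assumed spike start: 06:00 on Sunday, March 2, 2025 in Germany (CET, UTC+1).
	spike := time.Date(2025, time.March, 2, 6, 0, 0, 0, berlin)

	// Before US daylight saving starts, the East Coast is on EST (UTC-5),
	// so 06:00 CET lands exactly on midnight in Washington / New York.
	fmt.Println(spike.In(newYork)) // 2025-03-02 00:00:00 -0500 EST
}
```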

8 Likes

I’m showing the same, 100+ Mbps ingress… But interestingly, the storjinfo site is showing a loss of about 6 PB??

1 Like

The graph includes data for Storj Select too.

1 Like

I have to fiddle with the storjinfo reports, like trying different durations, because there are so many data dropouts.

For example, the drop Nodemansland just referenced is about the size of the EU satellite’s temporarily missing numbers (so if you switch to 30-day duration reports, the gap is smoothed over and we’re back to 28 TB used).

Edit: Looks like the size just refreshed and fixed itself.

3 Likes

I wonder if this is sticky data or slooowly-sliiiding-then-kaboom-into-trash data?

2 Likes

The pacman closes its mouth more and more each day.

1 Like

Very little data seems sticky these days.

I guess we’ll find out in a week or so :wink:

2 Likes

It seems we are back to normal ingress.

2 Likes

[image: IMG_3457]

3 Likes

That is intentional. Dedicated Disk tells the node not to worry about used space and to stop tracking it. Instead, it just fills the entire disk until there is no free space.
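
Conceptually it looks something like the sketch below: instead of maintaining its own used-space counter, the node just asks the OS how much free space is left and keeps accepting uploads until only a small reserve remains. This is an illustrative, Linux-only sketch with assumed names and a made-up reserve size, not the actual storagenode code:

```go
package main

import (
	"fmt"
	"syscall"
)

// Assumed safety margin: stop accepting uploads when less than ~5 GiB is free.
const reserveBytes = 5 << 30

// freeBytes asks the filesystem (not the node's own accounting) how much
// space is still available at the given path. Linux-only via Statfs.
func freeBytes(path string) (uint64, error) {
	var st syscall.Statfs_t
	if err := syscall.Statfs(path, &st); err != nil {
		return 0, err
	}
	return st.Bavail * uint64(st.Bsize), nil
}

func main() {
	free, err := freeBytes("/mnt/storagenode") // hypothetical mount point
	if err != nil {
		panic(err)
	}
	fmt.Printf("free: %d bytes, accepting uploads: %v\n", free, free > reserveBytes)
}
```

The design point is that the OS already knows how full a dedicated disk is, so there is nothing for the node to reconcile, which is also why a dashboard that relies on the node’s own used-space numbers can look off.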

1 Like

Is it test data? And will we see the TTL thing all over again?

1 Like