Updates on Test Data

Indeed. If I understand correctly, the satellite now penalizes you for each lost race with less traffic, regardless of the origin of that traffic. So if you operate a node in, let's say, South America, and you fail too many connections from Europe, you will get fewer uploads not just from Europe but from South America as well.
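To make that understanding concrete, here is an illustrative sketch only (not Storj's actual implementation, and all names are hypothetical): if the satellite keeps a single global success score per node, then failures from any region drag down the one score used for selection everywhere.

```python
# Hypothetical sketch of the understanding described above: one global
# per-node success ratio, applied regardless of where the traffic came from.
class NodeScore:
    def __init__(self):
        self.attempts = 0
        self.successes = 0

    def record(self, success: bool):
        self.attempts += 1
        self.successes += success  # bool counts as 0/1

    @property
    def selection_weight(self):
        # One global ratio: failed races against European clients also lower
        # the weight used when picking this node for South American uploads.
        return self.successes / self.attempts if self.attempts else 1.0

node = NodeScore()
for _ in range(80):
    node.record(True)    # won races from nearby clients
for _ in range(20):
    node.record(False)   # lost races from far-away clients
print(node.selection_weight)  # 0.8, applied to traffic from every region
```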

3 Likes

24 hour stats : 11.874.428.995.958 Bytes received - shared among 105 nodes and 37 subnets.
Success rate was 91.006% (11.884.167/13.058.712 pieces)
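For anyone curious how a success rate like the one above falls out of the raw counts, a minimal sketch (using the figures from this post):

```python
# Success rate is simply pieces won over total upload attempts.
successful = 11_884_167  # pieces received (from the stats above)
total      = 13_058_712  # total pieces attempted
rate = successful / total * 100
print(f"Success rate was {rate:.3f}%")  # prints: Success rate was 91.006%
```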

Th3Van.dk

3 Likes

Thanks for reporting your stats. I hadn’t looked at your overview page in a while: are you still down around 400TB since the bulk deletes started this month? You’ve lost more than most of us even received!

Date          Available hdd space            Used hdd space                  Free hdd space    
-----------------------------------------------------------------------------------------------
2024-05-14 - 1.986.332.595.376.128   1.508.894.220.869.632 (75.9638 %)    477.438.374.506.496 
2024-05-18 - 1.986.332.595.376.128   1.283.717.187.338.240 (64.6275 %)    702.615.408.037.888 
2024-05-26 - 1.986.332.595.376.128   1.063.635.606.560.768 (53.5477 %)    922.696.988.815.360 
2024-05-27 - 1.986.332.595.376.128     960.995.384.373.248 (48.3804 %)  1.025.337.211.002.880 
2024-05-29 - 1.986.332.595.376.128     883.428.500.992.000 (44.4754 %)  1.102.904.094.384.128 
2024-06-02 - 1.986.332.595.376.128     801.556.970.901.504 (40.3536 %)  1.184.775.624.474.624
2024-06-02T00:22:38+02:00       INFO    pieces:trash    emptying trash finished {"Process": "storagenode", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "elapsed": "275h19m2.901525464s"}

Th3Van.dk

1 Like

This means that your node loses races for pieces: your location seems too far from the libuplink that uploads the data (not from the satellite; the satellite doesn’t matter much, because it doesn’t deliver this data).

Is that the regular cleaning you are talking about, or some special cleanout planned for this weekend to get rid of the mess caused by the different levels of mixing between the old and new trash folder organization?

The big GC was 7–10 days ago… now it’s time for deleting.

2 Likes

Well, I can’t wait for that to happen. I have 1TB of garbage on a full node and I can’t wait for it to go. It’s making me miss out on all the testing fun…

I am talking about what will happen on all other nodes except yours. I am unable to help your node. You might want to start graceful exit to save us both some trouble.

It’s a TTL, not a specific “doomsday”.
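In other words, each piece carries its own expiration timestamp, so deletions trickle in piece by piece as individual TTLs lapse rather than landing all at once. A minimal sketch (the piece records here are hypothetical, just to illustrate the idea):

```python
from datetime import datetime, timedelta, timezone

# Each piece expires at its own time; a cleanup pass only removes
# the pieces whose individual TTL has already passed.
now = datetime(2024, 6, 2, tzinfo=timezone.utc)
pieces = [
    {"id": "piece-a", "expires": now - timedelta(hours=3)},    # expired
    {"id": "piece-b", "expires": now + timedelta(days=1)},     # still alive
    {"id": "piece-c", "expires": now - timedelta(minutes=5)},  # expired
]
expired = [p["id"] for p in pieces if p["expires"] <= now]
print(expired)  # ['piece-a', 'piece-c']
```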

wheeeee…
[image]

1 Like

Mine is bigger!
:face_with_peeking_eye: :grin:

1 Like

You show us the finger? :rofl:

Take 2! :stuck_out_tongue_winking_eye:

The node is less than a week old.
But the stats are misleading; in fact, it has received very little. The router shows minimal speed.

The photo shows the node ID for analysis.
( 12eKHcNoNFHfUAcqiBST8JiJBCVcgQ5y48qCLcz2Ahfkydq8k5b )

The channel load looks something like this:

1 Like

@littleskunk, after the upload tests, did you make any analysis of piece distribution? Because if you choose only the fastest nodes, all pieces could end up within a 100–300 km diameter, on the same provider, or something similar. That would be very dangerous for file safety.

2 Likes

The power of 2 node selection isn’t that biased. I am not concerned at all.
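For context, "power of two choices" selection picks two candidate nodes at random and keeps the better one, which only mildly favors faster nodes instead of concentrating everything on them. A minimal sketch, with a made-up load metric standing in for whatever the satellite actually measures:

```python
import random

# "Power of two choices": sample two candidates uniformly at random,
# keep the better one (here: the less-loaded node). Because both picks
# are uniform, every node still gets selected regularly, so placement
# is only mildly biased toward the best nodes.
def choose_node(nodes, load):
    a, b = random.sample(nodes, 2)           # two independent uniform picks
    return a if load[a] <= load[b] else b    # keep the better candidate
```

Over many selections this spreads pieces far more evenly than a "pick the global fastest" rule would, which is why the bias toward any one region or provider stays small.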

2 Likes

All I can say at this point is the deals continue to be promising and some of them are heating up.

16 Likes

:heart_eyes:. Great.

4 Likes

I’m moving some HDDs about to make room for more nodes.

You know, “just in case” :wink:

1 Like