Monthly updated new node report

Update: First 100GB in 100 hours!

So I changed the Docker settings to log to a file. Somehow Docker did not have permission to write that file in the user's home directory, which seems strange to me, because I start Docker with sudo? Anyway, I changed the file permissions with chmod +x and chmod 777.
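For reference, one way around the permission issue without resorting to chmod 777 is to let the log land in the already-mounted config directory via the log.output parameter. This is only a sketch; the paths, wallet, and addresses are placeholders, not my actual setup:

# Point the node's log at a file inside /app/config, which is already a bind
# mount the container is allowed to write to:
docker run -d --name storagenode --restart unless-stopped \
    -p 28967:28967/tcp -p 28967:28967/udp \
    -e WALLET="0x..." -e EMAIL="me@example.com" \
    -e ADDRESS="mynode.example.com:28967" -e STORAGE="10TB" \
    --mount type=bind,source=/mnt/storj/identity,destination=/app/identity \
    --mount type=bind,source=/mnt/storj/config,destination=/app/config \
    storjlabs/storagenode:latest \
    --log.output=/app/config/node.log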

Anyway, I was offline for what felt like 15 minutes, and now my score for eu1 is 90% :grimacing:
Well, that is bad luck but fortunately won’t matter in a few days.

So the node is still growing at roughly 1GB per hour. It will be interesting to watch how that changes, or doesn't.


Success rate script

========== AUDIT ============== 
Critically failed:     0 
Critical Fail Rate:    0.000%
Recoverable failed:    0 
Recoverable Fail Rate: 0.000%
Successful:            15 
Success Rate:          100.000%
========== DOWNLOAD =========== 
Failed:                69 
Fail Rate:             0.484%
Canceled:              134 
Cancel Rate:           0.939%
Successful:            14062 
Success Rate:          98.577%
========== UPLOAD ============= 
Rejected:              0 
Acceptance Rate:       100.000%
---------- accepted ----------- 
Failed:                19 
Fail Rate:             0.034%
Canceled:              78 
Cancel Rate:           0.139%
Successful:            56194 
Success Rate:          99.828%
========== REPAIR DOWNLOAD ==== 
Failed:                0 
Fail Rate:             0.000%
Canceled:              0 
Cancel Rate:           0.000%
Successful:            1 
Success Rate:          100.000%
========== REPAIR UPLOAD ====== 
Failed:                0 
Fail Rate:             0.000%
Canceled:              0 
Cancel Rate:           0.000%
Successful:            7029 
Success Rate:          100.000%
========== DELETE ============= 
Failed:                0 
Fail Rate:             0.000%
Successful:            253 
Success Rate:          100.000%
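As far as I can tell, the percentages are just each count divided by the block's total. A quick sanity check against the DOWNLOAD numbers above:

echo "scale=4; 14062 * 100 / (14062 + 69 + 134)" | bc
# => 98.5769, i.e. the 98.577% success rate shown above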

Strange, I still see this in the logs:
2023-10-29T19:06:28Z INFO failed to sufficiently increase send buffer size (was: 208 kiB, wanted: 2048 kiB, got: 416 kiB). See https://git...

I'll look into it when I have more time. The dashboard thinks QUIC is OK.

On Linux, it requires some tweaks to avoid this warning:
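Roughly like this (a sketch, not a verified config; the value matches what is discussed further down in this thread):

# Raise the UDP receive buffer limit and keep it across reboots:
echo 'net.core.rmem_max=2500000' | sudo tee /etc/sysctl.d/udp_buffer.conf
# (newer QUIC stacks may also want net.core.wmem_max raised for the send buffer)
sudo sysctl --system    # apply without rebooting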


I did.
Here is my output:

trashDebian:~$ cat /etc/sysctl.d/udp_buffer.conf
net.core.rmem_max=2500000

Am I missing something?

What’s reported by

sysctl net.core.rmem_max

net.core.rmem_max = 2500000

Then I suppose everything is set; now you can re-create the container, and it shouldn't complain about buffers anymore.
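(Re-creating the container just means something like the following; the container name and the run command are whatever you already use:)

docker stop -t 300 storagenode    # give the node time to shut down cleanly
docker rm storagenode
docker run -d --name storagenode ...    # your usual run command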


To-do list:

  • Create stats for how many connections are IPv6
  • Data usage mismatch

This update comes a little bit late because I got a timeout from the mods.
Anyway, the first 33 days are over and the node is currently at 1.07TB!
More than expected!
My online score also healed back to 99.17%.

Now I have to find out why the node shows 1.1TB (with trash) while TrueNAS only shows 1018GB and a compression ratio of 3.86.

Looks normal to me. It's a x1000 vs. x1024 conversion difference, as far as I know.
My node looks similar: 1.64 on disk, 1.73 on the node dashboard.
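A quick back-of-the-envelope check with the numbers from this thread (assuming the dashboard reports decimal TB and TrueNAS reports binary GiB):

echo "scale=0; 1.1 * 10^12 / 1024^3" | bc
# => 1024 (GiB), which is in the same ballpark as the ~1018GB TrueNAS shows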

You are right, but a compression ratio of 3.6? That can't be true.

3.86% maybe? Otherwise it's unrealistic (imho).

It can, if you have a lot of small and/or sparse files. Try an experiment: create a dataset, copy 1k of data from /dev/urandom to a file, and then check the file size, apparent size, and dataset compression ratio (after a scrub).
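A sketch of that experiment (the pool name tank and the dataset/file names are just placeholders):

zfs create tank/comptest
dd if=/dev/urandom of=/tank/comptest/small.bin bs=1k count=1
sync
ls -l /tank/comptest/small.bin     # apparent size: 1024 bytes
du -h /tank/comptest/small.bin     # space actually allocated on disk
zfs get compressratio,recordsize tank/comptest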

For reference, one of my nodes has a compression ratio of 1.52x, and the newest one 1.33x. Databases compress even better: 2.72x.

No, it shows 3.6.

Which seems strange. If that were true, the 1.1TB from STORJ would use 3.92TB because of RAIDZ2 padding and parity, but be compressible back to 1.1TB.

Not sure if the special vdev is also to blame here, but it only has 10GB occupied, so…

Day 70: we are at 3.55TB! December had insane ingress!


A little over xTB/node of ingress for me. The fact that no GC was run for the last week and no deletes to trash happened adds to the normal ingress, and we will see a drop when deletes start. But if xTB/month becomes the norm, it will fill up my nodes in 2 months and new big drives will show up at my door. :sunglasses:
The first 22TB Exos is up and running.


Day 91: 4.7TB

I still have no idea why TrueNAS shows 4.8TB and a compression ratio of 4.54.

It could be the difference between base 10 and base 2 units, plus rounding.

I'm not concerned about the difference in size; I find it strange that TrueNAS shows a compression ratio of 4.54, even though we know that STORJ data is not compressible.