As I'm writing these lines, the load average is around 0.4:
top - 07:24:18 up 6 days, 20:26, 1 user, load average: 0.30, 0.43, 0.42
Tasks: 165 total, 1 running, 164 sleeping, 0 stopped, 0 zombie
%Cpu(s): 1.2 us, 0.4 sy, 0.0 ni, 94.4 id, 3.9 wa, 0.0 hi, 0.1 si, 0.0 st
MiB Mem : 3906.0 total, 423.2 free, 321.6 used, 3161.2 buff/cache
MiB Swap: 0.0 total, 0.0 free, 0.0 used. 3291.8 avail Mem
But things are pretty quiet these days; I've seen the load average go between 1 & 2 on some days. I'm not charting it though, so it may have been higher at times.
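If I ever want to chart it, a minimal sketch like the following would be enough to append the load averages to a file once a minute (the log path is just an example):

# Append the 1/5/15-minute load averages with a timestamp, once a minute
while true; do
    echo "$(date -Is) $(cat /proc/loadavg)" >> /home/pi/loadavg.log
    sleep 60
done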
I added disk 3 roughly one month ago because disks 1 & 2 were full, and finally added disk 4 two weeks ago, even though I know it's not recommended to add more than one node at a time. It simply means they will take longer to get vetted and fill up, and I'm aware of that.
But as the power consumption of that setup is not horrendous, I thought “why not”… The whole setup has been averaging around 26.7 W (ISP box excluded) for the last 15 days, that is ~1.67 W/TB, which sounds fine to me.
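Quick back-of-the-envelope check, counting the four disks as 2 + 1 + 5 + 8 = 16 TB of raw capacity:

# Average power draw per TB of raw capacity
awk 'BEGIN { printf "%.2f W/TB\n", 26.7 / (2 + 1 + 5 + 8) }'
# prints 1.67 W/TB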
So, disks 1 & 2 are full, and disks 3 & 4 are almost empty, as expected:
pi@raspberrypi:~ $ df -H
Filesystem Size Used Avail Use% Mounted on
/dev/root 16G 1.9G 13G 13% /
[...]
/dev/mmcblk0p1 265M 55M 210M 21% /boot
/dev/sda1 2.0T 1.9T 141G 93% /.../storj/mounts/disk_1
/dev/sdc1 984G 903G 81G 92% /.../storj/mounts/disk_2
/dev/sdb1 5.0T 423G 4.3T 9% /.../storj/mounts/disk_3
/dev/sdd1 8.0T 69G 7.9T 1% /.../storj/mounts/disk_4
Notes:
- None of the newest nodes are vetted yet (their progress varies from ~25% to ~75%).
- All disks are SMR (boooo), except for disk 2 (yaaay), which holds the rotated logs for all other nodes.
- All disks are 2.5", except for disk 4, which is a standard 3.5".
- There is another reason that pushed me to plug disk 4 in early: I had issues with this disk in the past, so I moved it to a new enclosure, and the new node it holds is kind of a guinea pig to test the disk and make sure that, as I suspect, the problems I faced in the past were caused by the enclosure. It's too early to tell, but so far so good, and all tests (crystaldisk, badsectors, …) passed (see the sketch after these notes).
That's also the disk that originally lost 5% of the files of another of my nodes, which is currently running on disk 3 (see topic Storage node to Satellite : I've lost this block, sorry - #70 by Pac - surprisingly, it seems like that node will survive).
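For reference, the equivalent checks run from the Pi itself would look something like this (just a sketch - /dev/sdd is only an example device, and a full read-only scan takes a long time on drives this big):

# Non-destructive, read-only surface scan of the whole disk
sudo badblocks -sv /dev/sdd
# SMART health report (smartctl comes from the smartmontools package)
sudo smartctl -a /dev/sdd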