About 9 months ago, I migrated two nodes from smaller disks to larger ones, from roughly 3-5TB to 12TB (allocation set to 10TB), but I have seen basically no growth in disk utilization since then. Is this to be expected? They’re both still sitting at around 3.5TB each, no other issues, near 100% uptime. The only errors I can find in the redonculous log, which is otherwise filled with successful download entries, pertain to satellite connection timeouts and a few failed downloads/deletes. What am I missing?
Disks are statically mounted via /etc/fstab in Ubuntu server 20.04 running in a Proxmox VM with physical disk passthrough.
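For context, the setup is roughly this (VM ID, disk ID, UUID and mount point below are placeholders, not my actual values):

```
# Proxmox host: pass the whole physical disk through to the VM
qm set 101 -scsi1 /dev/disk/by-id/ata-EXAMPLE_DISK_SERIAL

# inside the Ubuntu guest: static mount in /etc/fstab
UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx  /mnt/storagenode  ext4  defaults  0  2
```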
Hi @alfananuq
If there are no errors in the logs (other than the odd download or delete fail) and the uptime is near 100% then there’s very little else you can do.
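If you want to double-check, you can filter the log for error-level lines; a minimal sketch, assuming the docker setup with the default container name storagenode:

```
# merge stderr into stdout so grep sees all log lines
docker logs storagenode 2>&1 | grep -E "ERROR|FATAL"

# or just scan the most recent entries
docker logs --tail 500 storagenode 2>&1 | grep ERROR
```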
On the dashboard of each node, does it show the available space?
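For the docker version you can also pull it up from the CLI (again assuming the default container name):

```
docker exec -it storagenode /app/dashboard.sh
```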
Even if there’s nothing to be done about it, can anyone offer an explanation as to why I’m seeing this behavior? Has anyone else seen this after increasing a node’s storage capacity? I’m wondering if there’s anything to gain by starting a new node instead. The nodes grew to max size pretty quickly when I first set them up, so seeing zero growth is unexpected to me.
I guess my misplaced trust in Watchtower did me in, haha.
Interestingly, I stopped, deleted, pulled the latest image and started the nodes, but they installed different versions… The first one installed v1.54.2 and the second one is now on v1.53.1. I think docker is just trolling me at this point… I’ll see if I can straighten this out. Thanks @Arkina!
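For reference, the sequence I ran was roughly the standard one; wallet, address and paths below are placeholders, not my real values:

```
docker stop -t 300 storagenode
docker rm storagenode
docker pull storjlabs/storagenode:latest

# full run command again (all values are placeholders)
docker run -d --restart unless-stopped --stop-timeout 300 \
    -p 28967:28967/tcp -p 28967:28967/udp -p 14002:14002 \
    -e WALLET="0x..." \
    -e EMAIL="user@example.com" \
    -e ADDRESS="my.ddns.example.com:28967" \
    -e STORAGE="10TB" \
    --mount type=bind,source=/mnt/storagenode/identity,destination=/app/identity \
    --mount type=bind,source=/mnt/storagenode/data,destination=/app/config \
    --name storagenode storjlabs/storagenode:latest
```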
Yeah, I have no idea why my nodes are running different versions since they both use the latest image. I updated watchtower as well; we’ll see if it catches it in the next couple of days.
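The watchtower update itself was just a re-run of roughly the usual command (container names storagenode and watchtower assumed):

```
docker stop watchtower
docker rm watchtower
docker pull containrrr/watchtower
docker run -d --restart=always --name watchtower \
    -v /var/run/docker.sock:/var/run/docker.sock \
    containrrr/watchtower storagenode watchtower --stop-timeout 300s
```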
And hopefully I’ll start seeing some ingress again; I’ll report back!
The docker image switched to a new update mechanism recently…
Pulling the image won’t change much anymore, since the storagenode version isn’t controlled by the docker image itself now… so most likely there was some sort of issue during the switchover to the new setup.
This is also why you can end up with multiple storagenode versions.
Having different versions during a rollout is a normal state. The rollout happens in waves; which node updates first and which next depends on its NodeID.
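If you’re curious where the rollout stands, the version server that drives those waves can be queried directly (the exact JSON layout may vary):

```
# suggested/minimum versions and the rollout cursor for each process
curl -s https://version.storj.io | jq .
```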
Yeah, it’s pretty awesome to see that one cannot force-change the node versions anymore.
Had a weird issue, but I think it was unrelated: a docker install of mine broke, no clue why…
So I did a ton of stuff until I figured out the storage driver was wrong.
Fixed that and got everything back online, but then later noticed that my node versions had gone back to 1.49. I doubt that is really related to Storj, but it did kill my ingress…
I think maybe I somehow rolled my docker images back to an older version; that can happen when one switches between different storage drivers.
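For anyone hitting the same thing, checking and pinning the driver looks roughly like this (overlay2 is just the driver I ended up on):

```
# which storage driver is docker currently using?
docker info --format '{{.Driver}}'

# to pin it explicitly, set it in /etc/docker/daemon.json, e.g.
#   { "storage-driver": "overlay2" }
# then restart docker:
sudo systemctl restart docker
```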
Did docker rm and ran the full run command, and they all snapped back to the version I’m sure they were on before I wrecked my docker… I guess that’s what I get for running docker on ext4 lol, and randomly rebooting a bit too often.
Anyway, I feel it was a bit lucky that it wasn’t a completely antique storagenode version, as that could maybe have gotten the node DQed?
Might be time to get rid of old images with a docker system prune. Though please be aware that will clean up everything that isn’t linked to a running container: stopped containers, unused networks, dangling images and build cache, plus volumes if you pass --volumes.
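For example, the first form below is the safer one if all you want is to reclaim image space:

```
# remove only unused images; leaves stopped containers and volumes alone
docker image prune -a

# full cleanup: stopped containers, unused networks, dangling images, build cache
docker system prune
```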