Multiple Nodes per HDD to lower impact of node disqualification?

Yeah, I’m not using watchtower, so I’ve just been doing the manual process whenever I spot that the latest version is finally available through Docker. (I’ve got my own questions about the availability, but I’ll eventually spawn a separate thread for that discussion.) Based on the available Storj documentation, you first stop and then remove your node before updating the Docker container. So I had no idea that I could pull down the new version first while my nodes were still running, and then stop/remove/restart my nodes when convenient.
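The pull-first flow described above might look roughly like this (the container name `storagenode` and the 300-second stop timeout follow the common pattern from the Storj docs; adjust to your own setup):

```shell
# Pull the new image first -- this does not affect the running node.
docker pull storjlabs/storagenode:latest

# Later, when convenient, recreate the container on the new image:
docker stop -t 300 storagenode   # give the node time to shut down cleanly
docker rm storagenode
# ...then start it again with your usual `docker run` command,
# which will now use the freshly pulled image.
```

The key point is that `docker pull` and the stop/remove/run steps are independent, so the downtime is only the stop/recreate window, not the download time.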

I assume that is exactly what watchtower does. Every now and then it tries to pull a new version. If a new version is found and downloaded, it then stops/removes/restarts the nodes, so they come back up on the new version.

You are right. And why don’t you use it?

Regarding my very first post in this thread.

My node crashed after years of reliable work. I am still wondering why, but I might have a clue. The usable disk space is 7.22 TB (Synology only lets you use 7.22 TB of an 8 TB disk; the difference is used for the RAIDed OS and for the file system itself), and I gave my node 7.2 TB of space.

Since over the years I had realized that the configured size (= 7.2 TB) never matches the actual space a “full” node uses on the disk, I took the risk. :neutral_face:

In the beginning I noticed discrepancies in the disk space estimates.

So, is the 10% overhead still the recommendation today? Is the disk space calculation correct nowadays, so that 7.2 TB in the node settings really means 7.2 TB on the disk?

Tell me, guys: how much overhead do you really plan for and keep free?

Sure, sleep is part of POSIX.

Yep. Nodes will keep running the old version until you recreate them.

But isn’t watchtower supposed to update it automatically?

Thank you.

I’m not using watchtower; I don’t trust it as an update mechanism. Besides, using it is only a recommendation, not a requirement. Not all platforms can use it.

Personally, I keep 10% free for nodes of 2 TB or less, and 200 GB for any node that is larger. Back in the early days there was a bug that caused overusage, and if I had kept only 200 GB free on my 4 TB node, it would have filled up and I would have been in trouble. But the software has come a long way since, so I am taking a calculated risk, and I keep an eye on it. Just make sure you are not mixing terabytes (TB, base 10) and tebibytes (TiB, base 2) when calculating the actual available space.
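The TB vs. TiB point matters more than it looks: an “8 TB” drive (decimal) is only about 7.28 TiB (binary), which is roughly where the 7.22 figure above comes from once OS and filesystem overhead are subtracted. A quick sanity check:

```shell
# Convert 8 TB (8 * 10^12 bytes) into TiB (1 TiB = 2^40 bytes):
awk 'BEGIN { printf "%.2f\n", 8 * 10^12 / 2^40 }'
# prints 7.28
```

So a tool reporting in TiB will always show a smaller number than the drive label, before any overhead is even counted.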

Watchtower does automatically update your containers. It checks at a random interval between 12 and 72 hours, and when a new image is available it pulls the new image, stops the node, removes the container, and restarts it with the same variables as the last run. If you run it with the --cleanup flag, it will also purge old images so that you don’t store them indefinitely.
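For reference, a typical watchtower run command along the lines of what Storj’s docs suggest looks something like this (the container names and the 300 s stop timeout are from memory, so double-check against the current documentation):

```shell
docker run -d --restart=always --name watchtower \
  -v /var/run/docker.sock:/var/run/docker.sock \
  containrrr/watchtower storagenode watchtower \
  --stop-timeout 300s --cleanup
```

Listing `storagenode` and `watchtower` after the image name restricts updates to just those containers, and `--cleanup` is the flag mentioned above that removes superseded images.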

Some have had issues with watchtower in the past. I have used it since launch, and I have only ever had one small problem. I consider it reliable enough, and less risky than letting your node become out of date and being DQ’d for running too old a version.
