SNO on Kubernetes?

With more and more environments moving from “raw” Docker to Kubernetes, I’d like to know: has anyone here succeeded in moving their SNO to k8s?
Yes, it is not for the faint-hearted, and yes, it will take some time before the smaller home Docker hosting solutions (like SMB NAS systems) move to k8s. But I use k8s professionally and wanted to give it a go, so I’m in the process of setting up a k8s environment at home (microk8s for now, waiting for VMware to clean up their act). One of the things I haven’t done yet is migrating watchtower. Updatekate does not look like an ideal replacement: I do not want to add Quay.io as an additional dependency, and on top of that, the updating I need is not CI/CD driven. Any pointers?

You should use persistent volumes for storage, and the identity should be deployed either as ConfigMaps or placed on the persistent volume too.
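Something like this sketch (all names, paths, sizes and env values are just an example, not from any official chart; the mount paths mirror the standard storagenode docker setup, and I used a Secret for the identity since it contains keys, but a ConfigMap would work the same way):

```yaml
# Create the identity Secret from the directory generated by the identity tool, e.g.:
#   kubectl create secret generic storagenode-identity --from-file=/path/to/identity
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: storagenode-data
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 2Ti
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: storagenode
spec:
  replicas: 1
  strategy:
    type: Recreate            # never run two pods against the same data volume
  selector:
    matchLabels:
      app: storagenode
  template:
    metadata:
      labels:
        app: storagenode
    spec:
      containers:
        - name: storagenode
          image: storjlabs/storagenode:latest
          ports:
            - containerPort: 28967        # still needs to be exposed externally
          env:                            # placeholder values
            - name: WALLET
              value: "0x0000000000000000000000000000000000000000"
            - name: EMAIL
              value: "you@example.com"
            - name: ADDRESS
              value: "your.ddns.example.com:28967"
            - name: STORAGE
              value: "2TB"
          volumeMounts:
            - name: data
              mountPath: /app/config
            - name: identity
              mountPath: /app/identity
              readOnly: true
      volumes:
        - name: data
          persistentVolumeClaim:
            claimName: storagenode-data
        - name: identity
          secret:
            secretName: storagenode-identity
```

You still need a NodePort/LoadBalancer Service (or hostNetwork) so the node is reachable from the internet.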
There is no easy way to auto-update the storagenode in k8s. You would have to use something like a scheduled job (CronJob) in k8s and check for updates yourself; I’m not sure it is possible at all.
k8s is designed mostly for CI/CD processes, so you would need some external tool such as Helm to update it regularly. Of course, you can also deploy a new version with kubectl each time the version changes, but either way it is an external tool.
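For example, if your chart exposed the image tag in its values (a hypothetical chart, not an official one), an update would just be a manual version bump and a helm upgrade:

```yaml
# values.yaml of a hypothetical storagenode chart - pin the version explicitly
# instead of relying on a floating "latest" tag.
image:
  repository: storjlabs/storagenode
  tag: "1.XX.Y"        # bump this when a new release is announced
  pullPolicy: IfNotPresent

# Roll out the new version with:
#   helm upgrade storagenode ./storagenode-chart -f values.yaml
# or, without helm, patch the running workload directly:
#   kubectl set image deployment/storagenode storagenode=storjlabs/storagenode:1.XX.Y
```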

I wouldn’t recommend using k8s for a storagenode. Most volume drivers are network-based, and SQLite does not tolerate network-attached storage, especially NFS.
Even SMB is not recommended.

For one storagenode it’s overkill. For a fleet of storagenodes there is no benefit: you will have one ingress (one public IP) for all of them, so they will all be treated as a single node and the income does not increase. Moreover, each new node has to pass 100 audits per satellite to get vetted. While it is being vetted, it can receive only 5% of the possible traffic, so the vetting process usually takes at least a month.
Multiple unvetted nodes will get only that 5% in total, so vetting takes roughly as many times longer as you have nodes (for example, three new nodes behind one IP would each need around three months).
I would recommend starting the next node only when the first one is almost full; that way the vetting process will not take forever.


Sure, I was planning on using local persistent volumes.
And I was not planning to run loads of SNOs; two is more than enough.
My reason for moving to k8s is simply that I am using more and more containers for various non-Storj-related tasks and want to move away from the “1 VM per container” approach I use now.

I see. However, if you do not have more than one worker node (remember, you have to run the master node as well), k8s is not a good solution. For a single-node cluster it is better to use docker-compose instead (see the sketch below), or even Docker Swarm.
If you just want to learn something or experiment, then go ahead.
But I wouldn’t suggest it for everyone; it’s a much more complicated setup than needed.
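For reference, a single storagenode as a compose file would look roughly like this; the environment variables and mount paths mirror the standard docker run command, but the values are placeholders, so verify them against the current documentation before use:

```yaml
# docker-compose.yml - minimal single-node sketch (all values are placeholders)
version: "3.7"
services:
  storagenode:
    image: storjlabs/storagenode:latest
    restart: unless-stopped
    stop_grace_period: 300s
    ports:
      - "28967:28967/tcp"
      - "28967:28967/udp"
      - "127.0.0.1:14002:14002"   # web dashboard, local access only
    environment:
      WALLET: "0x0000000000000000000000000000000000000000"
      EMAIL: "you@example.com"
      ADDRESS: "your.ddns.example.com:28967"
      STORAGE: "2TB"
    volumes:
      - /mnt/storj/identity:/app/identity
      - /mnt/storj/data:/app/config
```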

I’ll add my five cents:
Compared with bare-metal servers, Kubernetes adds considerable overhead and slowness in the storage and network stack. For a storage node, latency and IOPS are very important, and I can’t recommend Kubernetes for building any low-latency storage or network application. It can be deployed on Kubernetes, but other nodes (even an RPi) will easily beat yours, and your efficiency will be poor.


An old thread, but just dropping a comment to share my experience of running a SNO on Kubernetes. I ran 3 SNO on Kubernetes for more than a year without any issue. I used k3s as my k8s distribution, with a single k8s node. All of my HDDs were mounted locally and consumed through a k8s PVC for each of my nodes, and I developed a Helm chart for that. I had stability issues with my ISP, but I’ll do it again very soon.

I agree with most of the previous comments. The complexity is much higher than just using Docker, but this setup is intended for Kubernetes users. There is no watchtower equivalent for the moment, but that is not a big deal. The real benefit I see here is using the same infrastructure layer, especially when running multiple nodes, for monitoring, logging, deployment… and I’m sure we are not the only ones who administer a Kubernetes cluster with spare disk space and would like to take part in such a project!


Thank you for sharing your experience. What is your storage backend (driver)? hostPath?

P.S. SNO (Storage Node Operator) is you :wink:. The node itself is called a storagenode. The OP made the same mistake, though.

I used the no-provisioner StorageClass (static local provisioning). It’s similar to hostPath, but with the advantage of using the same volumeClaimTemplate whichever provider you’re using.
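Roughly like this (node name, path and capacity are placeholders, not taken from my chart):

```yaml
# "local" StorageClass with no dynamic provisioner - PVs are created by hand.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-storage
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
---
# One statically provisioned PV per locally mounted HDD.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: storagenode-1-data
spec:
  capacity:
    storage: 4Ti
  accessModes: ["ReadWriteOnce"]
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  local:
    path: /mnt/disks/storj1
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values: ["k3s-node-1"]
```

A StatefulSet’s volumeClaimTemplates can then simply request `storageClassName: local-storage`, and the claim binds to whichever local PV is free on the node where the pod is scheduled.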

Thanks for the clarification about naming! :slight_smile:
