ETA for Linux Nodes?

Please provide native support for the Linux community, and let us know the estimated time frame for announcing this feature.

Btw I have Arch Linux.


It's usually a week after the Windows release, which was like 3-4 days ago,
so the Linux Docker v1.17.4 image should be rolled out around the 1st of December, and then it takes maybe 72 hours for all nodes to auto-update, I believe…

approximate numbers.

You can track the Docker version and the latest release here.
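For example, a quick way to check which image and tag a node is actually on right now (the container and image names below are just the usual defaults, so adjust as needed):

```
# list the locally pulled storagenode image and its tag
docker images storjlabs/storagenode
# or ask a running container which image it was started from
docker inspect --format '{{.Config.Image}}' storagenode
```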

I don't know of a better way… maybe using the forum announcements of the new versions and then adding a week, but I'm not sure how accurate that is… they are announced when the storage node software / bundle / Windows version is released.

And then, of course, sometimes there are delays because of issues.

I know that Docker is available. I was asking about a native Linux application.

On this page:

you can see “coming soon” next to Linux and macOS.

Kinda forgot about that… thought you were talking about the new Docker storagenode image release date, it's the usual question :smiley:

That's interesting, might be useful, not sure though… It kinda depends on which container tech is better, or how many nodes one plans to run… but I suppose there are plenty of people who hate that it has to run in Docker, maybe because of problems running it in a nested container configuration.

I know my own setup for that wasn't too smooth and still has some kinks that should be ironed out… Of course, with this on the horizon it might not be worthwhile to actually fix them, because I would much rather run a native client in a container than a Docker container within a container.

I will look forward to seeing this become a thing, that's for sure.

Running the binary is much easier than one might think. It can run on practically anything. I have been running it for a month now and it has updated twice on its own with no issues. Docker can always run into issues if you update the OS and end up breaking something.
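If you want to double-check what it's running after one of those auto-updates, something like this works (the binary path and service name are assumptions about your own setup):

```
# print the version of the installed binary
/usr/local/bin/storagenode version
# and check the service wrapping it, if you run it under systemd
systemctl status storagenode
```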

1 Like

So could one run multiple storagenodes on the same machine when using the binary… I mean, without containers or VMs?

All software will have bugs, issues, and oversights.
They might not be there now, but they will come eventually…

Of course the same is true for Docker, and the binary should have less complexity; but then again, Docker nodes won't affect each other by mistake, because of the separation between containers.

And the binary, on the other hand, has fewer points of failure… but I would certainly use the binary because it fits better into my setup, I think…

When it's easy to install, that is… plenty of stuff is still difficult for me in Linux, so I'm not looking for more pain, lol.

Yes, you can run more than one node with the binary; you would need to set up a service for each node, though. But it won't have any overhead like running through a container.
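As a rough sketch of what that looks like (the paths and the one-config-dir-per-node layout are assumptions, not an official recipe):

```
# each node gets its own config dir, with its own identity, ports and storage path
/usr/local/bin/storagenode run --config-dir /srv/node1/config
/usr/local/bin/storagenode run --config-dir /srv/node2/config
# in practice each of these lines becomes the ExecStart of a separate systemd unit
```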

1 Like

As a side note, I hope there will be a clearly documented migration path from docker to the binary ^^

If we have to migrate, that is. Maybe we'll be able to carry on using Docker…?

I for one am happier using Docker than the native binary. I moved to using as many Docker containers as possible and as few native binaries as possible, because it makes it easier for me to maintain and to reinstall on a new installation.

4 Likes

I doubt they will remove Docker support, for the reasons that kevink states below.

Docker is a great solution for those without a hypervisor, and it's basically zero setup, even if the Docker image is just, let's say, a Debian server OS container with the binary installed… which is most likely what it is now, or maybe FreeBSD for legal reasons.

Another advantage of having Docker as the hypervisor is that everybody will be running basically the same configuration.
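For reference, that “same configuration” boils down to the one published docker run command; the values below are placeholders and the exact flags may differ from the current docs:

```
docker run -d --restart unless-stopped --name storagenode \
    -p 28967:28967 \
    -e WALLET="0x..." -e EMAIL="you@example.com" \
    -e ADDRESS="your.ddns.example:28967" -e STORAGE="2TB" \
    --mount type=bind,source=/mnt/storj/identity,destination=/app/identity \
    --mount type=bind,source=/mnt/storj/storage,destination=/app/config \
    storjlabs/storagenode:latest
```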

A hypervisor OS, or other add-on software used to run OS containers / VMs, on the other hand, might have different configurations and issues, making support and problem solving more difficult.

For me the binary makes good sense, because I'm using Proxmox, and thus I must either install Docker directly on Proxmox or inside containers. The former is against the recommended Proxmox configuration, even though it seems to work fine… and the latter makes it a nested container configuration, which isn't optimal either…

At the moment I'm running both, mainly because running nested Docker wasn't straightforward, and I still have issues I need to solve, but with the binary on the horizon I'm not sure I'll bother.

Though even with the server OS (Proxmox / Debian) complaining every time I launch my nested solution, it seems to work fine; I just haven't been brave enough to migrate my main storagenode.

1 Like

The plan is to implement an updater for the Docker version too. It would run alongside the storagenode in the same container, so it would be able to update the storagenode and itself.
You will not need to migrate if you do not want to.

3 Likes

Being able to update itself is a standard feature in FreeBSD, I believe :smiley:
It can even reboot its kernel while running, I think… which is partly why so many routers and such use it as the base for their OS.

Maybe something worth considering… and of course it's, to my knowledge, fully compatible with Linux.

Most Linux systems can update everything except a few basic kernel modules.
But a Docker container doesn't need to update any kernel modules. It's easy to run the updater inside the container and let it replace the storagenode binary.
Docker images are typically based on Alpine Linux because it is very small. However, there's never a whole OS in a container.

You still reboot the system… With FreeBSD you can have 24/7 uptime, or that's the point of the feature, to my understanding.

When you're able to reboot the kernel with the old kernel still running, your program is basically coded to step from one kernel to the other, so both are running at the same time, and then at some point in time the old one stops or pauses and the mirror on the new kernel takes over.

That gives only nanoseconds of downtime. I'm unaware of any other OS / software that can do that: essentially replacing everything underneath without rebooting the hardware, rebooting and replacing the entire software stack with practically zero downtime.

And it's all built into a free-to-use OS.

Basically it's what they are trying to make the storagenode do, except it's all already built and free to use.

Ubuntu Server can do that too; it's called live patching: https://www.omgubuntu.co.uk/2018/04/enable-live-patch-kernel-updates-in-ubuntu-18-04
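For reference, enabling it is roughly this (the token comes from ubuntu.com/livepatch; the exact steps may differ by Ubuntu release):

```
# enable Canonical Livepatch for kernel updates without reboots
sudo snap install canonical-livepatch
sudo canonical-livepatch enable <YOUR-TOKEN>
sudo canonical-livepatch status   # confirm patches are being applied
```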

But for storagenodes you are confusing totally different mechanisms… To update a storagenode binary you don't need to reboot anything. You simply restart the service running the binary, which is a downtime of a few ms, and there is no kernel involved anywhere.
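In other words, a manual binary update looks roughly like this (paths and service name are assumptions about a typical setup, not an official procedure):

```
# no reboot involved: stop the service, swap the binary, start it again
sudo systemctl stop storagenode
sudo cp ./storagenode /usr/local/bin/storagenode
sudo systemctl start storagenode   # the downtime is just this restart
```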

I was just wondering what the point was, when one could just as well have done it with FreeBSD or, I guess, Ubuntu in a Docker image… that's all…

And even though you run the binary, your OS would have to be FreeBSD or Ubuntu to get the same advantage and patch security issues as they arise… If run in a Docker container, the internet-facing OS could be updated without rebooting, if it were FreeBSD or Ubuntu with the binary inside it… making that solution advantageous, IMO.

Because of the high protection against zero-day exploits across the network… Of course every node would then be exactly the same… which would also be bad…

The host OS is a user choice, and the OS in a container doesn't matter in that regard.

1 Like

It does if the container OS has an exploit that doesn't get fixed. With the ability to update the container, the server can most likely be left running for longer without security issues, if only the containers are the exposed surface.

So from a security standpoint I would say it matters quite a bit, and even more so from an HA perspective: you wouldn't want to shut anything down if it could be avoided.

And isolating the hardware by using VMs as the exposed surface is a nice way to build or improve an HA solution; if one ran FreeBSD or Ubuntu in the exposed VMs / containers, then one could “reboot” these “devices” and update their kernels and the storagenode binary.

Thus you'd have a setup that could run for years with maybe one second of downtime… then hardware upgrades and other stuff become a much bigger factor, I guess… if, of course, everything stayed stable.

There is no kernel in a Docker container. And the only thing exposed to the internet in a Docker container is the primary application running in it.
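You can see that for yourself (assuming a running container named storagenode): the container reports the host's kernel, because it has none of its own:

```
uname -r                           # kernel version on the host
docker exec storagenode uname -r   # same version reported from inside the container
```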

I could explain a lot more but why don’t you do some research on how docker works… I’m getting tired of correcting you all the time…

Let's just say there is a good reason why most images just use Alpine as a base.

E.g. http://crunchtools.com/comparison-linux-container-images/
There are of course more considerations when choosing a base image, and there are differences in the security aspects. There's never “one ring to rule them all”.

2 Likes