ETA for Linux Nodes?

sorry, i thought docker used containers like other hypervisors do. doesn’t really make the security aspect of it much better, does it…
meaning docker would be inherently insecure on a limited-hardware HA setup, unless one is able to update the kernel without rebooting the computer, because otherwise the outdated OS would be an exposed network surface.

WAIT a minute…

alpine is an OS… almost got me there… “it’s just a program…” yeah, using an OS as a base :smiley: so a container OS, which means exposed surfaces with inherent vulnerabilities depending on software versions.

it doesn’t matter how you think about it; there would always be an OS layer which, if outdated, would make the exposed surfaces prone to security issues, and thus would require updates.

there will always be a need for updates :smiley: But updates to a container are done with a downtime of a few ms.

@kevink u got any idea on an ETA for a production ready Linux installer?

no, because I don’t work for storjlabs, sorry :slight_smile:

I don’t really see any need for a Linux version. For me the docker solution is perfect. I mean, if you really don’t want docker, you can just shuck the program out anyway, or build from source.

Perhaps you want the gui on linux? (wonders about WINE)

You assume everyone can run docker

One of my NAS boxes won’t run docker anymore (it’s on an older Linux kernel), so the option of just running a binary would be ace.

Can’t wait to give that a go :smiley:

root it and install freenas or unraid
not sure that’s an option, but in most/many cases it should be…
ofc not always easy to do, depending on what skill level one is at when starting out…

That way madness lies.
Easier to just install the binary when it’s available (assuming it’s going to be available for ARM architecture)

The binary is already available you can already use it. Even for arm.
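In case it helps: a rough sketch of picking the right release asset for your CPU. The asset names are an assumption based on how the storj/storj GitHub releases were named when I last looked, so verify against the releases page (or the linked thread) before trusting the URL:

```shell
# Map this machine's CPU architecture to a storagenode release asset.
# Asset names are assumptions and may change between versions.
arch=$(uname -m)
case "$arch" in
    x86_64)        asset="storagenode_linux_amd64.zip" ;;
    aarch64)       asset="storagenode_linux_arm64.zip" ;;
    armv7l|armv6l) asset="storagenode_linux_arm.zip" ;;
    *)             asset="" ;;
esac
if [ -n "$asset" ]; then
    echo "download: https://github.com/storj/storj/releases/latest/download/$asset"
else
    echo "no known asset for arch: $arch" >&2
fi
# then unzip it, chmod +x storagenode, and run it with your identity and config
```

On a ReadyNAS-class ARM box `uname -m` usually reports armv7l, so it should land on the 32-bit arm asset.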

Oh, I would need an installer. I’m not clever enough to do it the hard way :wink:

Glad to hear that ARM will be supported, though!

It is not that hard. You can find all the commands you need in the linked thread. You could also ask if you have any problems, and I am sure the community will be happy to explain everything to you :slight_smile:

Thank you
Seems relatively easy, I’m giving it a go on my old NAS.
Debian installer package would be lovely but the instructions for a “manual” installation seem easy enough :slight_smile:

Quick update

Yeah, slightly more cumbersome but perfectly doable and seems to be online on my old ARM NAS which can no longer run Docker. I am a very happy bunny indeed! :smiley:

Docker is just a management layer on top of Linux containers, the same thing that LXC, podman, etc. manage. So what you’re really talking about here is the security of Linux containers, not Docker per se. A Linux container is basically just a process run with separate namespaces and a cgroup. It’s extremely lightweight.
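Both pieces are visible from any shell, no Docker required; every process already has namespace and cgroup membership under /proc:

```shell
# One symlink per namespace the current shell belongs to
# (pid, net, mnt, uts, ipc, ...). A container is just a process
# whose namespace links differ from the host shell's.
ls /proc/self/ns
# Which cgroup(s) this process is accounted under:
cat /proc/self/cgroup
```

Running the same two commands inside a container shows different namespace IDs and a different cgroup path, and that difference is essentially the whole "container".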

With both virtualization and containers you still have shared hardware underneath, so there is always the possibility that the virtualization/container “sandbox” can be escaped.

The Linux kernel supports live patching, though not all changes can be applied in this way.
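A quick, hedged way to check whether your running kernel was even built with livepatch support, and whether anything is applied right now (the config path varies by distro, so this may come up empty even on a capable kernel):

```shell
# CONFIG_LIVEPATCH=y means the running kernel can accept live patches.
config="/boot/config-$(uname -r)"
if [ -r "$config" ]; then
    grep LIVEPATCH "$config"
else
    echo "no readable kernel config at $config"
fi
# Any currently loaded live patches show up here:
ls /sys/kernel/livepatch 2>/dev/null || echo "no live patches currently loaded"
```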

Close, but not quite. OS typically refers to the entire stack, from the kernel down to the userspace support. Containers don’t run a kernel, so the only thing different there is the userspace stack. You can run an Alpine container on a Debian host without an Alpine kernel; the only part of the Alpine OS that runs in the container is the userspace portion (libc).
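This is easy to verify if you have Docker handy; the container reports the host’s kernel version, because that is the only kernel there is:

```shell
# The host kernel version:
uname -r
# An Alpine container reports the same version string, since containers
# share the host kernel (skipped gracefully if Docker isn't installed):
if command -v docker >/dev/null 2>&1; then
    docker run --rm alpine uname -r
fi
```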

The “OS layer” in most Docker containers is basically libc. Not that there aren’t sometimes security issues in libc, but most of your attack surface is going to be the kernel and the application service. With Linux containers there is only one kernel (the host) which can be updated as usual, there is no additional kernel. And the application service is provided by Storj, which we already update fairly regularly (and can be done with only a few seconds of downtime if you know what you’re doing). And those libc updates will get handled when Storj builds new images for storagenode.
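For reference, the low-downtime update pattern looks roughly like this. The container name, tag, and run flags are placeholders, not the official storagenode run command (the real one needs your identity, mounts, wallet, and address settings):

```shell
# Pull the new image while the old node is still serving; the node is
# only offline for the stop/rm/run swap, typically a few seconds.
# Guarded so this is a no-op on machines without Docker.
if command -v docker >/dev/null 2>&1; then
    docker pull storjlabs/storagenode:latest
    start=$(date +%s)
    docker stop -t 300 storagenode   # give in-flight transfers time to finish
    docker rm storagenode
    docker run -d --name storagenode storjlabs/storagenode:latest
    msg="swap window: $(( $(date +%s) - start ))s"
else
    msg="docker not available; nothing to update"
fi
echo "$msg"
```

The key design point is ordering: the slow step (pulling the image) happens while the old container is still up, so the offline window is only the stop/rm/run swap.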

Given that a node can be updated with only a few seconds of downtime and there is fault tolerance built into the network, the kind of high-availability features you’re suggesting, as provided by FreeBSD, are a complete waste of time. They are useful in other cases where even a momentary loss of service is unacceptable, but the Tardigrade network is built specifically to tolerate node unavailability of way more than a few seconds.

and so the kernel is basically directly exposed online through the container… which is kinda my problem… not sure i got the horsepower for running what i want in VMs tho… so might not be worth speculating anyway…

didn’t know one could live-update the kernel; that does help a lot, because then at least i don’t have to suffer server reboots too often. my server is old and slow at rebooting, easily takes 10-15 minutes, and then the l2arc will take days or weeks to warm up.

kinda like the approach where one uses VMs to shield the kernel and hardware, thus being able to software-patch everything and reboot without really affecting the server. my IPMI is also terrible.

nah, it’s very rare i actually know what i’m doing, lol. by the time i do, i’ve moved on to new stuff, and when i come back a handful of years later so much has changed… :smiley:

just discovered the company liquid today; they are apparently doing something like the hardware version of containers, which looks interesting to put it mildly… not really home-user stuff… but almost without a doubt the future of central computing macro structures or small datacenters.

imagine taking all the hardware out of the computer. sounds crazy… but at that scale i have no doubt it’s the future.
more stuff to understand… yay

A quick update.
I’ve now been running the binary on my ReadyNAS 214. Pretty low-specced ARM CPU. I am impressed by how little RAM and CPU it’s using.
I think I’ll definitely be deploying more storage nodes on older NAS boxes around my family :wink:
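If anyone wants to put numbers on “how little”, this shows what the binary is using (assuming the process is named storagenode; BusyBox ps on some NAS firmwares may not support these flags):

```shell
# RSS (resident RAM, in KB) and CPU share of the storagenode process.
ps -o pid,rss,pcpu,comm -C storagenode 2>/dev/null \
    || echo "no process named storagenode found"
```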

EDIT: Just also wanted to add that my knowledge of Linux is… how shall I describe it… “superficial” and I managed to install the node and updater so don’t let it deter you if you’re considering it :slight_smile:

Yeah, I was surprised as well, because I’m hosting it on a NAS you would never think a node could even run on.

do keep in mind that the storagenode’s hardware demands do tend to grow with its size, so it can be difficult to evaluate absolute minimum hardware specs unless one is willing to risk a big node.

i’ve seen my node use 2-3 GB of RAM because of high disk latency or other activity, and i’ve also seen it use 50-60% of my 8 cores without it being iowait. ofc that doesn’t mean that much is required for it to work, but it certainly doesn’t hurt to have plenty of resources for it…

ofc it’s always interesting to hear what kind of experiments others are doing and what they can make work.

don’t be mad at me, but your setup is so unique and has so many weird problems, I wouldn’t apply your observations to any NAS :smiley:

conclusion:
so yeah i don’t think so but maybe

rants and reasons
it’s a good point actually… i was just about to argue against it, but then i got to thinking that it’s all containers, so it is “essentially” running on the host OS. i don’t think the docker storagenode CPU and RAM utilization would be much different tho… but maybe.

ofc a device without 20 GB of RAM to use when it feels like it wouldn’t do that :smiley: i’m just saying those kinds of things may lead to instability if, say, a drive started acting up…

and ram utilization does seem to increase with iowait, which is hdd latency, and when one has 14 hdds it’s pretty common that something is acting up… finally fixed one hdd and now i’ve got another one, from 2019, acting weird…

tried to disconnect it and nothing happened… been pondering if i forgot it was in there and i actually have 15 hdds connected, lol. yeah, unique problems, i know, it happens lol. but isn’t a storage server basically a NAS, just a matter of size?
if i installed freenas, would my system then count as a NAS :smiley:
NAS is a feature and a buzzword more than an actual thing, kinda

so weird about that drive tho… i mean zfs has been very reliable in complaining when i pull hdds, so i was a bit surprised when nothing happened…

i’m good at finding problems, and if there aren’t any then i usually end up making some… lol

if i just didn’t change anything and didn’t try to make my system more advanced i’m sure it would have run without any errors…

alas i would hold that it shouldn’t matter much; a docker storagenode on debian 10 would use the same ram as on anything else running debian, and most likely even on the ARM NAS systems… but there might be some deviation… would be interesting to try and compare, see if there are in fact differences in running arm vs intel

the data would be the same tho… because it would be the storagenode data… but it might be stored or processed slightly differently… ofc computation would be different, tho ram is the same so it might work pretty much the same… tho amd does have other memory features, so i suppose the data might be arranged a bit differently in ram, leading to differences…

but i think the deviation would be minimal. computation doesn’t compare well tho… arm and x86 are very different

and then again, comparing computation is just hell even on intel alone