About this docker thing. There is an amazing saying in Russian, my favourite, in fact; it describes so many aspects of life: “мыши давились, кололись, плакали, но продолжали жрать кактус”, for which I don’t know a correspondingly expressive analogue in English — but the literal translation would be “The mice cried, winced, and pricked themselves, but kept eating the cactus.”
Why are you all, good fellas, inflicting this suffering with containers in general and docker in particular upon yourselves? What are all those layers for? Are you deploying nodes into swarms or clusters? You have one node on a Raspberry Pi; just run the (already self-contained) executable. Come on!
For IT morons (mice?) like me it’s a lot easier to do, and I’ve not had much trouble with it…
(Especially if you run multiple nodes in the same machine)
It’s trading the illusion of things working now for the future headache of triaging breakages in all those unnecessary extra layers of complexity. It’s not a good trade.
Storagenode does not need the dependency isolation containers offer. It does not need swarms. It’s a classic daemon listening on a single port, with a config file and disk access.
Use it as such. Write a service wrapper for your OS and avoid all that waste of time and resources.
I don’t believe there is any justification for using a 2 GB orchestrator engine (that also forces a VM on some OSes, including Windows) where a 10-line text file suffices.
“I don’t know how to write a service wrapper!” is not a valid objection either. There are examples on the internet, or ask an AI agent to write one for you. This is a one-time hassle. And debugging 10 lines of text is somewhat easier than debugging the opaque, bloated, third-party commercial trash that is docker.
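For reference, that 10-line text file can literally be a systemd unit on Linux. A minimal sketch (the user, binary path, and config dir below are assumptions; adjust to your install):

```ini
# /etc/systemd/system/storagenode.service -- illustrative sketch;
# user, binary path, and config dir are assumptions.
[Unit]
Description=Storj storage node
After=network-online.target
Wants=network-online.target

[Service]
User=storj
ExecStart=/usr/local/bin/storagenode run --config-dir /mnt/storj/config
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

Then `sudo systemctl enable --now storagenode` and you are done. No engine, no layers.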
Most issues people discuss here are docker issues. That should tell you something.
I think you’ll find it is. I don’t even know what that is.
I have no interest and little time, as I am not an IT professional. I understand that it might make your IT-savvy eyes bleed, but it simply isn’t a problem for me. Containers work, are easy to deploy, and so far have given me no problems.
It may not be the elegant solution of the engineer, but it’s the functional solution of the pragmatist.
Well, if it’s not broken, don’t fix it. Maybe you have that golden config docker QA tested, where nothing is visibly broken yet. Keep it.
But when it inevitably breaks, don’t fix it. Migrate to running the software directly, without unnecessary crutches.
This is learned helplessness, and I reject it. “I don’t know” is not a thing anymore; AI leveled the field in terms of skill for everyone. “I can’t be arsed to deal with it” — that I can understand. And if you can get away with it and the inelegant contraption does not bother you, I don’t have a problem with that. Of course it will work for some; otherwise it would not have passed however shitty a QA (my contempt towards docker is probably palpable).
Btw, I still don’t know what that means. I don’t think I am related to IT in any way. I don’t consider myself an IT professional, hobbyist, or otherwise. The same way I drive to work every day in a car, but I’m not a transportation professional, if that makes sense. And I approach choosing a car with the same attitude, and that’s why I drive a 14-year-old one: there is nothing worth paying for on the current market at any price point. But that’s a different rant.
Also, please accept that there is a line between ignorance due to laziness and ignorance of certain subjects because learning others is higher on the list of priorities.
I will certainly take your advice to heart, though, and look deeper into “freeing” myself from docker if and when things break.
Funnily enough, I did try doing that when I was running single nodes per machine quite a few years ago, but the added complexity of running multiple instances without containers made me go back to Docker. So I do appreciate what you say.
Containers may not seem that useful if you only have a single node. But the more you run, the more value you see in having everything-in-one-directory (for easy migrations), or isolating each to their own network, or having them wrapped up with their own ddns management.
If they are not useful for one — they are not useful for five.
Storage node does not require containers, and they don’t simplify anything. They make managing it more complex. There is a difference in user convenience between running the storagenode binary with parameters and docker run … storagenode with parameters; the latter is more complex, not less (see the sketch below).
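To make that concrete, here is roughly what the two invocations look like side by side. The flags, ports, paths, and image name below are illustrative assumptions, not a copy-paste recipe; check the official docs for the exact command:

```sh
# The bare binary -- one self-contained executable:
/usr/local/bin/storagenode run --config-dir /mnt/storj/config

# The docker equivalent (abbreviated; flags and mounts are illustrative):
docker run -d --restart unless-stopped \
  -p 28967:28967/tcp -p 28967:28967/udp \
  --mount type=bind,source=/mnt/storj/identity,destination=/app/identity \
  --mount type=bind,source=/mnt/storj/config,destination=/app/config \
  storjlabs/storagenode:latest
```

Everything the second command does is wiring around the exact same executable the first command runs directly.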
None of what you listed requires containers or is simplified by them: you still have to assign ports and manage directories either way.
But if you do want containers for some other reason not listed here, use podman or LXC/jails. Definitely not docker.
It is useful for one. It’s more useful for five. Guess what I think about the number going up even more?
Container-level isolation is always useful, especially with apps that self-update and handle bulk data from the Internet. Your OS provides features that can prevent apps from touching any filesystems or networks you don’t want them to; why wouldn’t you confine services to their own environment? Containers are just shims that make using those features easy (instead of scripting namespaces, cgroups, fs overlays, etc.).
Assigning ports and managing directories… are the most vanilla tasks for any app that touches a network. If you want to wrap an app in its own filesystem+network, have it on its own VPN, and have it update its own DDNS… and still be as simple as a directory you can move between systems… containers are a great solution.
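For a taste of what those raw features look like without a runtime, here is a rough sketch using plain util-linux tools (the binary path and config dir are assumptions):

```sh
# New mount, network, and PID namespaces for one process --
# no container engine involved. Note the fresh network namespace
# starts with no interfaces, so you would still have to wire up
# veth pairs and routing yourself; that plumbing is exactly what
# container tooling automates for you.
sudo unshare --mount --net --pid --fork -- \
  /usr/local/bin/storagenode run --config-dir /mnt/storj/config
```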
Ah… so this isn’t a container issue for you… it’s specifically a docker thing?
If it doesn’t work in docker/podman, put it in LXC/jails. If it doesn’t work in LXC/jails, put it in a VM. Isolation is effectively free these days.
This is one of those reasons you would use containers for, if you wanted this feature. No contradiction.
It’s not necessary, but it may be desirable for some. It’s a far cry, however, from the “UI lets me configure ports better” reason many provide.
Yes, it’s specifically a docker thing. Because, as you rightly say, all those features are free in modern OSes, but docker manages to make them not only expensive but also buggy.
I would be very happy if every program that I run or will need in the future were containerized, docker or something similar. It would make for the cleanest OS, without leftovers all over the place. Now, in Windows and, as far as I’ve seen, in Linux too, when you uninstall a program you don’t get the same system as you had before installing that program. Removing a container does exactly that.
I don’t have experience with any alternatives other than Docker, and for me it works and gives me what I need.
Go applications are already monolithic executables without dependencies.
Not really. All the data stays. It’s not different.
This is not a containers-vs-no-containers problem, it’s a Windows-vs-everyone-else problem. Windows developers are sloppy and shit with data everywhere indiscriminately, and have rightfully earned that reputation. Very few of the developers I’ve spoken to over the years were aware of the Windows application design guidelines, let alone had read them. This problem does not exist on OSes made by adults.
macOS took it one step further: apps keep all their dependencies in the bundle (except Google Chrome — screw that shit), so deleting the app removes everything. And they are sandboxed by default. All the behaviors you wanted, for free, out of the box.
Linux has AppArmor and SELinux for the same reason.
You are trying to fix a bad OS by bolting on more complexity in the form of containers. Instead, use a different OS.
Windows is dead. It’s a cesspool of broken concepts and crap design, even before we talk about ads on the desktop. Accept it. Move on.
I’m a former windows developer, if that helps with credentials.
And then someone has a brilliant idea to run the node in the turd sandwich, between Windows and docker. With the obvious outcome.
I actually don’t remember which DOS it was, but it was a 286 processor, and then we got a shiny new i386 at the university, and DOS Navigator was all the rage, with the most amazing Tetris implementation I have seen to this day. That I remember, and then it’s all a blur, and now we have AI.
When I started my node, docker was the only option allowed. I didn’t like it, but I couldn’t really do anything about it.
Later Storj released binaries that did not need docker, but by now changing it may lead to problems, so I keep docker.
Normally I just use separate VMs for this. For Storj it’s a separate VM and docker.
No. The container could have been the only option, perhaps (even though I highly doubt it; whatever executables Storj put into the container, you could have run outside of it), but not docker. Docker is a product of a private company, one of many container orchestrators, and not the best one by a large margin. They have the best marketing, yes. But that hardly matters in a technical discussion.
This is not a good approach. By using a VM you are wasting resources hosting a separate kernel, and negating any benefits of filesystem caching.
Modern OSes support sandboxing. Storagenode is a self-contained executable with no dependencies. Sandbox the storagenode and you don’t need containers, let alone a VM. A VM is a horrible idea for a storagenode. Keep it simple.
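On Linux with systemd, for example, sandboxing is a handful of extra directives in the same unit file. A sketch (the writable path is an assumption):

```ini
# Standard systemd sandboxing directives, added to the [Service]
# section of the unit; /mnt/storj is an assumed data path.
[Service]
NoNewPrivileges=true
ProtectSystem=strict
ProtectHome=true
PrivateTmp=true
ReadWritePaths=/mnt/storj
```

With ProtectSystem=strict the whole filesystem is read-only to the service except the paths you explicitly allow. That is the isolation people want from containers, in five lines of text.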
It was the only version released. Yes, I could have taken the binaries and run them outside of docker and done that after every update, but that was not supported by Storj and would also have been annoying.
And if I ran into issues, I would have more problems getting support because Storj would not like my different setup.
For me, a VM is the simplest option. It behaves just like it was on separate hardware. No need to figure out sandboxes and make different programs work on the same VM/server.
I want something: I create a VM, install the OS it needs, and that’s it. I could set up separate servers, but this is simpler.
It may be a bit slower but whatever, my hardware is rather old and can still handle it.