Why Windows though? Isn't it less efficient than Linux? Or is it just more convenient?
My testing showed that Windows is more CPU- and RAM-hungry than Linux; it used a quite significant portion of both even when idling.
Because I know how to cook it, compared to Linux. I do not use Docker, so it takes very little CPU and RAM itself. I have up to 17 nodes on 1 server - I just run 17 services. It is very easy.
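(For readers curious what "just run 17 services" can look like on Windows: registering an extra node as a service is one `sc.exe` call. A hedged sketch - the service name and paths are placeholders, and the stock installer normally creates the first `storagenode` service for you:)

```
REM Hypothetical sketch: register a second node as a Windows service
sc.exe create storagenode2 start= auto ^
  binPath= "\"C:\Program Files\Storj\Storage Node\storagenode.exe\" run --config-dir \"C:\Storj\node2\""
sc.exe start storagenode2
```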
I also have my own updater software that makes everything easy and runs by itself.
I have recently converted all of my nodes from Windows to Linux, and did indeed see a significant reduction in CPU and RAM utilization, resulting in a lower overall power draw, which was the entire point to begin with.
Why did I not start on Linux? And why did I not start the migration earlier? Exactly like @Vadim, I felt competent in Windows; I knew my setup inside and out and it just worked. It took a not-insignificant amount of time to automate both the migration process and day-2 operations of the new Linux nodes, and if I were not super interested in the learning process in and of itself, I don't think it would have been a wise investment of my time.
In Windows my CPU is only at 3-7% with 18 nodes.
I have 64 GB of RAM here.
I don't know how it could be lower?
For the entire system, that's very respectable. It was never about the Storagenode program itself for me; it was about background Windows tasks.
Most of my nodes are virtualized in ESXi, each VM with a single node, 10 GB vRAM, and 1 CPU core. The Windows machines were regularly maxing out their single core, and even after my OS optimizations of Windows, I had a hard time getting the base RAM utilization below ~2 GB.
Is it a good idea to run a single OS for a single node? No, it's not, but my StorJ endeavor was never about StorJ - it was about creating VMs that behaved similarly to the "real" VMs I have in my datacenter at work, and they provided a great opportunity to enhance my Windows management knowledge.
I converted them all to Debian machines, bringing idle RAM utilization down to megabytes and CPU utilization down from GHz to often double-digit MHz. I do believe the change from NTFS to EXT4 has made the node process a bit more efficient, but I can't say for sure.
Performance under Windows was fine - I never had issues with that, and it was always the idle consumption I wanted to be lower.
One virtual machine per node? I have all my nodes cohabiting on the same virtual machine. It saves on RAM. Haven't had a problem. They are cleverly named after their access port: storj14002, storj14003, etc.
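(If the VM runs a systemd-based Linux, one tidy way to keep that port-based naming is a template unit instantiated per port - a hypothetical sketch, not necessarily this poster's actual setup; paths and the user are made up:)

```ini
# /etc/systemd/system/storagenode@.service - hypothetical template unit
# Instantiate once per node: systemctl enable --now storagenode@14002
[Unit]
Description=Storj storage node %i
After=network-online.target

[Service]
User=storj
# Each instance gets its own config dir, named after its port
ExecStart=/usr/local/bin/storagenode run --config-dir /srv/storj/%i
Restart=on-failure

[Install]
WantedBy=multi-user.target
```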
Honestly, it’s horrible.
Running nodes in a constrained VM with 2GB of RAM is horrible. Nodes especially, but pretty much all processes work better in a shared environment. So: jails and containers, or directly in the OS. You have to use a VM when you must - for customer isolation, migration, things like that. You don't have to do it at home. So why do you do it at home?
You say it's a learning exercise, but you are not learning anything useful this way: cramming a service into an ill-fitting configuration is not educational. If anything, you'll learn bad practices and system behaviors that would never show up if things were configured properly.
You are talking about resources but completely neglecting the filesystem cache. That's what is important - not the number of processor cores or the RAM usage of the process itself.
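(On Linux that cache shows up in the `buff/cache` column of `free`, not in any process's RSS - illustrative output, numbers made up:)

```
$ free -h
               total        used        free      shared  buff/cache   available
Mem:            31Gi       2.1Gi       1.4Gi       0.1Gi        28Gi        28Gi
```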
And then to choose to do all of this on Windows. For context: in my past life I was a graphics driver stack and services developer on Windows, and before that a sysadmin. I tell you with 100% authority: every second you spend dealing with Windows is a wasted second nobody will give you back. You may not have a choice at work (even though you do - I've dumped employers for much less), but you do at home.
Why not use this opportunity to learn how to optimize the design for the task at hand, instead of cramming the task into a specific pre-selected configuration based on... random choice? And yes, it's still not about Storj; it's about all your other services - but if you design the system well, Storj will also benefit greatly. You can use it as a litmus test of a well-optimized system: you shall not notice its presence.
If you want to learn how to manage VMs, there are better-fitting services to run that can tolerate such isolation.
Storj benefits from shared access to the filesystem. FreeBSD jails are the right way to do it. Linux has something similar (LXC). Just don't do Docker.
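For the unfamiliar, a jail can be as small as a stanza in /etc/jail.conf - a rough sketch with hypothetical names, paths, and addresses, not a tuned production config:

```
# /etc/jail.conf - hypothetical minimal jail for one node
storj1 {
    path = "/usr/local/jails/storj1";
    host.hostname = "storj1.local";
    ip4.addr = "192.168.1.50";
    exec.start = "/bin/sh /etc/rc";
    exec.stop = "/bin/sh /etc/rc.shutdown";
    mount.devfs;
}
```

Then `service jail onestart storj1` brings it up, sharing the host kernel and the filesystem cache.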
What's wrong with Docker? How does it differ from LXC?
if the app can run natively, then why is Docker needed at all?
Oh wait… wait slow down …
…an app-isolation fight is about to break out in a vanilla “Why isn’t my disk full yet?” thread…
…give me time to grab some popcorn!
I find the management to be easier with all settings in one file. Plus I've had issues with programs installing/updating incorrectly, which causes a headache. Down/up the Docker container and it's like nothing happened. Plus I can put all data in folders I can manage and back up easily, and isolate whatever I want from the network. I'm sure all of that is also possible if I were a Linux dev, but I'm just a computer-loving electrical engineer. Sooo…
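For context, the "all settings in one file" workflow looks roughly like this - a sketch following the documented storagenode image, with wallet, paths, and address as placeholders:

```yaml
# docker-compose.yml - placeholder values, not this poster's real config
services:
  storagenode:
    image: storjlabs/storagenode:latest
    restart: unless-stopped
    stop_grace_period: 300s
    ports:
      - "28967:28967/tcp"
      - "28967:28967/udp"
      - "14002:14002"       # web dashboard
    environment:
      - WALLET=0xYOURWALLET
      - EMAIL=you@example.com
      - ADDRESS=your.host.example.com:28967
      - STORAGE=2TB
    volumes:
      - /mnt/storj/identity:/app/identity
      - /mnt/storj/data:/app/config
```

`docker compose down && docker compose up -d` is the "down/up and it's like nothing happened" step.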
I look at it only from a Windows perspective; in Linux …
LXC is OS-level virtualization. Docker focuses on application packaging and is a higher abstraction layer above LXC.
Originally Docker used LXC for execution, but now they use a separate concoction (libcontainer? runc?), have a separate daemon (bloatware, it's 100% unnecessary), implement cgroups and namespaces directly, and also adhere to another standard, OCI (Open Container Initiative) - as opposed to pure LXC and/or jails, where you get a copied rootfs environment with which you can do whatever you want.
Essentially, LXC, just like FreeBSD jails (but worse), provides lightweight OS-level virtual machines with strong isolation, albeit sharing the host kernel - which is awesome from the resource-reuse perspective, including the filesystem cache, in light of the present discussion.
Docker and Podman provide a way to package, distribute, and run applications (the "one app per container" mantra). Docker, however, has a separate daemon process and is generally very bloated. Podman is much leaner, daemonless, and accomplishes the same thing; it relies on systemd for scheduling and execution, and can run rootless. Docker cannot.
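The systemd integration mentioned above is nicest with Quadlet (podman 4.4+), where the container is just a unit file - an abridged, hypothetical sketch with placeholder paths:

```ini
# ~/.config/containers/systemd/storagenode.container - rootless Quadlet sketch
[Unit]
Description=Storj storage node (rootless podman)

[Container]
Image=docker.io/storjlabs/storagenode:latest
PublishPort=28967:28967
Volume=%h/storj/identity:/app/identity
Volume=%h/storj/data:/app/config

[Install]
WantedBy=default.target
```

After `systemctl --user daemon-reload`, systemd launches the container directly as `storagenode.service` - no daemon in between.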
In terms of isolation, LXC and Docker/Podman are pretty good, but not as good as FreeBSD jails.
The benefit of Docker/Podman is precisely the adherence to the OCI standard and the isolation of application dependencies, very useful for microservices: all complexity is contained (pun intended) and dependency hell is entirely avoided.
Exactly.
However, @Vadim is right that Go applications in general, and storagenode in particular, are already self-contained executables with no dependencies. They don't benefit from dependency isolation, so why not run them natively?
Docker [worst]: systemd launches docker daemon, docker daemon launches container, container launches storagenode.
Podman [better]: systemd launches container, container launches storagenode.
Native [best]: systemd launches storagenode.
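For reference, that "Native [best]" chain is just one ordinary unit file - a minimal sketch, assuming the binary lives at /usr/local/bin/storagenode and a config dir already initialized with `storagenode setup` (paths and user are placeholders):

```ini
# /etc/systemd/system/storagenode.service - minimal sketch; paths are assumptions
[Unit]
Description=Storj storage node
After=network-online.target
Wants=network-online.target

[Service]
User=storj
ExecStart=/usr/local/bin/storagenode run --config-dir /srv/storj
Restart=on-failure

[Install]
WantedBy=multi-user.target
```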
Next step:
Do native inside a jail or LXC container (probably via LXD) - to get VM-like features, including better isolation (not just a fancy chroot, but also networking and device filtering, etc.).
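With LXD that step looks roughly like this (hypothetical names; the disk and proxy devices are the device filtering and networking knobs mentioned):

```sh
# Hypothetical LXD sketch: one Debian container per node
lxc launch images:debian/12 storj1

# Pass the data disk through to the container
lxc config device add storj1 data disk source=/mnt/storj path=/srv/storj

# Forward the node port from the host into the container
lxc config device add storj1 node proxy \
    listen=tcp:0.0.0.0:28967 connect=tcp:127.0.0.1:28967
```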
To summarize: jails and LXC are the way to go if you want a VM-like experience without wasting resources. Docker/Podman if you have a complex application with 100500 dependencies that someone has already handled and wrapped for you - you pay the overhead of duplicating the userspace environment, but it's worth it. At least you don't duplicate the kernel.
For storagenode this is absolutely not needed.
LoL @Alexey, please separate the thread…
Me too
I love talking with you, mr. Rabbit.
You're eloquent, you're well informed, you generally have a nice tone, and you provide a tonne of great information for the entire forum.
You're also very set in your ways, standards, and positions, to the point where it does not seem like any solution other than your own can be the right one. We've talked multiple times about why everyone should have a FreeBSD-based, ECC-fueled, cast-off enterprise box running ZFS, providing storage and storage only to a set of compute boxes for the rest of the house's infrastructure. It's a great solution, wonderful even - but a single Synology box (which you're also very vocally against, which is why I bring them up specifically) running RAID5 can also be the right solution for a user.
> Honestly, it's horrible.
Honestly, it's not. And let me tell you why it has been perfect for me. You make quite a lot of statements in your text, so mine is naturally going to be quite quote-heavy. Please bear with me.
> Running nodes in a constrained VM with 2GB of RAM is horrible.
To that, I agree. But I'm not. Read my message again - they have 10GB RAM; I just could not get Windows idle usage under 2GB.
> [VMs are bad … ] You don't have to do it at home. So why do you do it at home?
I work as a VMware administrator, and in my younger days I spent quite a lot of time learning (now I spend a lot, lot of time using :p) PowerCLI. I love that module for PowerShell - and I love PowerShell in general. I run VMware VMs at home because I really enjoy toying around with my scripts. Some are on my GitHub, most are in my work's repositories. Some have brought me personal joy to develop, others advancements in my career.
> […] cramming a service into an ill-fitting configuration is not educational. If anything, you'll learn bad practices and system behaviors that would never show up if things were configured properly.
It was never about what was inside the VM, it was always the VM and the surrounding infrastructure itself - just like my job.
- Can I effectively detect and combat vCPU Ready times?
- Is my datastore monitoring script successfully moving `.vmdk` disks to other datastores in the datastore cluster?
- Are extra datastores automatically added to the datastore cluster once all existing ones have had their space exhausted?
- Is the integration with the aforementioned standalone storage box solid enough to automatically request a new iSCSI LUN, have it mounted, and ESXi storage rescanned, so a datastore can be built and joined to the existing cluster?
- Can my PowerShell scripts running inside Windows automatically detect low space capacity, request extended disks from the datastore cluster, stop the service, resize disks, rewrite their own configuration, and restart the node? (A rough sketch of the first step is below.)
All of these questions are just a small subset of the problems I have solved in code, and they are now fully automated in my home setup.
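As an illustration only, the low-space detection from the last bullet can start as small as this PowerCLI fragment (server name and threshold are made up, not my actual script):

```powershell
# Hypothetical PowerCLI sketch: flag datastores under 10% free space
Connect-VIServer -Server vcenter.example.local

Get-Datastore |
    Where-Object { $_.FreeSpaceGB -lt ($_.CapacityGB * 0.1) } |
    Select-Object Name, CapacityGB, FreeSpaceGB
```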
StorJ was (and is!) a near-perfect piece of software to have running in the VMs that I also use for testing. It generates CPU cycles, dynamically uses RAM, and has requirements for both IO and capacity - just like my customers' workloads at work. But unlike customers, I have the ability to yank the power cord, to severely over- and under-size, and generally do all those things that absolutely cannot happen at work.
StorJ running in VMs acts as close to an artificial customer VM as I need it to, for the tasks I have at hand, and it’s great.
> Filesystem caches: good - your setup: bad
Okay, I agree on this one. My implementation of the Storagenode software itself is not perfect, but once again, it was never about the nodes, it was about the VMs they ran in.
> Why not use this opportunity to learn how to optimize the design for the task at hand, instead of cramming the task into a specific pre-selected configuration based on... random choice?
Because for me, it's not about optimizing the task at hand - it's about optimizing the VMs. My company has many consultants on hand who deal with the inner workings of OSes; I think I'd be fired if I took too close a look without an incident. I need to provide as close to a flawless virtualization layer to my company as I can, and for this very non-random task, the setup I ran has been great.
> FreeBSD jails are the right way to do it. […] Just don't do Docker.
See my prior point about you only accepting a single solution. EXT4 with Docker on Debian is also a very right way to do it, as per the documentation. But you have piqued my interest - what's with the sudden hate on Docker? Or maybe it has always been there and I've just not seen it. I'm not asking from a place of rebuttal, but of genuine interest.
Cheers my friend
In my experience… anyone that uses a *BSD… is certain the BSD way of doing things is the "one true way". If their BSD shipped something different tomorrow, then that new way would be the one true way. It's not a matter of what's right based on differing requirements or configurations: BSDs are religions that demand compliance.
So it's not AR's fault. He just fell in with the wrong crowd: like joining a biker gang in your youth.
Got it. So if it's about managing VMs that contain a misbehaving app - then sure. That's a perfect use case.
Well, the edited quote does read that way. But there was a sentence in the middle about LXC. Truth be told, I only put it there to prevent a response like that. So thank you for reading between the lines and seeing through my shenanigans - that's awesome!
Yes, outside of purposeful mild exaggeration to drive the point, I truly believe that for a specific task there exists one best solution and one best approach. I'm not "set in my ways" for the heck of it; through experience trying various solutions, I've convinced myself that this one is objectively and measurably the best. I've used Windows, Linux, Android, macOS, FreeBSD, and even Synology, and yes, for hosting storage and other services at home, FreeBSD is objectively better. I grew into preferring FreeBSD, I didn't start there. I had no preferences and, like most people, started with Windows by default (ok, MS-DOS and then Win95, but that is very ancient history).
It took a long time to arrive at this position, and therefore convincing me that any new emergent technology is better will also take a comparably long time. It's not stubbornness, but thoroughness. (Ok, maybe a little stubbornness.) I am open to having my mind changed.
I'm also convinced that Synology is not good for anyone - but Synology marketing is so good that at this point it's irrelevant: if users are happy with an inferior solution (through the magic of not knowing better), I'm fine with them being content. But it's not fine for me; if I do something at home, it must be perfect overkill that I feel good about. Otherwise why bother? Might as well just pay someone else to do it for you.
I've also been an avid [gasp!] macOS user for the past 15+ years. But I got here, I didn't start here. It was active exploration and discovery, not rot and inertia.
Perhaps. Perhaps not. Perhaps it's the opposite - people figure out the right way to do things, and look at that, the BSD folks came to the same conclusion. So far I like what they do and the direction it's going, and I don't like what Linux does and where it's going - on the technology, design, documentation, and even licensing fronts. With exceptions, of course: for example, what they did with eBPF on Linux is very nice. But jails are still better than LXC, and I'm convinced the BSD (and MIT) licenses are far, far superior to the GPL. (I'm actively avoiding touching anything GPL 3+ with a rotten stick. But that's just me, and an unrelated can of worms.)
Nah, it was always there. I touched on it in the comment above - but TL;DR: it's commercial bloatware. There are better (read: leaner, easier, simpler) ways to manage containers - podman would be an obvious example - let alone that most apps home users run don't benefit from containerization at all. Essentially, for many people hub[.]docker[.]com and similar registries become just another "App Store" - a software distribution facility, but one that adds another, often unnecessary, layer between the app and the hardware.
This is especially egregious when Windows users do that: instead of running storagenode natively, they end up with a hypervisor, a virtual machine, a container solution, a container, and only then - the application. It makes no sense whatsoever, especially given that Windows has a Linux runtime (WSL1; WSL2 should never have existed - but as always, Microsoft murdered a great idea), let alone that storagenode specifically can be compiled into a native executable.
Exceptions exist: some applications do benefit from containerization - those that have many dependencies. Storagenode is not one of them.
(Another useful feature is a separate networking stack - but that's not specific to Docker in any way.)
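And since storagenode is plain Go, "compiled into a native executable" is literal - a rough sketch assuming a recent Go toolchain (official prebuilt binaries exist too):

```sh
# Rough sketch: build storagenode as a single native executable
git clone https://github.com/storj/storj
cd storj

# Native build for the current OS/arch
go build ./cmd/storagenode

# Or cross-compile a native Windows binary
GOOS=windows GOARCH=amd64 go build -o storagenode.exe ./cmd/storagenode
```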
With respect to MS, they implemented virtual apps a long time ago, but configuring and using them is very close to a nightmare.
So WSL1/WSL2 for me was a solution to many demands, where my work (not only here) required me to have Windows and Linux at the same time. So I just got the best of both worlds and will continue like this.
Right now I can easily switch to Linux, but why, if everything needed just works?
I'm not convinced by Macs, because the Terminal is not fully GNU-compatible and requires so much effort that I do not want to bother.
It's very hard to appreciate anything they do when everything worthwhile they have ever done they have ultimately given up on and killed off:
- WSL1: great idea - Linux userspace on the Windows kernel. Tremendous. Then they gave up, shoved a full Linux into Hyper-V, and called it WSL2, negating all the benefits.
- Edge: great browser engine that did many things right. Then they gave up and shoved Chromium into the internals. Now it's just another Chrome with Microsoft garbage on top.
- Windows Phone: an amazing, innovative platform, from both UX and performance perspectives. Where is it now? Dead.
- Visual Studio: the best IDE on the planet. Then they gave up and shoved Electron up everyone's arses with Code.
- Windows UI: why, almost a decade later, is there still a bipolar interface with old and new UI intermixed? Why are there ads in my OS?!
- cmd: what is that? Even Microsoft gave up on it, and yet it's what they ship.
- Excel still works. But python/matplotlib/numpy has replaced it entirely in my workflow.
So no - it's hard to give them credit for anything when they shit on everything that is half-decent.
No reason. But you should clearly understand the distinction between familiarity and convenience.
For example, if you are very familiar with system A, you can definitely accomplish things there faster/better/with less stress and frustration than in B - not because A is inherently better, but because you are more familiar and effective with it.
I don't know your use cases, but something tells me that if you migrate from Windows to anything else, after a few months of struggle and adjustment you will become more productive. Literally with anything else.
Anecdotally, I know quite a few folks who migrated off of Windows and no one who migrated to Windows - including a few friends who work at MSFT (albeit in the Office365 division).
Very puzzled by this comment. Usually people pick an environment the opposite way: based on the things they can do. Terminal on macOS, I agree, is not good. Most people just use iTerm2, which is far superior to any other terminal I have ever seen on any platform. If you mean the shell environment - yes, it's not GNU; it's rooted in NeXTSTEP and BSD.
I actively avoid GNU nonsense, so this factor was a positive in my mind.
The reasons I picked macOS for my daily main machine are a) hardware quality and performance, b) apps that I like and that don't have analogues elsewhere (BBEdit, iTerm2, Transmit), and c) it "just works" and stays out of the way: it does not fight me like Windows and does not waste my time like Linux. Ultimately it's a personal choice, but please do realize that it's very easy to conflate familiarity with superiority/productivity. Back to switching main OSes - it really does not matter; in the end you will be able to switch between them just fine, and yet you will still have an affinity for one, and I guarantee you it won't be Windows.
You have a fast CPU - I see in one of the posts that you're using an i7-12700 for this machine? Well, I'm getting very similar readings (8-10%) on Linux on an i3-9100t (per benchmarks, ~6× slower and nominally half the TDP of yours). I would therefore guess that if I ran Windows on that machine, it would be north of 30% on a regular basis - and I like having spare CPU for the non-Storj tasks I run there. Not saying it's wrong to use a more powerful CPU if that's what you have, just giving perspective on why that would matter.
BTW, NixOS is the new BSD in this sense. I have several coworkers who swear by Nix or NixOS. I admit some principles are enticing, but they have their own share of problems.
I used to hold the same belief; then I realized that it pays off to relax some constraints and admit a less-than-perfect approach that can respond more flexibly to environment or business-requirement changes. For example, although I share your sentiment towards Docker, I stayed with it exactly because I see the storage node software growing, and I would like to avoid extra work if Storj decides to put some new hard dependencies into the Docker image that I would otherwise have to manually add to its potential replacement.
Icky!
Not so great. Filesystem access was crazy slow, exactly because the POSIX API does not translate well to the Windows API. This was one of the two primary reasons WSL2 was created (the other being syscall compatibility). Like, 20× slower than native, per Microsoft's statements.
Oh dear. I worked with various versions starting from 2005, and I'm pretty sure this product alone is responsible for more than half of all the cursing I did within a 10-meter radius of any computer, ever. I think I've submitted a half-dozen bug reports as well, giving up when I realized Microsoft doesn't care.
Honestly, this is the only piece of software from Microsoft that I have used on a regular basis in the past ~20 years and don't hate at this point. Could it be better? Sure. But this is the only no-code application builder that I've seen actually work long-term.