STORJ VMs vs Docker Containers

i have yet to find a good way to give a vm access to a folder on my hypervisor host without using network protocols… seems like a pretty basic thing… and yet everything seems to be based on different protocols…
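for what it's worth, qemu/kvm does seem to have a non-network way of doing this via 9p / virtio filesystem passthrough… a rough sketch of the idea, assuming a proxmox host, a vm with id 101 and a host folder /srv/shared — the id, path and mount tag are just made-up examples:

```
# on the host: hand the folder to the vm as a 9p share via raw qemu args
qm set 101 -args "-virtfs local,path=/srv/shared,mount_tag=hostshare,security_model=mapped-xattr,id=hostshare"

# inside the guest: mount the share by its tag
mount -t 9p -o trans=virtio,version=9p2000.L hostshare /mnt/hostshare
```

the guest needs the 9p kernel modules (9p / 9pnet_virtio), which most distro kernels seem to ship… performance isn't amazing, but it avoids smb/nfs entirely.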

virtualization is “still” kinda new, takes a long time for a technology to really mature… usually about a century. i think you might be able to do something like that with spice… and then select the vmware based graphics or something like that… i'll have to check up on that…
i tried it out, didn’t think the workstation player was a hypervisor, figured it was a remote desktop thing, which i guess is what the vmware graphics thing is for…

ran into it when trying to do gpu passthrough, which was… “fun” as in a week of server crashes and tons of reboots… didn’t even get it to work…

but i did find out that spice drivers for the vm graphics and qemu drivers installed would make it run microsoft RDP so well that it was almost like using a local machine… can even do 1080p youtube… weird thing is that the server basically doesn't have a graphics card / i took out the low-profile nvidia card because the system was unstable… what's left is not a good one since it's onboard and 10 years old…

ESXi isn't cheap, runs up into the tens of thousands for big setups, but that's also why it's so good… you get what you pay for…

but i'm very happy at being angry at my proxmox for free xD it does the job and it does it well + debian is very likely to become the future of the OS world, with all the development being put into that distribution… i just started using zfs and i'm very new to linux… but i don't regret making the change…

and i made a lot of considerations before switching… really i wanted to go FreeBSD
but proxmox was what would install fast and on the first try… fought a bit with FreeBSD and other BSDs, maybe it was me or just some BIOS configuration…

anyways just wanted to say i understand what you are saying, and i'm not saying there is a perfect solution… just that if somebody is used to ESXi then making the switch to proxmox might not seem worthwhile depending on the use case…

if i hadn't been hating windows so much i might have gone with windows Storage Spaces instead of zfs… and i'm not really sure i made the right choice in switching… because i'm basically lost in linux most of the time.

i like all my new options and the ability to always dig down and find a way to solve the issue, but i also miss the simplicity of windows and how it was default configured to just work well in most regards… if you need a firewall you turn it on and it basically minds itself… barely have to open ports or anything these days…

windows vs linux is a bit like you buy a new car…

the windows car is big and nice looking and everything just works, tho you cannot really change what you want…

the linux car is like an entire garage, with the car in the middle, and some tools come with it, it's sleek, has good performance, but it is a bit of a gas hog… also it doesn't have the butter smooth suspension of the windows car… but it's still a very nice car… then you want to adjust your mirrors and you find out you have to take the door apart and install the cables for the mirrors because well everybody just assumes that's how that is done… and then you can ofc select which kinds of cables you want and how to control the mirror…

i mean it’s not like the car doesn’t work without the ability to move the mirrors, you just have to tilt your head a little… so it works… you just cannot simply adjust them

XCP-NG is also a nice choice… it was very high on my list… but alas it's not debian based… which means it's lagging behind in features, which is the advantage of debian and why i went with proxmox… but XCP-NG is supposedly a “better” choice because it's a much more mature system… but then again if one is already on ESXi one sort of needs a good reason to change imo.

i was surprised at just how nice and simple hyper-v was to work with… and when it's a windows host, the guests are so well managed in RAM usage and cpu usage, i was very impressed… until i wanted to do all kinds of passthrough… proxmox is so nice for passthrough…

even if gpu passthrough is still “experimental” and a bit of a hack to make work… everything else tho…
basically just select it and pass it through to the vm of choice… and it just works… or has thus far in what i’ve tried…
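to give an idea of how little there is to it on proxmox — a rough sketch, where the device address 01:00.0 and vm id 101 are just example values, and gpu passthrough in particular tends to need extra steps on top of this:

```
# 1) enable the iommu on the host kernel command line, e.g. in /etc/default/grub:
#    GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on"    (amd_iommu=on for amd)
#    then: update-grub && reboot

# 2) find the device address and hand it to the vm
lspci -nn | grep -i nvidia
qm set 101 -hostpci0 01:00.0,pcie=1    # pcie=1 needs the q35 machine type on the vm
```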

#This is not to burn you down to the ground, but merely a reminder of how fast shit goes…

virtualization is “still” kinda new, takes a long time for a technology to really mature…

Really?
Remember, it is 2020 now :wink:

That maturity, stability, and versatility the Xen Project created in its 15 years are second to none.

Initial release 2008

Initial release March 23, 2001; 19 years ago

I could go on…
It really is older than one might think.
We’re getting old, that’s it! :stuck_out_tongue:


a technology really works best when it's about a century old, then it's all butter smooth… just look at most of the highly implemented technologies in history… ofc it needs users for getting development… maybe that can accelerate the process in today's world…

sure, after that time there will often start to be replacements that might be considered…
ofc integrated circuits have taken the world by storm, and still they have many issues… only today have they mostly become weatherproof and able to do immense tasks while basically being powered by a watch battery.

i just mean that when a tech is finished there will be no crashes or issues or configuration bullshit… unless one wants that… no meaningful amount of people needed to take care of making sure stuff works…

no continual repairs, replacements and upgrades… stuff just works… unless some idiot engineer decides he is smarter than a century of development and tries to reinvent… the “wheel”… which usually doesn't go well or turns it into an artistic project lol, design over function.

ofc then there is the whole… they don't build them like they used to factor lol… not sure if or how that would apply to programming tho… if virtualization can be considered programming and not actually a development on the integrated circuit technology…

I mean, it was nice to have you around, but it seems you have decided that anything you use to be able to access this forum in the first place isn’t going to be reliable for several more decades to come. I wish you well with removing all this unreliable modern technology from your life. There’s nothing wrong with reading a good book next to the fire place.


planes are one of the most recent techs to become 100 years old… it's difficult not to say that technology is pretty well functioning now… but go back 50 years and it was expensive, and tho reliable it required skilled pilots and navigators… today, well, you should get the point, planes can take off and land themselves…

even integrated circuits are still rapidly developing, becoming more and more amazing every day…
because we are basically still trying to figure out what the limits of the technology are, what the most optimal ways to use it are, and how to get the best economics out of it…

doesn't mean it doesn't work, just means there is still a long way to go before it stops changing a lot and becomes something like a kitchen cooktop, stove or whatever one wants to call it, a range… sure from time to time a new thingamajig is switched out with another… some new coatings are added… but the sizes are very universal, the power input is universal, the maintenance is basically nonexistent and it can usually last for decades…

and why would we change it… it’s been formed into what it is over a century, not saying that it won’t get better… so long as there is development / users behind it.

but take people from 50 years ago and they can use the device without needing a manual.
that's a… maybe mature isn't the right word for it… i could agree to that…

but yeah the development phase from the initial invention to that point might not exactly be defined by a century… it may be defined by how many human hours of development or hours of use have been put into something… but then again… the world clock ticks slowly at times and introduces many curve balls along the way.

You just defined new as less than 100 years old when talking about software… That makes the term meaningless, as by that definition everything is new.

The products @mrkeyboardcommando mentioned are far from the first virtualization products. In fact I would call them the mature generation. And even those products are far more than a decade old. The first virtualization software dates back to the 1960s. In the meantime the entire cloud infrastructure that all connected software runs on these days has been built on top of virtualization. To call something that has been so universally adopted for over a decade “new” is just… it doesn't make sense.

Words have a certain context. On the scale of our planet's existence, the human race is pretty damn new. By that measure everything humans have ever created is new. When talking about new in the context of software, there is no reasonable definition in which virtualization is anywhere close to new.

Anyway, that should be enough derailment for now.

Ps. You should look into induction cooking :wink: And you thought the stove was settled tech.

so… you still drive steam cars? only ride steam locomotives?
Throw your garbage on the streets waiting for the cleaners to sweep it up?

do you still use horses to farm your fields?

how do you upload these messages then? the internet is not 100 years old yet? do you have a typewriter with internet access???

Virtualization has been around for more than 10 years… there were even betas in the 90’s when i was a kid and was only interested in lego.

I do understand if one says docker is too new to understand correctly… but virtualization has been commonplace in the IT sector for the past 2 decades.

i do think things were better a couple of decades back… but virtualization is not one of those things.
cars… those were better!

and then about the planes landing themselves today… i've got news for you…

Lockheed L-1011 TriStar - Wikipedia

That plane could land itself in 1970…

still, this post is not to bash you… but you are very negative about things… cheer up! some things are getting better every minute, not everything.

1960’s virtualization…

My parents were born after '64… so i doubt i would have known that.

Thanks for the info, really thought it started in the late 90’s

It definitely predates me by a wide margin too, but wikipedia knows all.


it isn't meant like that… by it being a more or less fully developed / mature technology i mean that the ways a technology changes become very different in the later stages…

also when i look at a technology i don’t regard every new bolt and wire as an innovation, i look at the fundamental purpose of the technology…

like if we take something like a plow, the technology in that is the plow itself and how it cuts into the earth and rolls the earth… that was finished a long time ago, and now that technology is being combined with other technologies, such as increasingly bigger tractors and better materials, to perform the job it was meant to do better and faster…
but it's doubtful that the basic shape of a single plowshare will change much, compared to how much it changed to get to the point where it stopped.

in regard to the horseless carts, well the suspension in a car is very much the same as on many centuries-old, even millennia-old high quality horse carts, and the horseless cart went through a phase of steam power, just like today we use electrons in our integrated circuits, while in the future it will be photonic circuits because they are superior, and they will also be in the class of integrated circuits… but i digress…

the car is unlikely to change much, now it’s the details and how we use them,
much like the induction stove, there really isn’t much new about it… just a different heating element, the stove or range will look the same and perform the same function.
because the function they serve hasn't changed, and until culture changes there isn't much point in changing a well developed design…

the point isn't that 100 year old technology is superior, but the world is powered by steam turbines, or turbines in general, again a century old technology, and we use it because it's well understood and very efficient, because it's highly developed…

not unlike modern solar cells reaching closer and closer to 90% or whatever is theoretically possible, i think the best one can buy for a right arm is 68% while the cheaper regular ones are like 15-20%, maybe a bit more… it doesn't make the technology bad, but when it's about a century old then usually it stops changing much, as people move on to other developments.

look at heat pumps and how they are gaining favor because we have become advanced enough to produce them in much larger quantities and much larger sizes, also an old technology that was promoted by us wanting fridges that didn't need to be filled with ice on a near daily basis.

it's not that a 100 year old technology is better… it's that when a technology has a pedigree of a century of active development, then it becomes a near perfected technology, and then it usually gets coupled with something else to make other more advanced technologies.
and it rarely changes, because after a century spent on improvements, there is often not much left to improve…

ofc that doesn’t mean stuff cannot be better… just like cars get better wheels, better engines… but it’s still a car shaped box with an engine that takes you from a to b in exchange for fuel…

the century development thing is why i went with proxmox… because it's debian based and thus on the most actively developed platform today, while i would rather have wanted to install XCP-NG, but i didn't think it was going to win out and thus in 10 years i might have to change again…

with proxmox my gamble is that i might never need to change the server OS i use because i can pretty much predict that it should win out… ofc i will suffer a bit for that now…

and if proxmox fails then it's basically just an add-on to debian anyways and what i've learned will not all be wasted.

i dunno if the century rule i've been applying to many other technologies can be used for stuff like virtualization… it may very well be that it's more a factor of how many people are actively developing and using something rather than a simple fixed time scale… but it's usually pretty spot on for most technology…

and when i count a century it's not a century of proxmox, it's a century of virtualization… also one has to take into account that a lot of development doesn't really happen until stuff starts going mainstream…

not trying to look for arguments or make stupid statements, i've studied the development of technologies and the century thing is a takeaway from that…

but i suppose i still need practice in explaining it well enough and in a simple manner, along with needing to actually have some sort of proper name for it…

ofc i'm also new to that so it takes a bit to process and develop the concept, which usually ends up in one either seeming mad or stupid lol

i like a good open minded discussion tho and that people think for themselves… :smiley:

meh totally forgot to get back to the planes… tsk alas this is long enough already

yes… i got what you wanted to explain, i guess.

But do not forget that when something has been available for like 100 years, people might already have moved on, or be in the process of it…

I’m going to make a guess here.
I do not think there will be something like virtualization in 2066, at least not mainstream.
People will have moved on, onto newer inventions.
It starts with docker… moves to serverless… in a couple of years it won't even be normal to host it yourself, you will need others that do it at megascale.

Just a thought…

It is likely that virtualization will be very, very mature and pretty much at the end of possible development.
But that does not mean it will still be relevant.

Look at cars as an example
Cars have been around for a very long time. (1886 - Karl Benz - Benz Patent-Motorwagen)
That was the first car with an internal combustion engine.
People spent a lot of time and money perfecting the car and its engines.
We moved to enclosed cars, so you won't get wet when it rains.
Then they thought about more power and 4 stroke engines.
Then some guy thought of a V8 (1914-1935 Cadillac L-Head, for the ones interested).
A couple of years ago, maybe even a couple of decades, hybrid electric cars came along (Prius :nauseated_face:).
People said it was done for the “Old” internal combustion engine.
Now we have these plastic monstrosities called Teslas, and they even drive themselves.
They are fully electric… and again people tell us to stop using “Old” cars.

One day, everyone will wake up in the morning, walk out his front door and just take an available flying thing.

People have already moved on to newer designs and newer ways of things.
Nobody wants a 1886 - Karl Benz - Benz Patent-Motorwagen as a daily driver.
People “Need” their power steering, their air conditioning, their GPS etc.

People will move on.
And that happens in all sectors and on all fronts.
If you like it or not! (Spoiler, I…DO NOT LIKE IT!!! GIVE ME MY OLD CAR BACK!)


but maybe storj storagenode heads the other way, toward a single function appliance sharing a single disk.
In your example, a Model T with a 10,000 litre gas tank


@mrkeyboardcommando
yeah my point is that when we switch the car out for a flying thing, then the car we leave behind will look very much like the ones we have now… because it's an optimal design… ofc unless some new engine comes along that requires a redesign of the whole thing, but still aerodynamics won't change and nor will the size of people nor what we move around… thus the car doesn't really change, at least in some aspects… stuff always changes, that is almost a force of nature, if not so…

@andrew2.hart the new 100tb ssd's sure seem to point a bit towards that possibility…
doing stuff distributed only makes sense so long as we don't have computers that run for centuries… and i mean mainstream used computers… ofc there are exceptions of computers that are already getting their way up there…

if a few systems exist and they are basically indestructible macrocomputers with more storage and computation than the world could ever use, then what is the point… but yeah not really there yet… and maybe we never will be, but it's very possible that the mainstream use case will end up being something like that… because only physicists and theoretical mathematicians might eventually be the few scientists that actually exceed the computational requirements of any system… since it runs into the whole, you cannot simulate reality inside reality because reality will always be more complex than the simulation…

if we are thinking really long term… 100tb ssd's tho… and with stuff more and more stored in the cloud, eventually the compression of this cloud data becomes immense, because there is no point in keeping duplicated data… if the storage system never makes mistakes that cannot be corrected… so at one point we might actually hit a point where storage becomes so cheap and the cloud compression so great that we all of a sudden have a million times the storage we need…

@ no one in particular…

god damn proxmox… why the … … do they have to put the … … … … shutdown button in the exact same place as the vm shutdown button… and why doesn’t it have a warning…
almost 20 days without a reboot tho… :smiley:

which was nice… my l2arc was just starting to get well trained…
and then killed by too rapid mouse fire… and not the first time i've done that either…
more like the 5th

i wouldn't mind a decade more of maturity on proxmox… then i'm sure it would be much nicer… i was also so close to just installing XCP-NG when i picked proxmox… and often i do kinda regret it… don't feel like starting over on a new distribution tho… not now when i'm finally just getting sort of comfy with this one…

anyone tried both proxmox and XCP-NG and if so what is the verdict?

i tried proxmox… and XenServer… before it was killed off / closed off.

AFAIK XCP-NG is basically still the same as XenServer, just upgraded.

i'm planning to test run both… as my plans for more nodes are changing by the minute.

NEED TO MAKE THE RIGHT CHOICE :roll_eyes:


yeah, i think it's debian simply because it's the most actively developed, and thus proxmox, or whatever else one wants to use on debian for virtualization, i think will be the best long term choice…

but XCP-NG is much more mature, proxmox has so many things that just make one want to destroy stuff… so it might very much come down to whether one wants new features or can make do with the ones there are, and whether one plans to stick with it for, say, a decade…

Wendell from Level1Techs also says XCP-NG is a very good option, he barely even mentions proxmox, if memory serves, in a discussion about virtualization, but i think that was because it was enterprise minded…

it seems that the consensus is that if one is doing enterprise level stuff or leaning towards going into the enterprise, then one should focus on ESXi or XCP-NG, tho he mentions that XCP-NG, tho he might call it xen, is kinda dying… but it's still very good…

i think proxmox has a bright future, it's getting popular, it's on debian… the real question is… will what is learned from using it from now on be worth the pain…
i might go into my proxmox gui and simply cut the function of the server shutdown button… i can do that from the terminal… and i hate clicking it by mistake…

moving virtual disks between vm's is also just annoying… i mean i get that they built the gui so that if one uses the gui, then basically nothing can go wrong… which is nice…
and it's stable as a rock… tho i have managed to make it crash from weird configurations that apparently proxmox wasn't stable with… on MY VM's, and i don't really get why a vm can crash the server… but maybe it's because of all my paravirtualization.
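for reference, the way i've seen the disk move done outside the gui is roughly this — sketched for a zfs backed storage called local-zfs and vm ids 101/102, all of which are just example values, so double check the actual volume names with zfs list before trying anything like it:

```
# detach the disk from the source vm (the volume itself stays on the storage)
qm set 101 --delete scsi1
# rename the underlying zvol so it belongs to the target vm id
zfs rename rpool/data/vm-101-disk-1 rpool/data/vm-102-disk-1
# let proxmox rescan the storage, then attach the volume to the target vm
qm rescan --vmid 102
qm set 102 --scsi1 local-zfs:vm-102-disk-1
```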

those paravirtualization drivers tho… my god, how can the resources used for vm's be next to nothing…
if i run a regular vm i can feel it's kinda sluggish… maybe that's just my server being old… but kick the paravirtualization drivers into gear on the vm and it runs through Remote Desktop like a bare metal install…

and dedup on ram… they claim this enables you to run 3 times as many vm's on the same host, thus far i've not found any limit for me really… am running 6 vm's currently, more or less constantly, and i do kinda feel it a bit on my ram utilization… i can see my l2arc sometimes gives up a lot of its allocated memory, but i've been experimenting with balloon drivers, or dynamic memory management… where the host and the guests move around and request ram depending on utilization.

so not only is just the ram that is needed actually used, dedup also means that running 1 debian guest is basically the same as running 6 debian guests in ram utilization… ofc then there is the free ram…

with dynamic memory management the ram is registered to the guest but only what is used, and then it can request more ram, which the host then allocates; the host will then try to stay around 80% ram utilization… which is pretty easy with zfs, because zfs will just eat all the ram it's allowed to…
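for reference, the ballooning part is basically just two numbers per vm — a rough sketch for a hypothetical vm id 105, and the guest needs the balloon / virtio drivers installed for it to actually do anything:

```
# max 8 GiB, host may shrink the guest down to 2 GiB under memory pressure
qm set 105 --memory 8192 --balloon 2048
# qemu guest agent helps the host and guest coordinate
qm set 105 --agent enabled=1
# the ram dedup (KSM) runs on the host by itself via ksmtuned; a quick check:
cat /sys/kernel/mm/ksm/pages_sharing
```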

i got 4 windows guests running currently and 2 debian guests, with a full setup of maybe 12 vm's planned
but still kinda developing my infrastructure… i kinda like the dynamic memory management, but i'm having trouble making it work as i think it was supposed to, but maybe i just have too much free ram atm…

my server has 16 threads and 48gb ram, so nothing amazing but enough to play around a bit.

Yeah… that dedupping of the RAM is a main selling point for proxmox.

XCP-NG has more features if one wants to build a cluster.

I use Hyper-V here to run all VM’s, have it in a failover cluster and it has been pretty painless since the start.

I might transfer from hyper-v because of the costs


Sorry All. I've been away and then sick and am just catching up on this thread now, so please forgive me for bringing this topic back from the dead.

In the end I decided on a mix of what was discussed here and settled on running Hyper-V on Windows 10, with 1 Ubuntu VM on it and all of my Storage Node Dockers running on that one VM. Each docker gets its own 6TB drive (no raid) and I have room to expand this system to 10 nodes/drives over time, but I don't expect that anytime soon.
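For reference, running several nodes on that one VM mostly comes down to giving each container its own external port, identity and storage mount. A rough sketch of one node's run command, with placeholder values (the wallet, address, paths, ports and storage size would all need to be adjusted to your setup):

```
docker run -d --restart unless-stopped --stop-timeout 300 \
  -p 28967:28967 -p 127.0.0.1:14002:14002 \
  -e WALLET="0x..." -e EMAIL="you@example.com" \
  -e ADDRESS="your.external.address:28967" -e STORAGE="5.5TB" \
  --mount type=bind,source=/mnt/storj/node1/identity,destination=/app/identity \
  --mount type=bind,source=/mnt/storj/node1/data,destination=/app/config \
  --name storagenode1 storjlabs/storagenode:latest
# a second node would use e.g. -p 28968:28967, its own identity/data mounts,
# ADDRESS="your.external.address:28968" and --name storagenode2
```

Keeping STORAGE a bit below the drive size (e.g. 5.5TB on a 6TB disk) leaves headroom for overhead, which is the usual recommendation.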

If you have windows 10, then you don't need a VM at all. I made the Toolbox, which can install more than 1 windows GUI node on windows. I ran 5-7 nodes on one PC, works fine, minimal overhead.


I played around with your toolbox for a bit when you first talked about it, and based on using it for a few hours it's a great tool, but in the end I wanted to keep all STORJ traffic on its own firewall and its own vlan with no access to my network. I run a separate instance of pfSense for STORJ and then vlan its traffic across my switches. Sure I could just vlan it off of my main pfSense, and in the future I might, but for now it works.

The Windows box is on a management vlan on my main network, but the VMs are on that other network. I have a longer term goal of getting down to one firewall, but with everyone working from home (and one 7 year old who is addicted to youtube) I don’t want to bring the network down at the moment to make a major change.
