I am using the official Armbian image, where it seems to be properly implemented. Actually, I don't know how it's implemented, and I haven't done any performance testing regarding this. But I just ran a simple test, and it seems the workload sticks to the core it was dispatched to initially:
```
arkina@helios64:/storage/bay3/internxtB/.xcore$ sudo docker run -it --name cpustress --rm containerstack/cpustress:armhf --cpu 1 --timeout 30s --metrics-brief
Unable to find image 'containerstack/cpustress:armhf' locally
armhf: Pulling from containerstack/cpustress
4cf805808e4b: Pull complete
e97da1f8c4d9: Pull complete
Digest: sha256:b6fccd863ae58fd5bc0dac3c3d7ddda07c7db165793fad558a65d36bc23d31fc
Status: Downloaded newer image for containerstack/cpustress:armhf
stress-ng: info: [1] dispatching hogs: 1 cpu
stress-ng: info: [1] successful run completed in 30.79s
stress-ng: info: [1] stressor     bogo ops real time  usr time  sys time   bogo ops/s   bogo ops/s
stress-ng: info: [1]                          (secs)    (secs)    (secs)  (real time) (usr+sys time)
stress-ng: info: [1] cpu              1084     30.79     26.52      4.06        35.21        35.45
```
But every time it is started, it lands on a different core, so I assume the assignment happens only at dispatch, not during the running workload.
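For anyone who wants to force the placement instead of trusting the scheduler: Docker's standard `--cpuset-cpus` flag pins the container to specific cores. A minimal sketch, assuming the RK3399's two big A72 cores show up as CPUs 4 and 5 (verify the numbering on your board first):

```
# Check how the cores are numbered on your board.
lscpu --extended

# Pin the same stress test to the two A72 "big" cores (CPUs 4-5 on RK3399).
sudo docker run -it --rm --cpuset-cpus="4,5" \
  containerstack/cpustress:armhf --cpu 2 --timeout 30s --metrics-brief
```

With that flag set, the kernel scheduler is only allowed to place the container's processes on the listed cores, so repeated runs should land in the same place.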
That is so pretty. I have the original Helios4 device and although I’ve messed around with it a few times, it’s been mostly sitting disassembled and gathering dust. I couldn’t get anything to run stably on it for some reason. The Helios64 version is probably a huge improvement on the Helios4. Glad to hear you’re able to run so many services with it!
Hah @Arkina, you beat me to it! I'm currently experimenting with different filesystems on my Helios64 and trying to figure out what all I can do with the 4GB of RAM (possibly K8S, although it will likely be K3S that I end up using).
Ah, another Helios user! I went pretty straightforward with ext4, as I wanted to have it up & running ASAP ^^. What are you referring to with K8S (Kubernetes?) and K3S?
I never hit the 4GB; I think I'm limited more by the CPU, and therefore I don't want to put more workload on it (one node is not yet migrated).
No matter what filesystem you choose, you need to research what can and can't be tuned, and then consult a few long-time users of that filesystem for advice on how you might tune yours. Predominantly, you'd want to tune it toward being more of a "file server", with a solid block size of up to 2MiB. Tuning your filesystem to "better support" the SQLite database transactions is foolish, since to the filesystem they are a small, yet chatty and important, part of its overall span.
The analogy that comes to mind is this: would you just drop any old engine into a project car for the drag strip, or would you go talk to your uncle/grandpa/best friend who's been wrenching for years and knows a thing or two before buying a big block Chevy for all the raw quoted horsepower? They may end up recommending an LS crate engine with a few bolt-ons to get the power and performance you want instead.
I myself use ZFS as my backing storage technology, as does @SGC, and it works fairly well so long as you don't mess with it rudely; do your research and find out the best practices for that filesystem. I actually advise people against using EXT4 at work, mostly due to the bugs and performance issues that keep popping up with kernel or OS-specific updates and the data loss that follows. For small-RAM systems I usually recommend XFS (yes, I use it on RHEL/CentOS).
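To make the "file server" tuning concrete, here's a minimal ZFS sketch; the pool name `tank` and dataset name `storagenode` are just placeholders, and `recordsize=1M` is the largest record size OpenZFS allows by default:

```
# Hypothetical dataset layout; adjust pool/dataset names to your setup.
# Large records suit a mostly-sequential file-server workload, and lz4
# compression is cheap enough to leave on even on this class of CPU.
sudo zfs create -o recordsize=1M -o compression=lz4 -o atime=off tank/storagenode

# Verify the properties took effect.
sudo zfs get recordsize,compression,atime tank/storagenode
```

Note the recordsize is a maximum, not a fixed allocation: small files (like those SQLite databases) still get small records, which is exactly why tuning the whole filesystem around them is pointless.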
K8S is the full-fat Kubernetes installation; K3S is a reduced Kubernetes install. K3S needs significantly fewer resources, but gives up some things as well: it drops less-used extras that can be replaced by add-ons, and it loses HA support out of the box (K3S by default uses SQLite, but there are ways to get HA back).
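For reference, one of those "ways to get HA back" is K3S's embedded etcd mode; a rough sketch using the official install script, where the shared token and the hostname are placeholders:

```
# First server: initialize the embedded etcd cluster instead of SQLite.
curl -sfL https://get.k3s.io | K3S_TOKEN=my-shared-secret sh -s - server --cluster-init

# Additional servers: join the first one (replace the hostname/IP).
curl -sfL https://get.k3s.io | K3S_TOKEN=my-shared-secret sh -s - server \
  --server https://helios64-1:6443
```

Etcd wants an odd number of servers, so this really only pays off with three or more machines; a single Helios64 is fine on the default SQLite backend.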
Currently 3 nodes, one on each 3TB disk. I'm looking to add a node to the 6TB disk, and I can add another 5 by converting the three 5.25in bays to an HDD enclosure if the ingress demands it.
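If anyone wants to copy the one-node-per-disk layout, here's a rough sketch, assuming these are standard Storj storagenode containers; the wallet, email, address, paths, and size below are all placeholders to adapt, and each node needs its own identity:

```
# Hypothetical second node on its own disk. Give it a unique external
# port (28968 here) and dashboard port, and mount its own identity and
# storage directories.
docker run -d --restart unless-stopped --stop-timeout 300 \
  -p 28968:28967 -p 14003:14002 \
  -e WALLET="0xYOURWALLET" \
  -e EMAIL="you@example.com" \
  -e ADDRESS="your.ddns.example:28968" \
  -e STORAGE="2.7TB" \
  --mount type=bind,source=/storage/bay4/identity,destination=/app/identity \
  --mount type=bind,source=/storage/bay4/storj,destination=/app/config \
  --name storagenode-bay4 storjlabs/storagenode:latest
```

One container per physical disk keeps a single disk failure from taking out more than one node, which is the whole point of the layout.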