Post pictures of your storagenode rig(s)

Looks like they opted for a big.LITTLE implementation with 2x A72 performance cores and 4x A53 efficiency cores. I wonder what that does for overall performance compared to the RPi4’s 4x A72 performance cores. It might require some effort to make sure there is proper big.LITTLE support in the OS you’re using. It should be able to run all 6 cores at the same time, right? Do you happen to know how core limitations work on such an architecture? If you limit a process to 2 cores, will it still automatically optimize which cores are used for it?

I am using the official Armbian image, where it seems to be properly implemented. Actually, I don’t know how it’s implemented and haven’t done any performance testing regarding this. But I just did a simple test, and it seems the workload sticks to the core it was dispatched to initially:

arkina@helios64:/storage/bay3/internxtB/.xcore$ sudo docker run -it --name cpustress --rm containerstack/cpustress:armhf --cpu 1 --timeout 30s --metrics-brief
Unable to find image 'containerstack/cpustress:armhf' locally
armhf: Pulling from containerstack/cpustress
4cf805808e4b: Pull complete
e97da1f8c4d9: Pull complete
Digest: sha256:b6fccd863ae58fd5bc0dac3c3d7ddda07c7db165793fad558a65d36bc23d31fc
Status: Downloaded newer image for containerstack/cpustress:armhf
stress-ng: info: [1] dispatching hogs: 1 cpu
stress-ng: info: [1] successful run completed in 30.79s
stress-ng: info: [1] stressor      bogo ops real time  usr time  sys time   bogo ops/s   bogo ops/s
stress-ng: info: [1]                          (secs)    (secs)    (secs)   (real time) (usr+sys time)
stress-ng: info: [1] cpu               1084     30.79     26.52      4.06        35.21        35.45

But every time it’s started, it’s another core, so I assume the core is only picked initially, not changed during the running workload.

1 Like

I think you would need to run the stress test locally instead of running it in Docker. Or simply assign 2 CPUs instead of 1.
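To poke at this without Docker in the way, the affinity mask can be set and read directly. A minimal, Linux-only sketch using Python’s `os.sched_setaffinity`; which CPU numbers map to the A72 vs. A53 clusters is an assumption (on the RK3399 the big cores are usually CPUs 4–5), so check `/proc/cpuinfo` on the board first:

```python
import os

# Remember the cores this process is currently allowed on.
original = os.sched_getaffinity(0)

# Pin this process to a single core (here: the lowest-numbered one we
# already have; on the RK3399 the big A72 cores are usually CPUs 4-5,
# but that numbering is an assumption -- verify with /proc/cpuinfo).
one_core = {min(original)}
os.sched_setaffinity(0, one_core)
assert os.sched_getaffinity(0) == one_core

# Restore the original mask; the scheduler may migrate us again.
os.sched_setaffinity(0, original)
```

The equivalent for a container is `docker run --cpuset-cpus=...`, which pins to specific cores instead of `--cpu`’s “any N cores” budget.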

1 Like

That is so pretty. I have the original Helios4 device and although I’ve messed around with it a few times, it’s been mostly sitting disassembled and gathering dust. I couldn’t get anything to run stably on it for some reason. The Helios64 version is probably a huge improvement on the Helios4. Glad to hear you’re able to run so many services with it!

1 Like

Hah @Arkina, you beat me to it! I’m currently experimenting with different filesystems on my Helios64 and trying to figure out what all I can do with the 4 GB of RAM (possibly K8S, although it will likely be K3S that I end up using).

Ah, another Helios user :slight_smile: I went pretty straightforward with ext4, as I wanted to have it up & running ASAP ^^. What are you referring to with K8S (Kubernetes?)/K3S?

I never hit the 4 GB; I think I am limited more by the CPU, and therefore I don’t want to put more workload on it (one node is not yet migrated).

Please, take into consideration:
https://forum.storj.io/tag/btrfs
https://forum.storj.io/tag/zfs

I’ll add some points to Alexey’s references:

No matter what filesystem you choose, you need to research what can and can’t be tuned, and then consult a few long-time users of that filesystem for advice on how you might tune yours. Predominantly, you’d want to tune for being more of a “file server” with a solid block size of up to 2 MiB. Tuning your filesystem to “better support” the sqlite database transactions is foolish, since to the filesystem they are a small, yet chatty and important, part of its overall span.
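As a concrete sketch of that kind of tuning (ZFS here, pool and dataset names are placeholders, not a recommendation for your setup):

```shell
# Storj pieces top out around 2 MiB, so a larger record size suits the
# "file server" workload better than the 128K default:
zfs set recordsize=1M tank/storagenode

# The sqlite databases are small and chatty; rather than tuning the
# whole pool for them, give them their own small-record dataset:
zfs create -o recordsize=16K tank/storagenode-db
```

The point stands for any filesystem: tune for the bulk of the data, and isolate the odd workload instead of compromising the whole span for it.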

The analogy that comes to mind is this- would you just drop any old engine into a project car for a drag strip, or would you go talk to your uncle/grandpa/best friend that’s been wrenching for years and knows a thing or two before just buying a big block chevy for all the raw quoted horsepower? They may end up recommending a LS crate engine with a few bolt ons to get the power and performance you want instead.

I myself use ZFS as my backing storage technology, as does @SGC, and it works fairly fine so long as you don’t mess with it rudely: do your research and find out the best practices for that filesystem. I actually advise against people using EXT4 at work, mostly due to the bugs and performance issues that keep popping up with kernel or OS-specific updates, and the data loss that follows. Small-RAM systems I usually point toward XFS for their filesystems (yes, the ones I run on RHEL/CentOS).

2 Likes

K8S is the full-fat Kubernetes installation; K3S is a reduced Kubernetes install. K3S needs significantly fewer resources but gives up some things as well (it drops less-used extras that can be replaced by add-ons, and loses HA support; K3S by default uses SQLite, but there are ways to get HA back).
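For reference, getting a single K3S node up is a one-liner, and the “ways to get HA back” the post mentions include the embedded etcd datastore (commands per the K3S docs; run on your own risk assessment, not mine):

```shell
# Single-node K3S install (defaults to the embedded SQLite datastore):
curl -sfL https://get.k3s.io | sh -

# First server of an HA cluster: replace SQLite with embedded etcd,
# so further servers can join and quorum provides the HA.
curl -sfL https://get.k3s.io | sh -s - server --cluster-init
```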

1 Like

Yeah, I feel it’s a bit like quantum mechanics… if one thinks one understands data storage, then one is most likely wrong… lol

1 Like

Ok guys, now back to some more pictures!

1 Like

Currently 3 nodes, one on each 3 TB disk. Looking to add a node on the 6 TB disk, and I can add another 5 by converting the three 5.25 in bays to an HDD enclosure if the ingress demands it.

OS on SSD, AMD quad-core, 8 GB RAM.

6 Likes

Yeah…more weird nodes :slight_smile:
For example, both my experimental nodes!

7 Likes

Funny one :smiley:
Is your node storing data on an SSD? :thinking:

@peppoonline Oo, a C4, how are you liking it so far?

2 Likes

That’s an HC4 :slight_smile:
Yeah, it’s a great device, but I am selling it at the moment…it doesn’t fit in my scenario :wink:

2 Likes

Not quite :slight_smile:
The RPi4 has a cheap $25 SSD for the OS… I hate SD cards :slight_smile:

And the HC4 has an “old” 4TB SSD for now… with mergerFS I can plug in a big HDD at any time.
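That mergerFS trick is just a union mount; a hypothetical invocation (all paths are placeholders) pooling the SSD with a later-added HDD under one mount point:

```shell
# Pool two drives into one view; category.create=mfs sends new files
# to whichever branch has the most free space, so the big HDD fills
# up first once it's added.
mergerfs -o allow_other,category.create=mfs /mnt/ssd:/mnt/hdd /mnt/pool
```

The node just sees one growing filesystem at `/mnt/pool`, no migration needed.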

The top one wins for the cyberpunk look :smiley:
That’s all that matters, right?

3 Likes

Oh, I dunno. If I was a customer entrusting my precious data to Tardigrade that photo would seriously freak me out! :sweat_smile:

2 Likes

I mean, that’s part of the beauty of the service: each setup has but one chunk of the 80 per segment, and only 29 chunks are really required to read it (IIRC).
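The arithmetic behind those numbers, using the 80-of-which-29-suffice figures from the post above:

```python
pieces_total = 80   # pieces (chunks) generated per segment
pieces_needed = 29  # pieces required to reconstruct the segment

# Expansion factor: raw storage consumed per byte of customer data.
expansion = pieces_total / pieces_needed
print(f"expansion: {expansion:.2f}x")  # -> expansion: 2.76x

# How many piece-holding nodes can vanish before the segment is lost.
print(f"tolerated losses: {pieces_total - pieces_needed}")  # -> tolerated losses: 51
```

So any one rig, however cyberpunk, holds under 4% of what’s needed to rebuild a segment.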

5 Likes