Cannot migrate my node

you should set it to zle instead of lz4 for zfs compression
you basically get the same benefits, just none of the overhead; even if it’s not much, it is still there.

Why? ZLE only removes continuous runs of zeroes, as explained here: zfs:compression [XigmaNAS]

indeed it does, the storagenode data is encrypted and thus cannot be compressed…
when running lz4, zfs will continually attempt to compress every file it stores, which gives you extra compute and resource overhead.
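if you want to try it, the switch is a one-liner; the dataset name below is just an example, and the setting only applies to data written after the change:

zfs set compression=zle tank/storagenode    # new writes use zero-length encoding only
zfs get compression tank/storagenode        # confirm the setting took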


Here is the result of

df -HT


So… Apparently I’m right, it’s my ZFS that is being used… Why won’t the EXT4 be removed?

Well okay :slight_smile: I’ll wait and see :slight_smile: my compress ratio is now 1.04x :yum:

The ext4 volume is used by your root. If you somehow removed it, your OS would be gone.

it will end up around 1.05x, and you will see the exact same for zle… so
you can make the compression seem to improve by changing the recordsize, but it’s just the ratio of empty space to block size / recordsize that is changing.

it’s pretty simple… the storagenode data is 99.9% blobs, and blobs are encrypted customer or test data; encryption’s goal is to make data look random.

compression is about finding patterns to create smaller structures that can be expanded back into something larger.

basically you’ve got two polar opposites; for any non-encrypted data you are right and there will be a saving, which is why zfs runs lz4 by default, because it’s almost “always” better with it.

on this data it’s just a waste of compute to have zfs continually trying to compress each uploaded file; it doesn’t take much cpu… but it still takes some.

i tested all this a year or so ago, but it can also be logically deduced.
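the test is easy to reproduce with random data, which looks the same as encrypted pieces to the compressor; dataset and file names below are just examples:

zfs create -o compression=lz4 tank/comptest                        # scratch dataset with lz4
dd if=/dev/urandom of=/tank/comptest/random.bin bs=1M count=1024   # 1 GiB of random data, behaves like encrypted pieces
sync                                                               # flush it out before checking
zfs get compressratio,logicalused,used tank/comptest               # ratio will sit around 1.00x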

Haha no… It’s a third one. The EXT4 volume/drive that I want to remove is not listed by the “df” command because it is (should be) unused :slight_smile:

Ok, I’ll have a look once my drive problem is resolved x’)

Hello, any idea about that ? :slight_smile:

Solved by temporarily mounting the ZFS partition on another mountpoint and then removing the previous EXT4. Thanks for your help! :slight_smile:

you can unmount zfs datasets, which can be rather confusing when one first runs into it.
there is a zfs automount which in most cases fixes mount issues.

otherwise there is zpool import poolname
or something like that; it’s also pretty easy to use as long as the host doesn’t create folders inside the mountpoint.
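roughly, it looks like this; pool name and mountpoint are just examples:

zpool import poolname                                       # import a pool the host hasn’t picked up
zfs mount -a                                                # mount every dataset that isn’t mounted yet
zfs set mountpoint=/mnt/storagenode poolname/storagenode    # or point a single dataset at a different mountpoint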


Hello, my node has now been running for almost a month and my ratio has been at 1.08x for a few weeks:

(screenshot showing the 1.08x compressratio)

I haven’t tried ZLE but I will :slight_smile:

neither will give you any different free space… zle will just waste less compute and memory.

lz4 will ofc give you a slight bit more space, but looking at the blobs folders, which are 99.9%+ of the capacity, lz4 won’t save any meaningful space.

if you change the recordsize you can make the compressratio change…
because the compression ratio is related to how the pieces of data fit into the records, and the empty space in them is what gets counted as compression.

tested it a lot when i first started…

and it also makes sense when one understands storj data is encrypted, and encryption tries to make data look random, while compression looks for structure in data to save space…
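if you want to see that effect for yourself, change the recordsize and watch the ratio drift as new data comes in; this is just a sketch, the dataset name is an example, and only data written after the change uses the new recordsize:

zfs set recordsize=1M tank/storagenode              # new pieces get written in 1M records
zfs get recordsize,compressratio tank/storagenode   # compare the ratio before and after new uploads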

Hello! I want to trust you but… I’m now at 1.21x :sweat_smile: :rofl:

(screenshot showing the 1.21x compressratio)

you clearly don’t understand the numbers you are seeing…
the compression ratio shows how much space would have been required if writing full records.
when compression is enabled, only the required sectors on the drive will be used, plus overhead.

if you change the recordsize you will see different compression ratios, i’ve literally copied 7TB nodes on different recordsizes to test it.

the best way to really gauge any advantage you might have is to do a
zfs get written storj/temp

and then compare that to your node dashboard.
the node dashboard will show more data stored than what zfs says you have written to disk;
you are chasing shadows…
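to get the fuller picture in one go you can pull a few properties together; storj/temp is just the dataset name from the command above, use your own:

zfs get written,used,logicalused,compressratio storj/temp   # physical use vs the uncompressed logical size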

also keep in mind that the written-to-disk number doesn’t include your disk overhead, which is the space used for partitioning, usually about 7%, but it will depend on your sector sizes.

also keep in mind that not all filesystems work like zfs, so storage on ntfs or ext4 will not compress the unused part of their blocks to use fewer sectors.

so basically in that case the remaining space is filled with zeros, which is why ZLE is called Zero Length Encoding.

but yeah i chased those shadows too for like half a year…
zfs compression can be very useful, but you cannot compress pieces and that’s a fact.

what you are seeing is the empty space in the records (zfs name for blocks) being saved,
which depends on your recordsize and the storj file sizes.

if you won’t listen to me, well then maybe somebody else can explain it better.
@kevink


The compressratio just shows how much space your files would end up using if there were no dynamic recordsize. For example, a 9KB file would take 128K of space (with recordsize 128k). That’s how the compressratio is calculated. There’s no real space saving.
(Maybe there is compared to other filesystems, but idk about that. There’s no real compression compared to the real file size.)


Hello,
Thanks for your explanation, I just changed it to ZLE :wink:
Thanks


yeah ZLE really is the only way to go, at least for storing encrypted data for others.
LZ4 is pretty great for everything else, and then i use gzip-9, i think it is, for log file storage in its own dataset with a 1M recordsize

logrotate and its own compression will ofc compress better, but that also requires you to decompress the entire file to access parts of it… while zfs gets less efficient compression since each record is compressed on its own, but then it can also retrieve data in rather small chunks, which seems great for scanning through logs… at least thus far…
i’ve only been testing it for a few months.
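for reference, a log dataset like that can be created in one go; the pool name and mountpoint below are just examples:

zfs create -o compression=gzip-9 -o recordsize=1M -o mountpoint=/var/log/archive tank/logs   # heavy per-record compression for log storage

then just point whatever writes or rotates your logs at that mountpoint.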