The way I understand it, @s-t-o-r-j-user stated a hypothesis, not a fact. Note how they say:
cant say much about its influence on
storagenode
operations
I assume at this point you believe everyone who states a hypothesis also has to test it?
That is where we differ. I think it is bad form to just bombard others with papers or a 1-hour YouTube link. Don't get me wrong, I very much like sources, but in my opinion that does not prevent you from writing your own thoughts in one or two sentences.
Not gonna do that. You don't have a say in that.
Yeah, at first. Then I said "My guess none (influence)", to which he replied with "you are probably very wrong".
I made a statement regarding my hypothesis after conducting preliminary tests with RDP sessions, which use a mix of TCP, UDP, and QUIC, along with several SSH streams. A quick look at my dashboard potentially implied that everything was becoming much cozier again. :-) EDIT: Of course storj load was also present on the machines, along with a few other things.
I consider you a friend of mine so I had to think for a moment, and after this moment, I hope you do not mind me asking, what's the problem @Toyoo?
Just that I'm rather allergic to confident technological statements that are not supported by measurements. It's really easy enough to add a "My guess is…" or "I believe…" to a statement to make it clear it's just an opinion, and not absolute truth.
That's one way of doing it, out of the many correct ones. And if I am recalling this whole conversation correctly, one should not have any reservations about me in this regard. I am not very enthusiastic about providing completely unnecessary explanations. Anyway, this whole conversation unfortunately got really off-topic.
Technically we're competitors trying to get as much Storj data as possible
TBH, I am mostly in it for some IT fun, so kindly do speak for yourself. :-)
Back to topic, here is how I plan on moving forward.
To speed up the transfer, I will stop the node. That should not get me disqualified.
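(For reference, and assuming a Docker-based node, which may not match every setup, stopping it gracefully looks roughly like this; a few hours offline should only dent the online score, not disqualify the node.)

```
# rough sketch, assuming a Docker node with the container name "storagenode"
docker stop -t 300 storagenode   # give it up to 300 s to finish transfers and close the databases cleanly
# ...do the copying...
docker start storagenode
```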
I have a group of four drives and two drives that are identical. I will use single drives for the pools.
pool1: just a plain disk, ARC only. This is where I will initially copy the data to with rsync.
pool2: has a special vdev for metadata.
pool3: has reboot-persistent L2ARC.
pool4: is ext4, not really a pool.
All ZFS datasets use recordsize=1M, sync=disabled, atime=off and lz4 compression (rough shell equivalents are sketched below).
ext4 defaults and atime off? Any suggestions?
Database is always on the node SSD.
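Roughly what this boils down to on the shell, as a sketch; the pool, dataset and device names below are made up, and on TrueNAS most of this happens in the GUI anyway:

```
# pool1: plain single-disk pool, ARC only (device names are placeholders)
zpool create pool1 /dev/sda
# pool2: same, plus a special vdev on an SSD for metadata
zpool create pool2 /dev/sdb special /dev/nvme0n1
# pool3: same, plus an L2ARC cache device
zpool create pool3 /dev/sdc cache /dev/nvme1n1
# make the L2ARC survive reboots (OpenZFS 2.0+; on by default in recent releases)
echo 1 > /sys/module/zfs/parameters/l2arc_rebuild_enabled

# identical properties on every ZFS dataset that holds node data
for ds in pool1/storj pool2/storj pool3/storj; do
    zfs create -o recordsize=1M -o sync=disabled -o atime=off -o compression=lz4 "$ds"
done

# "pool4": a plain ext4 disk, defaults plus noatime
mkfs.ext4 /dev/sdd
mount -o defaults,noatime /dev/sdd /mnt/pool4

# the node databases stay on the system SSD (storage2.database-dir in the node config)
```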
Steps:
if this is a VDEV, it's a pool, but stacked: ext4 above ZFS
None of this is a vdev!
These will all be datasets. I don't believe zvols to be useful for any kind of data except VMs; the fixed volblocksize has too many downsides. No, these are all datasets, except "pool4", which is just an ext4 drive.
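To spell out the difference, as a minimal sketch with made-up names: a zvol is a block device with a fixed volblocksize that you would put another filesystem on top of, while a dataset is a ZFS filesystem where recordsize is only an upper bound, so small files still get small blocks:

```
# zvol: fixed volblocksize, exposed as /dev/zvol/pool2/vmdisk (good for VM disks, not for a blob store)
zfs create -V 100G -o volblocksize=16K pool2/vmdisk

# dataset: regular filesystem; recordsize=1M only caps the block size per file
zfs create -o recordsize=1M pool2/storj
```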
Current status:
- copy data to pool2 with replication
Done! It was also a nice refresher on how TrueNAS can create keypairs in the web GUI. But because you have to store the keys in the user's home directory anyway, this is pretty pointless and you are better off doing it in the shell to begin with. Seems like a strange design to me, but OK, fine.
- reboot
- run test twice on pool2.
Skipped these two steps to do this first
- copy data to pool1 (plain rsync; rough shell equivalents of both copy steps are sketched after this list)
That way I hope to have fewer differences between the two datasets.
Ingress is stopped by limiting the 5 TB node to 1 TB, but maybe some deletions will still skew the results for pool3 and pool4 a little bit. This isn't a scientific paper.
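For completeness, roughly what the two copy steps look like from the shell, as a sketch with made-up dataset and path names; in practice the replication was configured in the TrueNAS GUI:

```
# pool2: ZFS replication, i.e. snapshot plus zfs send/receive (TrueNAS wraps this in SSH)
zfs snapshot sourcepool/storj@migrate
zfs send sourcepool/storj@migrate | zfs receive pool2/storj

# pool1 and the ext4 disk: plain rsync of the node's storage directory
rsync -a --info=progress2 /mnt/sourcepool/storj/ /mnt/pool1/storj/
rsync -a --info=progress2 /mnt/sourcepool/storj/ /mnt/pool4/storj/

# ingress stays off because the allocated space (storage.allocated-disk-space in config.yaml,
# or the STORAGE variable for Docker nodes) is set below the node's current usage
```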
Vanilla. :-)
?
Anyway, first run went through.
So for pool2 with special vdev, first run with cold ARC took from 07:28:49 to 08:56:00.
Or in other words, 1 h 27 min for 5.5 TB.
The second run is currently running. Looks like after 1 min the read speed falls from 50 MB/s back to 20 MB/s. 20 MB/s is exactly the speed of the first run, so I expect that initial burst to be the ARC.
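If anyone wants to check whether that initial burst really is the ARC, the usual tools are arcstat and zpool iostat; a sketch (interval in seconds):

```
arcstat 5                  # ARC size, hits and misses every 5 seconds
zpool iostat -v pool2 5    # per-vdev read/write ops and bandwidth, including the special vdev
```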
Why is there no disk busy graph anymore in TrueNAS Scale? I really liked that in TrueNAS Core.
I don't know, I joined this topic mostly out of interest in Solaris ZFS; besides, TBH, it's hard to track the methodology and your developments; what's the summary, if I may ask?
I'd like to add that I simply can't wait for the results. :-)
Still don't understand what Vanilla means in this context.
But to answer your question, it is still a work in progress.
The second run just finished in 1 h 26 min, so warming up the ARC did not really help any further.
Next up is pool1 without special vdev.
Still don't understand what Vanilla means in this context.
In the context of networking or in the context of storage or both?
Or maybe there is some other context you have in mind?