ZFS performance, ARC vs L2ARC vs special vdev

First: according to LawrenceSystems, these performance regressions are mostly gone.
Second: this is exactly why I think this test gives some insight.

I am a big believer in “only use unused resources”.

I personally would not use TrueNAS Scale with STORJ on top of it for multiple reasons.
For one, I want STORJ to run on a different VLAN, behind a rate limiter, and with the DBs on SSDs, which I don’t have on the bigtank machines; the list goes on.

So for me personally, running a STORJ Docker container that accesses a dataset via NFS on a datatank is more realistic than TrueNAS SCALE with STORJ.

And even if NFS did add some latency, the difference between the results is still massive.

Yes.
It used even less, “only” 15GB.
The dataset was set to metadata only.
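In case it is useful to someone else, this is roughly how that looks, assuming it refers to the secondarycache property and using “tank/storj” as a placeholder dataset name:

```
# Only cache metadata (no file data) from this dataset in L2ARC.
zfs set secondarycache=metadata tank/storj

# Verify the property.
zfs get secondarycache tank/storj
```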

Why?

That is exactly how I did it :grinning:

But why? I would get it if there were some RAIDZ stuff involved, but this is a single drive.
There is no pool geometry or padding or anything like that.
I highly doubt that without LZ4 the dataset would be 25TB. That should be the case, right, if the 5TB has a compression ratio of 5x?
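If in doubt, the numbers can be read straight off the dataset: logicalused is the size before compression and used is what actually landed on disk (dataset name is again just a placeholder):

```
# compressratio = logicalused / used,
# so 5TB used at 5.00x would imply roughly 25TB logicalused.
zfs get used,logicalused,compressratio tank/storj
```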

I am not worried about the 15GB on the SSD :slight_smile:
I am worried about how much ARC it uses.
Some people used to recommend not sizing L2ARC larger than 8x the amount of RAM because of this.
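That RAM cost comes from the L2ARC headers that have to stay resident in ARC. On OpenZFS on Linux (path is Linux-specific), the actual overhead can be checked directly:

```
# ARC memory consumed by L2ARC headers, plus current L2ARC size (bytes).
grep -E '^(l2_hdr_size|l2_size|l2_asize)' /proc/spl/kstat/zfs/arcstats
```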

Not sure if I would agree 100% here.
I would put it this way: both methods can hold all metadata.
And that is a benefit for an HDD.

True, but also for L2ARC.
Again, I would not bet on L2ARC being reasonably scalable for STORJ, but it also performed perfectly fine.

There seems to be a small misunderstanding on your part:
L2ARC did not speed up the process because it cached STORJ blobs.
It cached all metadata.

My theory is that because L2ARC is basically just evicted ARC, the L2ARC drive was filled up during the replication task: the metadata in it is either ARC evicted while reading pool1 or ARC evicted by the writes to pool2. That is why the first run did not show worse results than the second run; the L2ARC was already hot.
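A rough way to sanity-check that theory (again assuming OpenZFS on Linux): watch the L2ARC counters while the replication runs. If it holds, l2_size grows during the task, and the later benchmark shows l2_hits climbing instead of l2_misses.

```
# Poll L2ARC fill level and hit/miss counters every 10 seconds.
watch -n 10 "grep -E '^(l2_size|l2_asize|l2_hits|l2_misses)' /proc/spl/kstat/zfs/arcstats"
```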