the read IO i usually get on ZFS storj pools is roughly 1/8th of the writes, and when we consider that writes are a worse workload for HDDs (roughly double the cost), reads end up being more like 1/16th of the full workload when running steady state with plenty of RAM for the ZFS ARC.
i’ve tried many different recordsizes over the years, but eventually settled on 64K due to its improved performance when dealing with fragmentation and its slightly better cache / RAM utilization compared to the larger recordsizes… i ran 256K for about 2 years and 512K for 6 months before that.
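a minimal sketch of what that looks like, assuming a hypothetical pool called tank with a dataset called storagenode:
zfs set recordsize=64K tank/storagenode   # example pool/dataset names
zfs get recordsize tank/storagenode
keep in mind recordsize only applies to newly written blocks, so existing data keeps whatever block size it was written with until it gets rewritten.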
larger recordsizes do improve migration speeds, but i usually use
zfs send | zfs recv
for that, which makes the transfer sequential and helps a lot with the time a migration takes… in which case the recordsize becomes basically a moot point.
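a rough sketch of such a migration, assuming a hypothetical source dataset tank/storagenode and a destination pool called newtank:
zfs snapshot tank/storagenode@migrate   # example dataset/snapshot names
zfs send tank/storagenode@migrate | zfs recv newtank/storagenode
one can then take a second snapshot and do an incremental send (zfs send -i) to catch up on whatever changed while the first transfer ran, which keeps the downtime short.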
one also has to keep in mind that ZFS has dynamic record sizes; the recordsize property is really the maximum block size, while the ashift (the pool’s physical sector size) defines the minimum.
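ashift is set when a vdev is created and can’t be changed afterwards… a sketch for 4K-sector drives, assuming hypothetical pool and disk names:
zpool create -o ashift=12 tank raidz1 /dev/sda /dev/sdb /dev/sdc   # example names
ashift=12 means 2^12 = 4096 byte sectors, which is the safe choice for pretty much all modern HDDs.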
one doesn’t run ZFS for the performance, one runs it for the reliability…
however some workloads run great on ZFS while others need special gear and devices to improve performance… L2ARC helps a little but not a lot, and in some cases it just slows the system down, since it requires RAM to keep track of the L2ARC.
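if one wants to try it anyway, a cache vdev only holds extra copies of data, so adding and removing it is harmless… a sketch, assuming a hypothetical pool called tank and an SSD at /dev/nvme0n1:
zpool add tank cache /dev/nvme0n1   # example pool/device names
zpool remove tank /dev/nvme0n1
the cache device can be removed again at any time if it turns out not to help.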
the special metadata device is amazing for small files and metadata on ZFS, but it will also tear through SSD endurance at a face-melting speed in most cases.
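a sketch of adding one, assuming a hypothetical pool called tank and two SSDs… the special vdev holds pool-critical metadata, so losing it means losing the pool, which is why it should always be mirrored:
zpool add tank special mirror /dev/nvme0n1 /dev/nvme1n1   # example pool/device names
zfs set special_small_blocks=16K tank
special_small_blocks controls which small data blocks also land on the SSDs instead of the HDDs; it has to stay below the recordsize, otherwise everything would end up on the special vdev.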
the best thing you can do for ZFS is have a lot of RAM, then it can usually handle just about any read task you can throw at it.
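to see how well the ARC is doing, one can check the hit rates with the standard OpenZFS tools, assuming they are installed:
arc_summary | less
arcstat 5
on Linux the ARC will by default only grow to roughly half of RAM; that can be raised via the zfs_arc_max module parameter if the box is dedicated to storage.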
on a side note… have you remembered to configure your ZFS pool optimally for high IO, with things such as
zfs set atime=off poolname
zfs set xattr=off poolname
zfs set logbias=throughput poolname
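and to verify they took effect, something like:
zfs get atime,xattr,logbias poolname
keep in mind logbias=throughput mostly matters for synchronous writes, so how much it helps depends on the workload.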
on top of those, as a bare minimum, i would also recommend running at least ZSTD-2 if not ZSTD-3 compression; go with ZSTD-2 if your CPU is questionable.
you may be able to go higher, but for storj i don’t think the CPU cost is worth it, as it only compresses the databases and such… there is still an improvement from that, though.
even though one would think that ZLE compression would make the most sense, more compression means better cache, RAM and storage read / write performance.
before ZSTD one would use LZ4 compression… but after ZSTD, LZ4 is the inferior choice; at worst they are about even, and in some cases ZSTD gets double the compression for the same CPU time.
but do keep in mind that ZSTD-9 will most likely choke your CPU no matter how powerful it is… so keep it to 2 or 3. storj data is encrypted, so it won’t compress; only the db’s and such will benefit, but it does help…
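setting it and checking the result, again assuming a hypothetical pool called tank:
zfs set compression=zstd-2 tank   # example pool name
zfs get compressratio tank
like recordsize, compression only applies to data written after the property is set.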
default ZFS recordsize is 128K, which is the default for a reason.
i’m sure i’m forgetting some stuff… but maybe i’ll remember later
hope this helps.