The SLOG will also be used if there is a failed checksum.
The 5 seconds is only the maximum, though granted my setup will use the full interval every time, because I run sync=always to limit HDD random I/O and pool fragmentation.
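For context, sync behaviour is a per-dataset property; this is roughly how I check and set it (the dataset name here is just a placeholder, not my actual layout):

# show the current sync setting for a dataset
zfs get sync tank/data
# treat every write as synchronous, so it is committed to the slog immediately
zfs set sync=always tank/data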
When I do an iostat -w 3600 I rarely see high write wait times. Granted, there will always be some of those minutes-long waits, if I'm doing a scrub or such, but the vast majority of the write waits are in the ms range. Though I'm sure that figure counts the SLOG as well; I'm not sure how to check it without also seeing the SLOG writes.
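If anyone knows a better way I'd take it, but my understanding is that zpool iostat can break the latency out per vdev, so the log device shows up on its own line, separate from the data vdevs (the pool name here is a placeholder):

# per-vdev I/O statistics with latency columns; the log device gets its own section
zpool iostat -v -l tank 5
# latency histograms per request type, for spotting the occasional long waits
zpool iostat -w tank 60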
Of course my new SLOG/L2ARC PCIe device hasn't been active all that long, so my numbers are still a bit uncertain, but it's a vast improvement over the two older SATA SSDs I had trying to do SLOG duty. One of them would end up with latencies of 125 ms; now it looks to be in the sub-ms range 24/7.
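For anyone wanting to copy that setup: a single NVMe device doing both jobs is normally just partitioned, with each partition added as its own vdev. A rough sketch, with placeholder pool and device names rather than my actual layout:

# a small partition (a handful of GB is plenty) as the separate log device
zpool add tank log /dev/nvme0n1p1
# the remainder as the L2ARC cache device
zpool add tank cache /dev/nvme0n1p2
# confirm both vdevs show up under "logs" and "cache"
zpool status tank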
ZFS is very advanced; I cannot say exactly how it all works, and I doubt either of us truly understands it. Of course that doesn't mean you aren't right, and it sounds reasonable, but I doubt it can really be simplified into three words.
But maybe.
End-to-end checksums would involve the data being checked and verified at each step of the way, even through RAM. Of course the real issue with that is that if corruption is detected while the data only exists in RAM, then there is no way to fix it; it can only be reported.
And some writes can at times wait for minutes before being written from memory to the pool, so with large write queues the chance of a bit error in memory goes up accordingly. That's why I like everything to go directly to my SLOG, always; then if the system discovers a checksum issue at some point, it can essentially go back and find a backup, even if it only covers something like a 5-second window.
It's still essentially a backup of the data.
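For what it's worth, the two knobs that decide how long async data can sit in RAM before it has to hit the pool are the transaction group timeout and the dirty data limit; on Linux with OpenZFS they can be read like this (FreeBSD exposes the same tunables under the vfs.zfs sysctl tree):

# maximum seconds between transaction group commits (defaults to 5)
cat /sys/module/zfs/parameters/zfs_txg_timeout
# maximum amount of dirty (not yet written) data allowed to accumulate in RAM, in bytes
cat /sys/module/zfs/parameters/zfs_dirty_data_max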
Just like the L2ARC will take reads off the pool and thus reduce latency and disk wear; and additionally, if there are ever issues with the pool, in most cases a lot of the data it needs will already be in the L2ARC, even if it takes weeks or months to fully warm it up.
And again, if some data is discovered to be corrupt and even the HDD redundancy cannot fix it, then in some cases a good copy could well be sitting around in the L2ARC.
And of course it doesn't hurt that it makes everything faster.
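If you want to watch the warm-up happen, the L2ARC counters live in the ARC kstats; on Linux that looks something like this (the persistent-L2ARC parameter only exists on OpenZFS 2.0 or newer):

# L2ARC hits, misses and current size from the kernel ARC statistics
grep -E '^l2_(hits|misses|size|asize)' /proc/spl/kstat/zfs/arcstats
# 1 means the L2ARC contents are rebuilt after a reboot instead of starting cold
cat /sys/module/zfs/parameters/l2arc_rebuild_enabled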
I would be very interested to know if RAM is still the primary failure point for ZFS, because that's an easily mitigated issue (ECC memory), and if that is the case I might consider it.
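For anyone curious whether their box already has that mitigation: a quick way to check on Linux whether the installed RAM is ECC and whether corrected errors are actually being counted (assumes dmidecode is installed, which needs root, and that an EDAC driver is loaded):

# reports the error correction type of the memory array; look for "Single-bit ECC" or "Multi-bit ECC"
dmidecode -t memory | grep -i 'error correction'
# corrected (ce) and uncorrected (ue) error counters from the EDAC subsystem, if present
grep . /sys/devices/system/edac/mc/mc*/ce_count /sys/devices/system/edac/mc/mc*/ue_count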