GEs failing for no apparent reason?

I've tinkered a good deal with the pfSense setup… tried passthrough for the NICs to the VMs, and for a while I even ran pfSense on a separate dedicated host. In the end I went back to the setup I'm running now, where it's a VM, because it's nice to be able to migrate the router between servers so I can shut them down when I need to…

I love and hate my virtual pfSense, but it works pretty well…

Of course when I have a power outage or the like, it's a bit annoying if there are issues.
I've been thinking about setting up replication for the pfSense VM, so that if one server breaks the other one can take over without issue…

I've also thought about getting some dedicated pfSense gear… but it's been working pretty well since I got familiar with how to run pfSense as a VM.

Yeah, my special small-block vdevs take care of the metadata and small-file writes, and they help a ton… not sure my setup would run without them.
Storj write IO is kind of insane, and the special vdevs are magic for it, but they also cause a ton of wear on the SSDs in my case.
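
For context, a special vdev like that is set up roughly like this on ZFS; the pool and device names below are just placeholders, not my actual layout:

    # mirror two SSDs as a "special" vdev so metadata lands on flash
    zpool add tank special mirror /dev/nvme0n1 /dev/nvme1n1

    # also send small file blocks (here anything <= 64K) to the special vdev
    zfs set special_small_blocks=64K tank/storagenode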

 ioMemory Adapter Controller, Product Number:00D8431, SN:1504G0637
        ioMemory Adapter Controller, PN:00AE988
        Microcode Versions: App:0.0.15.0
        Powerloss protection: protected
        PCI:07:00.0, Slot Number:53
        Vendor:1aed, Device:3002, Sub vendor:1014, Sub device:4d3
        Firmware v8.9.8, rev 20161119 Public
        1600.00 GBytes device size
        Format: v501, 3125000000 sectors of 512 bytes
        PCIe slot available power: 25.00W
        PCIe negotiated link: 8 lanes at 5.0 Gt/sec each, 4000.00 MBytes/sec total
        Internal temperature: 49.22 degC, max 60.54 degC
        Internal voltage: avg 1.01V, max 1.01V
        Aux voltage: avg 1.79V, max 1.81V
        Reserve space status: Healthy; Reserves: 76.36%, warn at 10.00%
        Active media: 97.00%
        Rated PBW: 5.50 PB, 20.59% remaining
        Lifetime data volumes:
           Physical bytes written: 4,367,487,883,457,392
           Physical bytes read   : 3,667,873,249,147,968
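
Quick sanity check on those wear numbers, assuming the rated PBW counts decimal petabytes:

    # physical bytes written vs. the rated 5.50 PB write endurance
    echo "scale=6; 1 - 4367487883457392 / (5.5 * 10^15)" | bc
    # -> .205912, i.e. roughly 20.6% of rated endurance left,
    #    which matches the 20.59% remaining reported above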

Will have to replace this SSD in the coming weeks… already got an Intel DC P4600 3.2TB drive with something like 15-30 PBW of endurance ready to replace it with…
just been waiting for my GEs to finish.
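
The swap itself should just be a resilver when the time comes; a minimal sketch with placeholder pool/device names:

    # replace the worn ioMemory device with the new P4600 and let ZFS resilver
    zpool replace tank /dev/fioa /dev/nvme2n1
    zpool status tank    # watch resilver progress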

Storj migrations are hell… they take forever even on non-ZFS setups.
though zfs send | zfs recv runs something like 10x or 12x faster than rsync, because it moves the data sequentially.
so when I do migrate nodes, that's what I'll use.
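
For anyone curious, the basic send/recv pattern looks roughly like this; dataset, pool, and host names are just examples:

    # snapshot the node's dataset and stream the whole thing to the new host
    zfs snapshot tank/storagenode@migrate
    zfs send -R tank/storagenode@migrate | ssh newhost zfs recv -F newtank/storagenode

    # then, with the node stopped, send only what changed since the first snapshot
    zfs snapshot tank/storagenode@final
    zfs send -I @migrate tank/storagenode@final | ssh newhost zfs recv newtank/storagenode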

Nope, never had a DQ before, and barely any real data loss in the 4-5 years I've been running storagenodes.

I did use sync=disabled on ZFS for extra speed at times, which caused some light data loss, but audits have never dipped below 99.97% or so.
sync=disabled is pretty bad if something goes wrong, like a power outage, a server stall, or whatever other hell can happen…
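
That's just the per-dataset sync property, for reference; the dataset name is a placeholder:

    # trade sync-write safety for speed on one dataset (risky, as noted above)
    zfs set sync=disabled tank/storagenode

    # back to the default behaviour, honoring sync writes again
    zfs set sync=standard tank/storagenode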

Everything has been running smoothly for a long time now… otherwise I wouldn't have tried to do the GEs.

And it's not that I really care anyway… the held amount is basically nothing…
but GE is a feature that should work correctly.
So now that a node that was actually being logged got hit by the issue, I figured I'd see if StorjLabs had some insight into why it was going wrong.

Because I can't figure it out.