thx i was just wondering about that…
seems to give me the same weird numbers as netdata… but i seem to remember netdata also writing something about using iostat
my netdata really doesn’t like my pcie ssd, just all zeros or 500 TiB/s transfer speeds and 2000-minute latency… you wouldn’t happen to know how to fix that, right?
Is the PCI-e SSD shown as /dev/fioa1, or is that a partition and the device is /dev/fioa? Is it not /dev/nvme0?
Anyway, it depends on the PCI-e SSD I guess; maybe the driver does not update the statistics?
/dev/fioa is a virtual partition the ssd presents to the host system because i formatted the ssd to do that…
i suppose it connects to the system like this: /dev/fct0
no clue what that means tho… that’s how i initially had to attach it to the host system, using the utility software that was included with the drivers. tho the drivers are user-made because they didn’t exist for debian 10, so maybe somebody made a mistake somewhere…
not sure if this ssd supports nvme; it’s from 2016 and the model goes all the way back to 2012, so i doubt it… lacking these stats is also a small price, and i’m sure a solution will rear its head eventually… so i’m not really looking that hard for one anymore… it’s a low-priority thing.
fio-status -a

Found 1 VSL driver package:
   4.3.7 build 1205 Driver: loaded

Found 1 ioMemory device in this system

Adapter: ioMono (driver 4.3.7)
   1600GB Enterprise Value io3 Flash Adapter, Product Number:00D8431, SN:11S00D8431Y050EB58T005
   ioMemory Adapter Controller, PN:00AE988
   Product UUID:8f616656-45e4-5109-a790-6f766c059382
   PCIe Bus voltage: avg 12.17V
   PCIe Bus current: avg 0.66A
   PCIe Bus power: avg 8.05W
   PCIe Power limit threshold: 24.75W
   PCIe slot available power: 25.00W
   PCIe negotiated link: 8 lanes at 5.0 Gt/sec each, 4000.00 MBytes/sec total
   Connected ioMemory modules:
      fct0: 07:00.0, Product Number:00D8431, SN:11S00D8431Y050EB58T005

fct0 Attached
   ioMemory Adapter Controller, Product Number:00D8431, SN:1504G0637
   ioMemory Adapter Controller, PN:00AE988
   Microcode Versions: App:0.0.15.0
   Powerloss protection: protected
   Last Power Monitor Incident: 298962 sec
   PCI:07:00.0, Slot Number:53
   Vendor:1aed, Device:3002, Sub vendor:1014, Sub device:4d3
   Firmware v8.9.8, rev 20161119 Public
   1006.00 GBytes device size
   Format: v501, 1964843750 sectors of 512 bytes
   PCIe slot available power: 25.00W
   PCIe negotiated link: 8 lanes at 5.0 Gt/sec each, 4000.00 MBytes/sec total
   Internal temperature: 40.36 degC, max 45.77 degC
   Internal voltage: avg 1.01V, max 1.01V
   Aux voltage: avg 1.79V, max 1.81V
   Reserve space status: Healthy; Reserves: 100.00%, warn at 10.00%
   Active media: 100.00%
   Rated PBW: 5.50 PB, 99.99% remaining
   Lifetime data volumes:
      Physical bytes written: 600,561,386,784
      Physical bytes read   : 405,484,541,280
   RAM usage:
      Current: 696,892,160 bytes
      Peak   : 696,900,480 bytes
   Contained Virtual Partitions:
      fioa: ID:0, UUID:94d66bf0-2410-43fe-a33b-ef602e135305

fioa State: Online, Type: block device, Device: /dev/fioa
   ID:0, UUID:94d66bf0-2410-43fe-a33b-ef602e135305
   1006.00 GBytes device size
   Format: 1964843750 sectors of 512 bytes
   Sectors In Use: 314759610
   Max Physical Sectors Allowed: 1964843750
   Min Physical Sectors Reserved: 1964843750
I do not know then. It could be that the driver does not provide or update the stats. I have only used the newer drives that support NVMe and the even older ones that appear as SATA.
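One way to test that hypothesis: netdata and iostat both ultimately read the kernel's per-device counters in /proc/diskstats, so if the VSL driver never updates the fioa row there, every tool downstream will show zeros or garbage. A minimal sketch (the sample row and its numbers are made up for illustration; the field positions follow the documented /proc/diskstats layout):

```python
# Sketch: pull a few counters for one block device out of /proc/diskstats.
# Sample the real file twice under load; if the tuple never changes,
# the driver simply isn't updating the kernel's I/O accounting.

def find_dev(lines, dev="fioa"):
    """Return (reads completed, writes completed, ms doing I/O) for dev."""
    for line in lines:
        f = line.split()
        if len(f) > 12 and f[2] == dev:
            # f[3]  = reads completed
            # f[7]  = writes completed
            # f[12] = milliseconds spent doing I/O
            return int(f[3]), int(f[7]), int(f[12])
    return None  # device has no row at all

# Invented row for illustration; real use: find_dev(open("/proc/diskstats"))
sample = "252 0 fioa 1200 0 9600 300 800 0 6400 200 0 450 500"
print(find_dev([sample]))  # -> (1200, 800, 450)
```

If the fioa row is missing entirely, or the counters sit at zero while dd is hammering the device, the problem is in the driver and no monitoring tool can fix it from above.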
worth a shot, i just imagine that it’s so fast that it doesn’t get recorded lol
it’s either zero or near infinity so it must report something… maybe it divides by a near-zero number somewhere… it’s supposed to have some of the lowest latency one can get aside from high-end optane.
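For what it's worth, the divide-by-near-zero theory is plausible: iostat-style average latency ("await") is computed as delta(time spent on I/O) / delta(I/Os completed) between two samples, so a driver that keeps bumping a time counter while the completion counter barely moves produces exactly this zero-or-absurd pattern. A rough sketch with invented numbers:

```python
# Sketch of how an iostat-style "await" calculation falls over when a
# driver's counters are inconsistent. All numbers are made up.

def await_ms(t0, t1):
    """Average ms per I/O over the interval between two counter samples."""
    d_ios = t1["ios"] - t0["ios"]   # I/Os completed during the interval
    d_ms = t1["ms"] - t0["ms"]      # device time accumulated during it
    if d_ios == 0:
        return 0.0  # iostat reports 0 here; a tool that divides anyway gets inf/NaN
    return d_ms / d_ios

# Healthy interval: 1000 I/Os completed in 500 ms of device time
print(await_ms({"ios": 0, "ms": 0}, {"ios": 1000, "ms": 500}))  # -> 0.5

# Broken counters: busy time keeps climbing while completions barely move,
# so one stray I/O appears to have taken two minutes
print(await_ms({"ios": 0, "ms": 120000}, {"ios": 1, "ms": 240000}))  # -> 120000.0
```

So zeros when the completion counter is stuck, and thousand-minute latencies the moment it twitches, which matches what netdata is showing.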