tried to get access to my SMART data through my LSI MegaRAID card, and though it should be possible i kinda gave up; i'll check my SMART data when i get my second HBA so i can get rid of the RAID card.
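for anyone else stuck behind a MegaRAID controller: smartctl can usually talk to drives through it with the `megaraid` device type. a sketch below — the device node and disk numbers are placeholders, you have to probe which N values your controller exposes:

```shell
# query SMART on a drive behind an LSI MegaRAID controller.
# N is the controller's internal disk id, usually 0, 1, 2, ...
# /dev/sda is the virtual device the controller presents.
smartctl -a -d megaraid,0 /dev/sda

# if you don't know the ids, just probe a range and see which answer
for n in 0 1 2 3 4 5 6 7; do
    smartctl -i -d "megaraid,$n" /dev/sda && echo "--- found disk $n ---"
done
```

this only works on Linux with a reasonably recent smartmartools build; on some setups you point it at the controller node (e.g. `/dev/bus/0`) instead of `/dev/sda`.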
i dug a bit into the question of load/unload cycles. first off, there are apparently two different SMART attributes they can be counted in, depending on drive brand and sometimes even model, as in the case of WD (commonly attribute 193, Load_Cycle_Count, though some drives report it under a different vendor-specific attribute).
online i found people talking about their drives surviving 2.3 to 2.8 million load cycles, so if you rack up 200k in 10 months, a drive could still last around 10 years, which seems a perfectly valid lifetime for a drive… most drives rarely live beyond 10 years of power-on time anyway…
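the back-of-the-envelope math for that, assuming the ~2.4 million cycles people report online as a rough practical ceiling (that number is anecdote, not a spec):

```shell
# rough lifetime estimate from load cycle rate
observed_cycles=200000    # the 200k-in-10-months example above
observed_months=10
ceiling=2400000           # assumed practical ceiling from anecdotes

rate=$((observed_cycles / observed_months))   # cycles per month
months_left=$((ceiling / rate))
echo "$((months_left / 12)) years at this rate"   # prints: 10 years at this rate
```

so even an "alarming" parking rate only matters if the ceiling is real and much lower than what people actually observe.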
sure enough, when i check my own old consumer drives, they have around 5k load cycles over 3-4 years of power-on time.
old drives could damage their heads by going in and out of the parking position, but if that problem is fixed in today's drives, then it's kinda irrelevant how often a head has been parked, aside from latency-related stuff… and it seems odd that unparking a head should take much longer while the disk is still spinning… at least compared to spinning the whole drive up from a standstill.
the load cycle issue is also an old one; it's been discussed online for the better part of 10 years. if it were a real issue, i'm sure WD would have made changes to fix it by now…
so sure, maybe it's a thing, but it doesn't seem hugely relevant… more an artifact of how older disks had to be treated to last.
personally i never spin down the drives in my server, since i've been advised against it; spin-up is one of the likely failure points… which also makes sense with my understanding of electrical engineering: an electric motor is stressed and draws the most current during start-up, and can burn out its coils if it fails to spin up, depending of course on a lot of factors.
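on Linux you can tell a drive not to spin down (and to go easy on head parking) with hdparm — a sketch, where `/dev/sdX` is a placeholder and not every drive honors these settings:

```shell
# never enter standby: a standby timeout of 0 disables spindown
hdparm -S 0 /dev/sdX

# APM level 254 = least aggressive power management the drive offers
# without disabling APM outright; levels 128-254 don't permit spindown
hdparm -B 254 /dev/sdX
```

note these settings don't always survive a power cycle, so people typically reapply them from a udev rule or a boot script.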
if you want a good evaluation of drive lifetimes before you buy, i can recommend the annual Backblaze drive stats reports; they're quite good as a yardstick for what's good and what's bad.
i wouldn't mind there being an easy answer to long disk life, but there rarely is an easy answer to anything…
and the more one digs into it, the more the whole question dissolves into other things that might be even more relevant… such as how many heavy trucks or tractors drive close by, creating vibration while the drive is running… personally, one of the things i've found kills the most drives for me is that i often store drives with the circuit board facing up on tables and such… for some reason they tend to die from that at a steady rate…
i should really start turning them board side down… the idea was to protect the board from being scratched…
you want to know one thing that does kill drives? high usage… i've seen it often, and i believe it's partly why so many people need or want to run RAID 6 or better: when you ask for max performance from an old drive, it much more often goes terribly wrong… of course it doesn't help that old drives in new computers will need to run at max output just to keep up… i've been thinking about putting a limit on my drives' max output, because i'd rather have them live longer than output more MB/s… i mean, i'm running small arrays for now… but how often do i really need 500-750MB/s from my 5-drive array? it seems like needless stress on them.
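one way to actually cap throughput on Linux is the cgroup v2 io controller. a sketch below — the cgroup name, the mount point, and the device numbers (`8:0` here) are all assumptions for my setup; check yours with `lsblk -o NAME,MAJ:MIN`, and this needs root:

```shell
# create a cgroup and cap both reads and writes at ~100 MB/s
# on the block device with major:minor numbers 8:0
mkdir -p /sys/fs/cgroup/throttled
echo "8:0 rbps=104857600 wbps=104857600" > /sys/fs/cgroup/throttled/io.max

# move the workload into the group; PID is a placeholder for the
# process you want throttled (e.g. an array scrub or backup job)
echo "$PID" > /sys/fs/cgroup/throttled/cgroup.procs
```

the limit applies per device and only to processes inside the group, so normal system i/o elsewhere is unaffected.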