didn’t know i could write % to define the disk size…
not like that’s documented anywhere… it shows start and end in sectors, the logical sector size in bytes, and the total size in TB
can i write start and end in fractions too, how about [ insert obscure mathematical knowledge ]… and if you set it to start at 1 or 0%… i mean one can start at 0%, but one cannot start on sector 0… unless they count from zero, which is kinda confusing sometimes, because it should really be consistent.
but i suppose the common folk don’t need to be confused by that… though maybe they should… zero is a pretty important invention.
i like how it suggests ext2, really… isn’t that like antique, why would anyone want to use that by choice…
it’s like handing out clay tablets at the library along with a cuneiform stylus… nothing wrong with that…
well i was forced to try and use them… and mainly i just wanted to get onward so i could move on to setting up ZFS… made me feel like i had to dress up like a pirate and play pin the tail on the donkey just to start my tesla.
so in your little example here… which one is the correct one… because space allocation in fdisk looks vastly different from allocation in parted… i mean you start at 1049k in the parted one and at sector 2048 in fdisk.
i suppose fdisk is right because it gives you the default setting to use.
so parted actually allocates outside the default… not like that could go wrong… also, isn’t it wise to leave free space at the end of the drive, to give the partition a little room in case the disk develops bad sectors… i mean sure, one allocates some sectors for the life of the disk… but if it’s filled completely and then develops bad sectors… can it actually move them to good sectors… i think not…
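For what it’s worth, those two numbers are most likely describing the same 1 MiB alignment: fdisk counts 512-byte sectors, while parted prints SI (powers-of-ten) units, so fdisk’s default start sector of 2048 is the same offset parted rounds to 1049kB. A quick sanity check, plain shell arithmetic, nothing tool-specific:

```shell
# fdisk's default first sector is 2048; with 512-byte logical sectors
# that offset in bytes is:
start_bytes=$((2048 * 512))
echo "$start_bytes bytes"   # 1048576 bytes = exactly 1 MiB

# parted reports sizes in SI units (1 kB = 1000 B):
# 1048576 B / 1000 = 1048.576 kB, which parted rounds up to 1049kB.
echo "$((start_bytes / 1000)) kB before rounding"
```

So neither tool is “outside the default” — they are printing the same 1 MiB boundary in different units.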
they all had a WWN, which i think is their UUID, but i’m not totally sure.
great, well this looks promising… goddamn read errors on the drives i literally just put in… hopefully it’s nothing serious… lol i really should get some sort of limiter so my disks cannot run full tilt on transfer speeds… hopefully i can find a way to make zfs do that… like say max 80 MB/s read or written to any one drive at a time… or something like that…
kinda want to see if that makes it better… ofc these drives had never been tested before i just threw them into the server… and they were used… and mismatched… and now they have been running at pretty much full tilt for days.
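As far as I know there is no single “max 80 MB/s per drive” knob in ZFS, but OpenZFS on Linux does expose module tunables that throttle scrub/resilver traffic specifically. A sketch, assuming OpenZFS on Linux circa 0.8 — the parameter names and defaults vary by version, so treat these as examples rather than gospel:

```shell
# Config sketch (requires root; assumes the zfs kernel module is loaded).
# Limit the amount of in-flight scan (scrub/resilver) I/O per vdev:
echo $((8 * 1024 * 1024)) > /sys/module/zfs/parameters/zfs_scan_vdev_limit

# Reduce concurrent scrub/resilver I/Os issued per vdev:
echo 1 > /sys/module/zfs/parameters/zfs_vdev_scrub_max_active

# To make it persistent across reboots, put the same values in
# /etc/modprobe.d/zfs.conf:
#   options zfs zfs_scan_vdev_limit=8388608
#   options zfs zfs_vdev_scrub_max_active=1
```

This only throttles scrub/resilver work, not normal pool I/O, but that is usually what is hammering the disks in this situation anyway.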
also i hotswapped them… might need to stop doing that… it doesn’t really seem like a good idea… either for the other drives or for the drive i pull… ofc i don’t bother setting them offline either, i just pull the drive lol
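For reference, the usually recommended sequence is to tell ZFS about the disk before yanking it; a sketch with placeholder pool and device names (these commands touch a live pool, so they are illustrative only):

```shell
# Tell zfs the disk is going away before physically pulling it:
zpool offline zPool wwn-0x5000cca2556d51f4

# ...physically swap the drive in the bay...

# Point the pool at the replacement and let it resilver:
zpool replace zPool wwn-0x5000cca2556d51f4 /dev/disk/by-id/NEW_DISK_ID

# Watch progress:
zpool status zPool
```

The offline step means the pool degrades cleanly instead of hitting I/O errors against a vanished device mid-write.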
deal with it lol… at least i get my redundancy tested…
do you know what the cksum error means, and why it’s on replacing-3 when neither of the drives under it has it… and why isn’t the drive with the read errors the one with the checksum problems?
looks a bit weird to me, but i was running without redundancy for a brief 10 hours… i knew i shouldn’t have done that… but i decided to pull 2 drives instead of one when i was replacing drives in the system… seemed to go fine until i pulled the wrong one…
  pool: zPool
 state: ONLINE
status: One or more devices is currently being resilvered. The pool will
        continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
  scan: resilver in progress since Mon Apr 27 03:34:49 2020
        3.26T scanned at 188M/s, 2.83T issued at 164M/s, 10.1T total
        584G resilvered, 28.12% done, 0 days 12:52:17 to go
config:

        NAME                                         STATE     READ WRITE CKSUM
        zPool                                        ONLINE       0     0     0
          raidz1-0                                   ONLINE       0     0     0
            wwn-0x5000cca2556e97a8                   ONLINE       0     0     5
            wwn-0x5000cca2556d51f4                   ONLINE       7     0     0  (resilvering)
            sdf                                      ONLINE       0     0     5
            replacing-3                              ONLINE       0     0     5
              sdi                                    ONLINE       0     0     0
              ata-HGST_HUS726060ALA640_AR11021EH2JDXB  ONLINE     0     0     0  (resilvering)
            wwn-0x5000cca232cedb71                   ONLINE       0     0     0
        logs
          sdd4                                       ONLINE       0     0     0
        cache
          sdd5                                       ONLINE       0     0     0
i mean it should have been fine, the pool was online… and i pulled a drive that had already been replaced…
then i pulled a good drive, put it back, resilvered it, and tried to pull another one, again without any luck… i was looking at the disk tray LEDs… seemed like zfs was trying to tell me something… but i guess it wasn’t
anyways so i just threw a new drive into the old bay and took the rest with me…
which meant the pool was degraded… but essentially it should still have had all the data…
then i set it to replace the first drive…
it did get much slower now that it didn’t have all the drives… which was kinda expected, but i wanted to see it for myself, and my storagenode wasn’t too happy about it either… but that wasn’t too bad yet…
then i started the 2nd resilver because i wanted to see if it would go faster if i ran two of them… the logic being that the speed seemed limited by disk writes… and that might have been the case if i’d had a full set of drives… maybe 10 drives or so instead of 5 − 1, while resilvering two replacement drives
basically making it 4 vs 2… and ofc writes will be slower than reads… and on top of that come all the parity calculations, since i removed my redundant drive…
so yeah, maybe i caused this, but i would rather crash it sooner than later… right now it’s just a tiny 7-week-old storagenode that’s on it… so…
damn my storagenode tho… i’ve had a 50-60% drop in ingress since i started the double resilver… and it had already dropped like 20%… i should try to enable asynchronous writes… did that once already during testing, but got some weird results…
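If by asynchronous writes you mean turning off synchronous write semantics on the dataset, that is the `sync` property; a sketch with a placeholder dataset name — note that `sync=disabled` acknowledges writes before they reach stable storage, so a crash or power loss can lose the last few seconds of data:

```shell
# Placeholder dataset name; adjust to the actual dataset the node uses.
zfs set sync=disabled zPool/storagenode

# Verify the setting:
zfs get sync zPool/storagenode

# Revert to the default behaviour later:
zfs set sync=standard zPool/storagenode
```

With `sync=standard` (the default), only explicit sync writes go through the log device; `sync=disabled` treats everything as async, which also means the sdd4 log device stops seeing that traffic.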