That all seems cool, but as you can see in the discrepancy topic, people don't understand, or aren't able to follow, that complexity.
For me as an SNO, to debug I just need to know when the filewalker last finished its work on each satellite.
What's the simplest (thus fastest) way to find out when the filewalker last managed to finish that scan on startup?
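Until something like that lands in the dashboard, the closest stopgap is scanning the node log for the filewalker's completion lines. A minimal sketch of that idea follows; note the log format below is a made-up stand-in, not the actual storagenode output, so the regex has to be pointed at whatever "filewalker finished" message your node version really prints:

```python
import re
from datetime import datetime, timezone

# Hypothetical log excerpt -- the real storagenode messages differ;
# adjust LINE_RE to match whatever your log version actually emits.
SAMPLE_LOG = """\
2024-05-01T03:12:44Z INFO filewalker used-space scan finished satellite=us1
2024-05-03T19:05:10Z INFO filewalker used-space scan finished satellite=eu1
2024-05-02T08:30:01Z INFO filewalker used-space scan finished satellite=us1
"""

# Assumed line shape: "<ISO timestamp> ... scan finished satellite=<name>"
LINE_RE = re.compile(
    r"^(?P<ts>\S+) .*filewalker used-space scan finished satellite=(?P<sat>\S+)"
)

def last_finish_per_satellite(log_text):
    """Return {satellite: datetime of the most recent 'scan finished' line}."""
    latest = {}
    for line in log_text.splitlines():
        m = LINE_RE.match(line)
        if not m:
            continue
        ts = datetime.fromisoformat(m["ts"].replace("Z", "+00:00"))
        sat = m["sat"]
        if sat not in latest or ts > latest[sat]:
            latest[sat] = ts
    return latest

if __name__ == "__main__":
    # Print exactly the "aha" view: how many hours ago each scan finished.
    for sat, ts in sorted(last_finish_per_satellite(SAMPLE_LOG).items()):
        age_h = (datetime.now(timezone.utc) - ts).total_seconds() / 3600
        print(f"{sat}: last full scan finished {age_h:.0f}h ago ({ts:%Y-%m-%d %H:%M}Z)")
```

In practice you would feed it the real log file (or `docker logs storagenode`) instead of the embedded sample.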
If I had that in the dashboard and could just look, like: "aha, on us1. it finished a full scan 78h ago, on eu1. 24h ago, on ap1. 101h ago", then I would know there's nothing to worry about, because:

1. deletes could occur at any moment, e.g. 1-2TB, if a customer wishes,
2. the scan can take days to catch up with reality,
3. and the node can gain 1-2TB of ingress at any moment if a customer wishes, and then see 2) again.
The most important thing is to know how long the scan took on each satellite, and when it last finished. Everything else is secondary. If my filewalker can't finish, or takes 6-9 days for one satellite, then I need to know that first of all, in steps as easy as opening the node dashboard and looking it up. THEN I can think about what to do to fix it, either digging into complex debugging like what Elek posted here, or just skipping all that and making immediate changes to the hardware that I know is 99% of the time the reason (like the HDD's SATA controller being too shoddy, some consumer-grade 1x PCIe chinesium invention with 4-8 HDDs connected simultaneously, in which case it's time for a real controller like an H310 or an LSI 92xx; or the HDD is an old relic and it's simply time to replace it with a new one).
Digging into that kind of complex debugging is the last thing anyone wants. Maybe if you paid me $2.5/TB stored instead of the current $1.5, hah.
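The "how long did the scan take" half of the ask can be sketched the same way, by pairing start and finish lines per satellite. Again, the message text here is an assumption for illustration, not the real storagenode log format:

```python
import re
from datetime import datetime

# Hypothetical start/finish pairs; substitute the messages your node really logs.
SAMPLE_LOG = """\
2024-05-01T00:00:00Z INFO filewalker scan started satellite=us1
2024-05-01T14:30:00Z INFO filewalker scan finished satellite=us1
2024-05-01T00:00:00Z INFO filewalker scan started satellite=eu1
2024-05-04T06:00:00Z INFO filewalker scan finished satellite=eu1
"""

LINE_RE = re.compile(
    r"^(?P<ts>\S+) .*filewalker scan (?P<ev>started|finished) satellite=(?P<sat>\S+)"
)

def scan_durations(log_text):
    """Return {satellite: duration of the most recent started->finished pair}."""
    started, durations = {}, {}
    for line in log_text.splitlines():
        m = LINE_RE.match(line)
        if not m:
            continue
        ts = datetime.fromisoformat(m["ts"].replace("Z", "+00:00"))
        if m["ev"] == "started":
            started[m["sat"]] = ts
        elif m["sat"] in started:
            durations[m["sat"]] = ts - started.pop(m["sat"])
    return durations

if __name__ == "__main__":
    for sat, d in sorted(scan_durations(SAMPLE_LOG).items()):
        print(f"{sat}: scan took {d.total_seconds() / 3600:.1f}h")
```

A scan that shows up here taking 78h or more would be exactly the "time to look at the SATA controller" signal described above.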