Hi, I have this flag activated for all my nodes. I have 30 nodes, but I don’t get any errors or slow filewalkers. I have the filewalker set to high priority. My Windows nodes run in a VM with 12 cores and 48 GB of RAM; 12 nodes are on this machine.
@daki82, in your experience, does the filewalker run faster when this flag is off?
Unlikely. This indexing thing takes fewer IOPS than everyone believes.
As far as I can see, almost any Windows VM is affected, especially if the Windows VM is running not on Hyper-V but on some Linux distro. Bare metal, or a Linux VM with ext4, likely would not have this issue.
The VM is on VMware ESXi. The disk was attached to the VM by creating a RAID 0 on the RAID card, creating a datastore from the entire disk in ESXi, and then passing it completely to the VM.
One disk failure and the data is gone. But I think you know this.
As expected, it’s not a Hyper-V VM, which has relatively good integration.
You may try this method to improve the speed of the disk subsystem:
The final goal is to have all filewalkers successfully finish their work.
Thanks @Alexey, I certainly know the problems of RAID 0. But following the Storj policy, or more importantly the advice to use one disk per node and not RAID 1, 5, or 6, I did it this way. However, I haven’t had a problem in over 3.5 years: the disks complete the filewalkers fully, and I have never seen discrepancies in used space or strange blocks, apart from once on one node, and that was my mistake. The multinode dashboard currently shows 120 TB occupied, while the monthly average shows 116.5 TB, but this month I had scheduled electricity downtime due to maintenance of the supplier’s electrical substations, so my online score is at 98%, which drags the average down. The rest is all OK. Anyway, thanks for the advice and the notification; I learned something else I didn’t know.
This is not a policy; we recommend not using any RAID at all (any level), unless it already exists.
But yes, it should be one node per disk, see How to add an additional drive? - Storj Docs.
Yes, the problem becomes noticeable when your disk is close to full capacity, partly because of how NTFS works: it requires periodic defragmentation, especially after the kind of usage produced by our customers.
Virtualization only makes the situation worse, especially a Windows VM on a Linux host. It’s better to use docker in that case: far fewer wasted resources and IOPS.
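For reference, running a node under docker directly on the Linux host follows the documented pattern sketched below. Every value here (wallet, email, address, storage size, paths) is a placeholder to replace with your own, so treat this as a sketch of the setup rather than a copy-paste command:

```shell
# Sketch of the documented docker-based node setup (all values are placeholders)
docker run -d --restart unless-stopped --stop-timeout 300 \
    -p 28967:28967/tcp -p 28967:28967/udp -p 14002:14002 \
    -e WALLET="0xYourWalletAddress" \
    -e EMAIL="you@example.com" \
    -e ADDRESS="your.external.address:28967" \
    -e STORAGE="2TB" \
    --mount type=bind,source="/mnt/storj/identity",destination=/app/identity \
    --mount type=bind,source="/mnt/storj/data",destination=/app/config \
    --name storagenode storjlabs/storagenode:latest
```

The two bind mounts point at the identity and data directories on the host, so the heavy filewalker I/O goes straight to the ext4 filesystem instead of through a Windows guest and a virtual disk layer.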
version 1.96.6
I did take a quick look, but I thought it was another problem, because I’m missing 2.3 TB, not just some GB (that I wouldn’t care about); this much feels bad.
RPI 2
9TB external HDD
ext4 (like all my nodes, but only this one eats my free space)
This bug makes me a bit angry. It can’t be that hard for this piece of software to check what files it thinks are used (the 6.66 TB) and compare that to what is really on the filesystem… in other words, please fix this annoying bug ASAP.
will do, there is for sure something going on
/dev/sda1 9.1T 9.0T 32G 100% /store
Free space decreased from 33G to 32G.
I have to go sleep soon; maybe I’ll restart the node with storage set to 6 TB so that it won’t try to fill the HDD beyond what fits.
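For what it’s worth, the size of the gap can be sanity-checked with simple arithmetic. The figures below are just the approximate ones from this thread (df shows about 9.0 TB used, the node believes about 6.66 TB):

```shell
# Rough size of the unaccounted space, using approximate figures
# from this thread (not measured values)
used_fs=9000000000000      # bytes used according to df
used_node=6660000000000    # bytes the node believes are used
echo "$(( (used_fs - used_node) / 1000000000 )) GB unaccounted"
# prints: 2340 GB unaccounted
```

A real check would compare `du -sb` of the node’s blobs directory against the dashboard figure, but that walk takes a long time on a nearly full 9 TB disk.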
You likely have “context canceled” errors from the filewalkers, or “FATAL” errors and restarts. These read timeouts from the disk may also affect your audit score, so it’s better to figure out why your disk is so slow to respond.
It could be a bad USB connection or cable; insufficient power (no external power supply for the disk, or one that isn’t strong enough); or a USB controller in the HDD enclosure that is simply bad, malfunctioning, or overheated.
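One way to check for those error signatures is a simple grep over the node log. The pattern below is demonstrated on two fabricated sample lines (not real log output), and the `storagenode` container name in the commented command is an assumption:

```shell
# Count log lines matching the two error signatures; demonstrated here
# on two fabricated sample lines rather than a real log
printf '%s\n' \
  'ERROR pieces failed to lazywalk: context canceled' \
  'FATAL unrecoverable error' \
| grep -cE 'context canceled|FATAL'
# prints: 2

# Against a real node, something like (container name is an assumption):
# docker logs storagenode 2>&1 | grep -E 'context canceled|FATAL'
```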