Why so many failed Audits?

I don't know why so many audits failed…?
System: RancherOS + Docker, 6 cores, 11 GB RAM
ISP: 100/50 Mbit/s

I set "storage2.max-concurrent-requests: 4" because I was disqualified on satellite 118 before (Ticket 1203).
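For reference, I set it in the node's config file like this (the exact config.yaml location is my assumption, based on the data mount in my start command below):

    # /mnt/StorJ/v3/data/config.yaml (assumed path; /app/config inside the container)
    storage2.max-concurrent-requests: 4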

Can someone help me?

========== AUDIT =============
Successful:           7
Recoverable failed:   52
Unrecoverable failed: 116
Success Rate Min:     4.000%
Success Rate Max:     5.691%
========== DOWNLOAD ==========
Successful:           161
Failed:               680
Success Rate:         19.144%
========== UPLOAD ============
Successful:           1808
Rejected:             89578
Failed:               3003
Acceptance Rate:      1.978%
Success Rate:         37.580%
========== REPAIR DOWNLOAD ===
Successful:           0
Failed:               0
Success Rate:         0.000%
========== REPAIR UPLOAD =====
Successful:           0
Failed:               0
Success Rate:         0.000%
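If I read the script's output right, the audit rates above are computed like this (only 7 of 175 audited pieces could be served):

    Success Rate Min = 7 / (7 + 52 + 116) = 7 / 175 = 4.000%
    Success Rate Max = 7 / (7 + 116)      = 7 / 123 = 5.691%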

Start command:
docker run -d --restart unless-stopped -p 28967:28967 \
    -e WALLET="0x120B3d8BC249a2c82D890c228b7cFd5acfC423dA" \
    -e EMAIL="uwe.88@t-online.de" \
    -e ADDRESS="cgwhs.ddnss.de:28967" \
    -e BANDWIDTH="10TB" \
    -e STORAGE="7TB" \
    --mount type=bind,source="/mnt/StorJ/v3/identity/storagenode",destination=/app/identity \
    --mount type=bind,source="/mnt/StorJ/v3/data",destination=/app/config \
    --name storagenode storjlabs/storagenode:alpha

Dashboard:
Storage Node Dashboard ( Node Version: v0.15.3 )

======================

ID           178iQ8d2iAK2KL8F1RC4mQ4GrRUPvN3zYq8VWpZ6VHAMCjD9Wx
Last Contact 1s ago
Uptime       84h58m29s

                   Available         Used      Egress      Ingress
     Bandwidth        9.4 TB     598.7 GB     34.4 GB     564.4 GB (since Jul 1)
          Disk        7.0 TB       9.5 GB

Bootstrap bootstrap.storj.io:8888
Internal  127.0.0.1:7778
External  cgwhs.ddnss.de:28967

Neighborhood Size 150

Log: (I can't post the log directly; it's about 30 MB at log level debug)
https://nextcloud.cgwhs.ddnss.de:443/s/s4HfLdZa3L8Rcdm

Why are you using Rancher? Are you on FreeNAS?

I was…
but now it is:
DL380p > ESXi > RancherOS

The Storj files are on an
ESXi > FreeNAS server (NFS share)

I will switch to Debian 10 in the future.

Please do not use NFS for Storj; NFS has big issues with the SQLite databases.
Switch to iSCSI ASAP.
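You can check whether the databases are already damaged with something like this (the exact .db filename and path are an assumption; check which .db files actually exist under your data mount):

    # run on the docker host; point it at a real .db file in your data directory
    sqlite3 /mnt/StorJ/v3/data/storage/info.db "PRAGMA integrity_check;"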

Could you please tell us more about your FreeNAS configuration? (How is it connected to ESXi? What is the hardware configuration? What is the disk configuration?)

It looks like you have a lot of lost pieces again.
Your setup is not optimal.
Please make sure that /mnt/StorJ/v3/data is statically mounted in /etc/fstab.
Also, as @Odmin mentioned, network storage is not a supported setup, though it can work.
In your case it seems not to work. It would be better to attach local storage, or at least use iSCSI.
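For example, a static NFS mount in /etc/fstab could look like this (using your FreeNAS server's address; the mount options are only illustrative):

    192.168.188.53:/mnt/StorJ  /mnt/StorJ  nfs4  defaults,_netdev  0  0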


Should I stop the node, change everything, and then restart
(that would take some days)?
Or let it run until the change?

Until you change what? You have already lost data.
Unrecoverably lost data: your node can't find a requested piece.
Who deleted it?

The setup is:
1 server with ESXi (12 CPUs x Intel® Xeon® CPU E5-2630 0 @ 2.30GHz)
1 SSD (for the VMs)
5 HDDs (via HBA to FreeNAS)

VMs:
FreeNAS (NFS, SMB, FTP)
RancherOS with Docker (Storj), connected to FreeNAS via NFS

I don't know who deleted it… I didn't delete anything myself.
I think it's because the setup is not really good.
Should I change the setup and run a new node, or change the setup and continue with this one?

Please copy the output of this command:

grep '/mnt/StorJ' /etc/fstab

Can you connect your HDDs to the VM with Docker directly, without FreeNAS in between?

Another right way: you can set up a VM on FreeNAS (Ubuntu Server LTS; Debian has issues with the bootloader on FreeNAS) with Docker, create a ZVOL, and pass it to the VM as a disk. That will also work very stably. (Please do not use Rancher on FreeNAS; a VM is much better.)
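Creating the ZVOL on FreeNAS is roughly this (the pool name is taken from your mounts, and the size is just an example):

    # create a 7 TiB block device that can be attached to the VM as a disk
    zfs create -V 7T RaidZ/storj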

That is a worse option than the current one.
The FreeNAS is a VM too.

Oh… I see. It's a very bad idea to virtualize FreeNAS.

@uwe.88
Please forget my recommendation about a VM on FreeNAS; the better way is to use Raw Device Mapping (RDM) and pass your physical disks to the VM with Storj. Here is a guide.
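In short, a physical RDM is created on the ESXi host with vmkfstools and then attached to the Storj VM (the device ID and datastore path below are placeholders):

    # map the raw disk into a VMDK pointer file, then add that VMDK to the VM
    vmkfstools -z /vmfs/devices/disks/<naa.id> /vmfs/volumes/datastore1/storj/disk0-rdm.vmdk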

[rancher@docker ~]$ grep '/mnt/StorJ' /etc/fstab
grep: /etc/fstab: No such file or directory

The mounts are loaded automatically after startup, and the VM waits 3 minutes after FreeNAS has started.
I think you will see this:
#cloud-config

# /var/lib/rancher/conf/cloud-config.d/nfs.yml

# https://github.com/rancher/os/issues/641
write_files:
  - path: /etc/rc.local
    permissions: "0755"
    content: |
      #!/bin/bash
      [ ! -e /usr/bin/docker ] && ln -s /usr/bin/docker.dist /usr/bin/docker

rancher:
  services:
    nfs:
      image: walkerk1980/rancher-nfs-client
      labels:
        io.rancher.os.after: console, preload-user-images
        io.rancher.os.scope: system
      net: host
      privileged: true
      restart: always
      volumes:
      - /usr/bin/iptables:/sbin/iptables:ro
      - /mnt/RaidZ:/mnt/RaidZ:shared
      - /mnt/Test:/mnt/Test:shared
      - /mnt/StorJ:/mnt/StorJ:shared
      - /mnt/Backup:/mnt/Backup:shared
      environment:
        SERVER: 192.168.188.53
        SHARE: /mnt/RaidZ
        FSTYPE: nfs4
        MOUNTPOINT: /mnt/RaidZ

mounts:
- ["192.168.188.53:/mnt/Test", "/mnt/Test", "nfs4", ""]
- ["192.168.188.53:/mnt/StorJ", "/mnt/StorJ", "nfs4", ""]
- ["192.168.188.53:/mnt/Backup", "/mnt/Backup", "nfs4", ""]

It is possible. I think I'll start from zero and do it that way.

As far as I can see, if your storage is a network-attached drive, especially NFS, it will fail not only uploads and downloads, but audits too.
This is because network-attached drives have high latency and are not fast enough. Moreover, SQLite has problems working on NFS drives.
See
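A quick way to see the latency difference yourself (assuming ioping is available in the VM):

    # compare per-request latency on the NFS mount vs. a local filesystem
    ioping -c 10 /mnt/StorJ
    ioping -c 10 /tmp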

Big thanks for all the support!
Now I'm setting up a new system (VM) with directly attached HDDs.
