GET_AUDIT file does not exist error

I have this error:
2020-01-19T11:45:22.826Z INFO piecestore download started {"Piece ID": "7H4X4EMLRIZSZH6IAETO46VXJCK5OA54LOBQGKVNSWOMGEMMZRQQ", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "GET_AUDIT"}
2020-01-19T11:45:22.826Z INFO piecestore download failed {"Piece ID": "7H4X4EMLRIZSZH6IAETO46VXJCK5OA54LOBQGKVNSWOMGEMMZRQQ", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "GET_AUDIT", "error": "file does not exist", "errorVerbose": "file does not exist\n\tstorj.io/common/rpc/rpcstatus.Error:87\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).doDownload:571\n\tstorj.io/storj/storagenode/piecestore.(*drpcEndpoint).Download:488\n\tstorj.io/common/pb.DRPCPiecestoreDescription.Method.func2:1074\n\tstorj.io/drpc/drpcserver.(*Server).doHandle:175\n\tstorj.io/drpc/drpcserver.(*Server).HandleRPC:153\n\tstorj.io/drpc/drpcserver.(*Server).ServeOne:114\n\tstorj.io/drpc/drpcserver.(*Server).Serve.func2:147\n\tstorj.io/drpc/drpcctx.(*Tracker).track:51"}
but I cannot see that my node is behaving badly, having internet problems, etc. It only happened on some satellites; others are doing fine (I can see successful downloads in the logs). What can I do, as my node is now disq on all nodes except the europe-west one?
P.S. Running the latest version, 0.29.3. The server is in Germany.
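
A quick way to gauge how widespread these failures are is to count them per satellite in the logs. This assumes the default container name storagenode and that logs have not been redirected to a file:

docker logs storagenode 2>&1 \
  | grep GET_AUDIT | grep 'file does not exist' \
  | grep -o '"Satellite ID": "[^"]*"' \
  | sort | uniq -c    # failed-audit count per satellite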

Welcome to the forum @maxifom!

Did you check your dashboard to see if it shows you as disqualified on every satellite?

Are you on Windows or Linux? How is your HDD connected?

@nerdatwork It shows disq on 3/4 nodes. I’m running Ubuntu 18.04; I don’t know how the HDD is connected since it’s a rented server, but I use RAID 0.

Those are satellites, not nodes. Since your node is DQed on 3 satellites it won’t be reinstated, so if it were me I would opt for Graceful Exit, given that your node meets the GE requirements.

You need to find out exactly what happened for your node to lose pieces, which led to the failed audits. It could be many things, from the HDD not being mounted properly to the HDD failing, or other reasons.
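
A few places to start looking, assuming a Linux host. The device name is a placeholder, smartctl needs the smartmontools package, and SMART data may be hidden behind a hardware RAID controller on a rented server:

cat /proc/mdstat                                                       # mdadm RAID array status (relevant for a RAID 0 of /dev/md*)
dmesg -T | grep -iE 'error|ata|md' | tail -n 20                        # recent kernel messages about the disks or the array
smartctl -a /dev/sda | grep -iE 'overall-health|reallocated|pending'   # per-disk SMART health indicators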

Can you confirm if you are using --mount in your docker run command?
You can show your docker run command after editing out personal details like your ETH address and email.

@nerdatwork my docker run command is:
docker run -d --restart unless-stopped -p 28967:28967 -p 127.0.0.1:14002:14002 -e WALLET="0xFFFFFFFFFFFFF" -e EMAIL="email@gmail.com" -e ADDRESS="MY_IP:28967" -e BANDWIDTH="5000TB" -e STORAGE="6.2TB" --mount type=bind,source="/root/storagenode",destination=/app/identity --mount type=bind,source="/mnt/storj",destination=/app/config --name storagenode storjlabs/storagenode:beta --network storage-net --network-alias storagenode

@nerdatwork by GE you mean I just stop the docker container, then create a new identity and start a new node?

GE is short for Graceful Exit. You can click the link above and read all about it. Think it over before you decide to leave the network with this node. You can start a new node, but to avoid making the same mistakes you need to figure out what went wrong with this one.
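
If you do go that route, graceful exit is started from inside the running container. The commands below follow the pattern in the Storj docs for recent versions; the flags may differ on 0.29.x, so treat this as a sketch and check the docs for your exact release:

docker exec -it storagenode /app/storagenode exit-satellite --config-dir /app/config --identity-dir /app/identity
docker exec -it storagenode /app/storagenode exit-status --config-dir /app/config --identity-dir /app/identity    # check progress afterwards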

Can you confirm if /mnt/storj is statically mounted in /etc/fstab?
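
For reference, a statically mounted storage disk would have its own line in /etc/fstab, something like the example below (the UUID and mount point are placeholders, not values from this node), and it can be verified without rebooting:

UUID=123e4567-e89b-12d3-a456-426614174000  /mnt/storj  ext4  defaults  0  2

sudo mount -a          # mounts everything listed in fstab
findmnt /mnt/storj     # prints a line only if the path is really a mount point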

I have RAID 0 (~7TB) statically mounted in / and nothing else.
➜ ~ cat /etc/fstab
proc /proc proc defaults 0 0
/dev/md/0 /boot ext3 defaults 0 0
/dev/md/1 / ext4 defaults 0 0
Here is df -h:
Filesystem Size Used Avail Use% Mounted on
udev 7.8G 0 7.8G 0% /dev
tmpfs 1.6G 1.1M 1.6G 1% /run
/dev/md1 7.3T 6.4G 6.9T 1% /
tmpfs 7.8G 0 7.8G 0% /dev/shm
tmpfs 5.0M 0 5.0M 0% /run/lock
tmpfs 7.8G 0 7.8G 0% /sys/fs/cgroup
/dev/md0 487M 75M 387M 17% /boot
overlay 7.3T 6.4G 6.9T 1% /var/lib/docker/overlay2/3db3c0c8d73f1a211ee0dda5841ba34144c3ffe1e3908c11729e6dc681dd90ae/merged
overlay 7.3T 6.4G 6.9T 1% /var/lib/docker/overlay2/d5d74def5f956b2af11b091babd2da72cb2a4a923331b6cbde2b4a4515fe24c3/merged
overlay 7.3T 6.4G 6.9T 1% /var/lib/docker/overlay2/fa289419bc31f9dda2b4493b181bc6c57c346400538f04c0dca4df6008048ef2/merged
tmpfs 1.6G 0 1.6G 0% /run/user/0

As a side note, using RAID0 is probably a bad idea: if any single disk fails, the whole array and the node’s data are lost. If you have multiple disks, run one storage node on each of them (but wait until one is almost full before starting the next).

I’m pretty sure graceful exit will fail if you are disqualified and/or have audit errors. To do a graceful exit, all data needs to be transferred successfully; there is no room for errors.


Agreed. I meant for GE to be performed on the last satellite to cut his losses. A DQed satellite won’t entertain any request from his node anyway, even if it’s GE.

Maybe I am missing something here, but with the 7TB RAID volume mounted at /, and your docker run command pointing to /mnt/storj, it appears that your node does not have proper access to the storage location.
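
One way to confirm what the node actually sees, assuming the container is still named storagenode: pieces live under storage/blobs inside the config directory by default, so both of the listings below should show satellite subfolders if the data really is there.

ls /mnt/storj/storage/blobs                              # on the host, via the bind-mount source
docker exec storagenode ls /app/config/storage/blobs     # the same directory seen from inside the container
du -sh /mnt/storj                                        # total on-disk size of the storage location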