Node Disqualified on FreeNAS VM

Hi, everybody,

my Storj node was disqualified by some satellites within hours and I'd like to understand why. I'm running Docker on Debian 10 in a VM on my FreeNAS. Could the disqualification be due to the virtualization? The VM has 1 core, 1 GB of ECC RAM and two disks (one for the Debian OS and one for the Storj node). My connection should be within the minimum requirements: 40 Mbps download and 15 Mbps upload.
The VM's local time matches my country's time.
I also tested port 28967 and it is open.
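
For reference, this is how the port can be double-checked from outside the network (YOUR_PUBLIC_IP is just a placeholder for the node's public address), besides the usual online open-port checkers:

nc -zv YOUR_PUBLIC_IP 28967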


To run the container I used this command:
sudo docker run -d --restart unless-stopped --stop-timeout 300 -p 28967:28967 -p 127.0.0.1:14002:14002 -e WALLET="xxxxxxx" -e EMAIL="xxxxx@xxxxxx" -e ADDRESS="xxxxx" -e STORAGE="510GB" --mount type=bind,source="/home/debian/.local/share/storj/identity/storagenode",destination=/app/identity --mount type=bind,source="/mnt/storj",destination=/app/config --name storagenode storjlabs/storagenode:beta

I've attached a screenshot of the dashboard, together with the output of "docker logs --tail 20 storagenode":

2020-06-17T09:42:07.449Z INFO piecedeleter deleted {“Satellite ID”: “12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs”, “Piece ID”: “Z53LOJEQVY6VXM7MZX4VSIBHHVFG4QUDEVS5XMFFBMRBD5C3HV5Q”}
2020-06-17T09:42:09.005Z INFO piecedeleter deleted {“Satellite ID”: “12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs”, “Piece ID”: “QZAR4LGTL4VB42LASSYT7XRBJLLKZSYGSSITAIJMH4YFKUA2GB2A”}
2020-06-17T09:42:10.425Z INFO piecedeleter deleted {“Satellite ID”: “12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs”, “Piece ID”: “HL227O3ZO4XGIV3Z3EZMDQTTNXAGWXXIUGRHTVC4ZLGVRQX2KMDQ”}
2020-06-17T09:42:48.467Z INFO piecedeleter deleted {“Satellite ID”: “12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs”, “Piece ID”: “OBWJOYO5TWJHWQIOZAQB7JSDBDMDRCGU2XQZDWK32CC3PFUT4SNQ”}
2020-06-17T09:43:18.649Z INFO piecedeleter deleted {“Satellite ID”: “12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs”, “Piece ID”: “IZYCPS4U26KBYSUN52BZLHMTVFCIJQ6XEHEPEI7R7GW5PI6SOTTA”}
2020-06-17T09:43:36.603Z INFO piecedeleter deleted {“Satellite ID”: “12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs”, “Piece ID”: “Z6SYCFTAIV6FXP43PSU24HCVAT7Z5TCIYZQ3RJB55NVIDNGGVDOA”}
2020-06-17T09:43:41.555Z INFO piecedeleter deleted {“Satellite ID”: “12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs”, “Piece ID”: “3B77ISOU24WVXWRXL7N34JXM4A6ZZA32SQSD7ETXRHPXD6AJABZA”}
2020-06-17T09:43:46.016Z INFO piecestore upload started {“Piece ID”: “DWUTMWBU4OJB3WVHC5UAUJTTRYIR5CVAIKX4IPR7OMDWU5WERD7Q”, “Satellite ID”: “12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs”, “Action”: “PUT”, “Available Space”: 504114529536}
2020-06-17T09:43:48.822Z INFO piecestore upload started {“Piece ID”: “K5373E4R6V32GRKQE57B335N3XZ3A3GQJNX7AA2MH4BYSEIC5V4A”, “Satellite ID”: “12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs”, “Action”: “PUT”, “Available Space”: 504114529536}
2020-06-17T09:43:49.250Z INFO piecestore uploaded {“Piece ID”: “DWUTMWBU4OJB3WVHC5UAUJTTRYIR5CVAIKX4IPR7OMDWU5WERD7Q”, “Satellite ID”: “12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs”, “Action”: “PUT”}
2020-06-17T09:43:56.300Z INFO piecestore uploaded {“Piece ID”: “K5373E4R6V32GRKQE57B335N3XZ3A3GQJNX7AA2MH4BYSEIC5V4A”, “Satellite ID”: “12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs”, “Action”: “PUT”}
2020-06-17T09:44:43.220Z INFO piecedeleter deleted {“Satellite ID”: “12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs”, “Piece ID”: “2SPOUOIQ77VDOQ5QN2L2VCJ33PYJGVQDZ5BI7FKPK6TFUO3T6WPA”}
2020-06-17T09:44:44.015Z INFO piecedeleter deleted {“Satellite ID”: “12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs”, “Piece ID”: “WABGULKKBYR4ANM55ZAZLH3UWQCHLCOJNX2BZXBSXEECLAJYKNTA”}
2020-06-17T09:44:58.027Z INFO piecedeleter deleted {“Satellite ID”: “12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs”, “Piece ID”: “3L6LNVEKDOYZXSGYBU3DSLLHQOEFFNB5REKXZ4I4M24AV34ZHUUA”}
2020-06-17T09:45:30.655Z INFO piecedeleter deleted {“Satellite ID”: “12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs”, “Piece ID”: “RRMGBXK7HI2O5OKAYQPFL7R2NIMPYK54AJEWIUX3WWEAKRFAN4IA”}
2020-06-17T09:45:33.050Z INFO piecedeleter deleted {“Satellite ID”: “12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs”, “Piece ID”: “SA4RLJ6LN5CHR72Q4QO6OHYDHYNLPOKZDR3EJ2K2N5BLCSXGEX3Q”}
2020-06-17T09:46:14.946Z INFO piecedeleter deleted {“Satellite ID”: “12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs”, “Piece ID”: “XSKTMDJCGGQGXIHAB3GPCUBGK5MVXAR6LKOQWF37K5BEXCDBHFMQ”}
2020-06-17T09:46:39.162Z INFO piecedeleter deleted {“Satellite ID”: “12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs”, “Piece ID”: “4PRCCQLD7Y36LKI6LUDZ5IAFOJIKXBHXXDIZIYL3OFT4RLQ5SH6A”}
2020-06-17T09:47:26.662Z INFO piecedeleter deleted {“Satellite ID”: “12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs”, “Piece ID”: “VSPWO37BYSGXXHXSYEUYBXPCEKQTXS2Z2YDCWFSOQ4D4BLNDV5LA”}
2020-06-17T09:47:39.042Z INFO piecedeleter deleted {“Satellite ID”: “12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs”, “Piece ID”: “5QAW2YZ2HS63VEOYTZWRLPDKNTLDC76LW7LMP3EGPDOOOQ5R746Q”}

Hey @luky99, welcome to the forums. Sorry to hear it’s under such unfortunate circumstances.

The only way you can get disqualified right now is by failing audits, which happens when data is deleted, not accessible to the node, or corrupt. Other errors will not lead to disqualification.

To find out what went wrong, you need to look for log lines that contain both "failed" and "GET_AUDIT" in them:

docker logs storagenode | grep GET_AUDIT | grep failed

Since you’re using docker logs, please don’t remove the container before checking this as that will also remove the logs.

Thank you for the answer, @BrightSilence. There are some errors in the docker log:

2020-06-16T11:02:48.378Z ERROR piecestore download failed {“Piece ID”: “M7G6WXEZGWI5WOKDZXFO3QMUJRPUCFGDIQQ55J7A25CG2T2Z2PFQ”, “Satellite ID”: “121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6”, “Action”: “GET_AUDIT”, “error”: “file does not exist”, “errorVerbose”: “file does not exist\n\tstorj.io/common/rpc/rpcstatus.Wrap:74\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).Download:502\n\tstorj.io/common/pb.DRPCPiecestoreDescription.Method.func2:1004\n\tstorj.io/drpc/drpcmux.(*Mux).HandleRPC:107\n\tstorj.io/common/rpc/rpctracing.(*Handler).HandleRPC:56\n\tstorj.io/drpc/drpcserver.(*Server).handleRPC:111\n\tstorj.io/drpc/drpcserver.(*Server).ServeOne:62\n\tstorj.io/drpc/drpcserver.(*Server).Serve.func2:99\n\tstorj.io/drpc/drpcctx.(*Tracker).track:51”}
2020-06-16T13:29:24.115Z ERROR piecestore download failed {“Piece ID”: “QQJ4H3ROUOLOSRHEMQO5XRGLW4K3GZGT5UMBDEZVC234BGRQEEMA”, “Satellite ID”: “12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S”, “Action”: “GET_AUDIT”, “error”: “file does not exist”, “errorVerbose”: “file does not exist\n\tstorj.io/common/rpc/rpcstatus.Wrap:74\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).Download:502\n\tstorj.io/common/pb.DRPCPiecestoreDescription.Method.func2:1004\n\tstorj.io/drpc/drpcmux.(*Mux).HandleRPC:107\n\tstorj.io/common/rpc/rpctracing.(*Handler).HandleRPC:56\n\tstorj.io/drpc/drpcserver.(*Server).handleRPC:111\n\tstorj.io/drpc/drpcserver.(*Server).ServeOne:62\n\tstorj.io/drpc/drpcserver.(*Server).Serve.func2:99\n\tstorj.io/drpc/drpcctx.(*Tracker).track:51”}
2020-06-17T08:59:53.209Z ERROR piecestore download failed {“Piece ID”: “T7LGQNTUZJNZJBURYEUNLPQV23Z7KOHKC22PMIQ4XNPWV2AQ77ZA”, “Satellite ID”: “121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6”, “Action”: “GET_AUDIT”, “error”: “file does not exist”, “errorVerbose”: “file does not exist\n\tstorj.io/common/rpc/rpcstatus.Wrap:74\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).Download:502\n\tstorj.io/common/pb.DRPCPiecestoreDescription.Method.func2:1004\n\tstorj.io/drpc/drpcmux.(*Mux).HandleRPC:107\n\tstorj.io/common/rpc/rpctracing.(*Handler).HandleRPC:56\n\tstorj.io/drpc/drpcserver.(*Server).handleRPC:111\n\tstorj.io/drpc/drpcserver.(*Server).ServeOne:62\n\tstorj.io/drpc/drpcserver.(*Server).Serve.func2:99\n\tstorj.io/drpc/drpcctx.(*Tracker).track:51”}

It tells me that on three occasions my node failed a download because it couldn't find the file.
I think it's fair to assume there was some kind of virtualization problem.
I'll run a test on my NAS, this time virtualizing Windows.
However, the command you gave me ("docker logs storagenode | grep GET_AUDIT | grep failed") returned all the output without isolating anything.
I had to run "docker logs storagenode 2> log.txt" followed by "cat log.txt | grep GET_AUDIT | grep failed". I had to use "2>" because ">" didn't write anything to the file, even though I was root. That's unusual, isn't it?

That's my bad for writing it from memory. The storagenode writes its log output to stderr, which is why ">" gave you an empty file while "2>" worked; for the grep pipeline you need stderr redirected into stdout as well. It should have been:

docker logs storagenode 2>&1 | grep GET_AUDIT | grep failed

Files not existing is obviously a problem. It looks like your node lost files that should have been there. This is what caused the disqualification.
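
If you want to see how many audits failed for each satellite, something like this should work (assuming the container logs are still available and look like the lines you pasted):

docker logs storagenode 2>&1 | grep GET_AUDIT | grep failed | grep -o '"Satellite ID": "[^"]*"' | sort | uniq -c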

Do yourself a favour and don't use the mount point directly. Create a subfolder and put the node's data in there. It might not be related to your current problem, but it prevents the node from starting when the disk is not mounted.
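
A minimal sketch of that change, assuming the disk is mounted at /mnt/storj and the node data sits at its top level (the exact file names on your node may differ):

docker stop -t 300 storagenode && docker rm storagenode
sudo mkdir /mnt/storj/storagenode
sudo mv /mnt/storj/storage /mnt/storj/config.yaml /mnt/storj/storagenode/

Then re-create the container with source="/mnt/storj/storagenode" in the --mount line for /app/config.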


I think it's related and likely the reason. It's easy to check:

  1. Save the logs
docker logs storagenode > /path/to/storagenode.log 2>&1
  2. Stop and remove the container
  3. Unmount /mnt/storj
  4. Check the mountpoint
sudo ls -l /mnt/storj

If you see files there, this is the answer. You can try to merge only /mnt/storj/storage/blobs with the matching folder on your disk: temporarily mount your disk to a different mountpoint, move the blobs from /mnt/storj/storage/ (now the leftover folder on your system disk) into the storage folder on your disk, then unmount the temporary mountpoint and mount the disk back to its original location with sudo mount -a.
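
For the merge itself, a rough sketch (the device name /dev/sdb1 is only an example, check yours with lsblk; rsync may need to be installed first):

sudo mkdir /mnt/temp
sudo mount /dev/sdb1 /mnt/temp
sudo rsync -a /mnt/storj/storage/blobs/ /mnt/temp/storage/blobs/
sudo umount /mnt/temp
sudo mount -a

Once the copy is verified, you may also want to delete the leftover data under /mnt/storj before the final mount -a, so it no longer takes up space on the system disk.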

Then you can start your node again and decide whether you want to keep running it, if it is not disqualified on all satellites, or start from scratch.


The Storj node has now been running for 5 days. There have been no errors of any kind with Debian 10 in the VM.
The method you suggested worked.
Thanks @donald.m.motsinger and @Alexey
