Node disqualified...WHY?


I'm opening this thread because I wrote to support, got a standard copy/paste answer, and I want to know why I've been disqualified.

My ID is 1gVDzj4WBjZy4iptXD6zPwDebP5GRTydmCwTeTCkHEs9VRRiCE and this computer has been running 24/7 for months, with minimal reboots for updates (it has a UPS to prevent blackouts). I'm on version v0.33.4 and I live in Spain.

Yesterday, while checking my payouts, I found that I'd been disqualified.

On the same internet fibre line I have another test node with a different ID and ports, and it is working fine. I fear that this other node could be disqualified too if we don't find out why the first one was.

This is my audit score

And these are my Uptime and Audit checks on the Stats webpage:
Uptime checks: 95.7%, Audit checks: 99.8%
Uptime checks: 96.6%, Audit checks: 100%
Uptime checks: 96.9%, Audit checks: 99.8%
Uptime checks: 99.1%, Audit checks: 98.8%
Uptime checks: 97.4%, Audit checks: 99.8%

I have an almost dedicated Windows 10 node running with Docker. It has used Docker from the beginning; I never tried to migrate to the Windows GUI, to another computer, or anywhere else you could migrate a node.

This computer has had a UPS for months, and its reboots are limited to Windows Update once a month.

I didn't delete anything. The HDD that holds the Storj data is dedicated and brand new. Summarizing, I feel that I'm a good SNO, but I've been disqualified without a good reason.

Right now, if I run "docker logs -t storagenode --tail 200 -f" I see uploads and downloads, and the node shows ONLINE in my dashboard.

Please, I need to know WHY, I want to RECOVER it.


If you see uploads and downloads then you're not DQ'ed! You might only be DQ'ed on the satellite where your score is very low, but not on the others.


I see this:


And this:

But I see upload and download :frowning:

You’re still getting uploads from the one remaining satellite that you are not yet DQ-ed on.

Your score parameters indicate that your node has lost a lot of data. And the Beta parameter on the one remaining satellite is starting to rise, possibly indicating that data loss is starting to occur there too, followed by DQ soon.

It might be advisable to stop your node and make sure your data drive is still connected and mounted properly.

1 Like

Thank you. My HDD is correctly connected. I've run chkdsk and it's perfect, and SMART is OK. :frowning:

You should have failed audits in the log, what do the log entries specify as the reason?

How can I see that? I've found some scripts and PS commands, but nothing works. I have a Windows 10 Docker node.
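For a Docker node, a minimal way to filter the log for failed audits is to pipe the container log through a text filter. This is a sketch assuming the container is named `storagenode` (as in the commands elsewhere in this thread) and that failed audit lines contain the strings `GET_AUDIT` and `failed`, which was the storagenode log format at the time:

```shell
# Linux/macOS shell:
docker logs storagenode 2>&1 | grep GET_AUDIT | grep failed

# Windows PowerShell (the poster's setup):
docker logs storagenode 2>&1 | Select-String "GET_AUDIT" | Select-String "failed"
```

The `2>&1` matters: storagenode writes its log to stderr, so without the redirect the filter sees nothing.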


My guess is that Docker lost the mount to the drive at some point. Since the Saltlake sat (1wF…) has an okay score, it was probably before that satellite came online, around February 12 I think. Unfortunately, and it does suck, there is nothing to be done. My suggestion would be to start over with the Windows GUI installation this time and avoid Docker altogether. You will need to create a new identity.


I'd recommend switching away from Docker, as it's known to cause issues running a node on Windows. For your next setup I would just use the Windows GUI; it's much more stable than Docker.

Also, there's no recovering from it. Once you get DQ'ed, that's it; you've got to start over.

1 Like

Did you still use the -v mounting options in the docker run command? This could lead to the mount not initiating properly and the node starting with a temporary volume instead. The result of this is missing data and failing audits.
Btw, the latest version of the earnings calculator will name the -UNKNOWN- satellite correctly.

1 Like

I’ve reviewed my mounting…

docker run -d --restart unless-stopped -p 28967:28967 -p 14002:14002 -e WALLET="0xxxxx" -e EMAIL="" -e ADDRESS="" -e BANDWIDTH="200TB" -e STORAGE="4TB" -v "C:\storj\storagenode\":/app/identity -v "D:\storj\StorjData\":/app/config --name storagenode storjlabs/storagenode:alpha

and yes, there is a "-v". But it has been there for a long time.

PS: I keep pressing because I need to know what happened so I can prevent a new problem. It's frustrating to delete everything and start again after 8 months.

Then I'd say there's your issue. -v hasn't been used in a very long time. You have to use --mount. But someone from the team can explain it better.
This is my command:
sudo docker run -d --restart always -p 28967:28967 \
  -e WALLET="xxxxxxxx" \
  -e EMAIL="" \
  -e STORAGE="1.8TB" \
  --mount type=bind,source=/media/pi/HDD/Dados,destination=/app/identity \
  --mount type=bind,source=/media/pi/HDD/Storage,destination=/app/config \
  --name storagenode storjlabs/storagenode:beta

1 Like

Previous versions of the docker run command that used the -v rather than the --mount option will not work properly. Copy the updated command below.
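For reference, this is a sketch of how the poster's earlier command could look converted from -v to --mount on Windows. The container name, paths, and values are taken from the command posted above (the wallet and addresses are redacted there and stay redacted here); adjust everything to your own setup:

```shell
docker run -d --restart unless-stopped -p 28967:28967 -p 14002:14002 `
  -e WALLET="0xxxxx" -e EMAIL="" -e ADDRESS="" `
  -e BANDWIDTH="200TB" -e STORAGE="4TB" `
  --mount type=bind,source="C:\storj\storagenode",destination=/app/identity `
  --mount type=bind,source="D:\storj\StorjData",destination=/app/config `
  --name storagenode storjlabs/storagenode:beta
```

(The backticks are PowerShell line continuations; on Linux/macOS use backslashes instead.) The key difference is that --mount fails loudly if the source path is missing, instead of silently creating an empty volume the way -v does.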

1 Like

You should not use -v: with this option, if your data folder is not mounted for any reason, Docker will create an empty volume for you and store customers' data inside the container instead of on the disk.
Your node then starts failing audits for the previous data, which is not available in that container.
When you remove such a container (for an upgrade, for example), it is removed along with the customers' data in it, and your node will start failing audits for the data that was removed.
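As a quick sanity check (a sketch; assumes the container is named `storagenode` as elsewhere in this thread), you can verify that the running container is using bind mounts rather than an anonymous volume:

```shell
# List the container's mounts: each entry should have Type "bind".
# A Type of "volume" means Docker fell back to an anonymous volume,
# i.e. your data is inside the container, not on the disk.
docker inspect storagenode \
  --format '{{range .Mounts}}{{.Type}} {{.Source}} -> {{.Destination}}{{println}}{{end}}'
```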

This is why we replaced the documentation 8 months ago:

In addition, you should use the beta tag instead of alpha; it's in the updated docker run command too.

This warning was sent to all SNOs via email back in August 2019 and then remained in every update email until at least December 2019:

Important note about mounting your node hard drive.

In the early stages of our alpha, we had incorrect instructions for how to mount your node in the Explorer Setup Guide on Github. If you're an affected user, you'll see -v instead of --mount in your docker run storagenode command pointing to the location of your identity and storage folders. Please update your docker run command using these instructions.


Thank you guys.

I'm sure I changed this "-v" to "--mount" when I received the warning mail, because I remember doing it. But at some later step (when I changed my wallet, for example) I must have copied an old command to re-register the container, putting "-v" back in.


It happens, but I think you found the most likely culprit for the loss of data. The upside is that there is now a Windows GUI install which gets rid of all the issues that come with Docker on Windows. It is a far better option and should be much more stable for you. So I hope you'll give that a try!