Node disqualified?

Hi @john.a, thank you for letting us know that the phrase was difficult to parse, and thank you also for providing an alternative suggestion. I think your English is great, btw :slight_smile:

My node is also disqualified :frowning: and it has been online since the beginning of V3 nodes.
I stopped the node to copy the content to a larger hard drive and then I couldn't get it started again, not even with the "old" hard disk and settings. Then I uninstalled docker and installed the GUI variant and successfully started the node with the larger hard drive.
Now I am disqualified, although I wanted to support Storj with time, patience and a bigger hard drive. This is more than just annoying :smirk:

No fun at all when an old node gets DQed.
But we had a good run with the surge though. :+1:

Are you setting up a new node?

When you migrated from docker to the GUI, you must specify the correct storage folder as your data location.
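To make that step less error-prone, a quick sanity check before starting the migrated node can confirm that the configured data location actually contains existing node data rather than an empty directory. This is only a sketch, assuming the usual layout where the node keeps its pieces in a blobs subfolder; the path below is a placeholder, not anything the installer generates:

```python
import os
import sys

# Placeholder: replace with the data location you entered in the GUI installer.
DATA_DIR = r"D:\storagenode\storage"

def looks_like_node_data(path: str) -> bool:
    """Return True if the directory appears to hold existing storagenode data."""
    blobs = os.path.join(path, "blobs")
    # An existing node's storage folder should contain a non-empty 'blobs' directory.
    return os.path.isdir(blobs) and any(os.scandir(blobs))

if not looks_like_node_data(DATA_DIR):
    sys.exit(f"'{DATA_DIR}' does not look like an existing node storage folder - "
             "starting the node here would begin with empty data and risk disqualification.")

print(f"'{DATA_DIR}' contains existing node data, safe to point the node at it.")
```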

Yes, as a reward, I can start again from scratch

The GUI installation should be designed so that such paths to disqualification are never opened in the first place.
After the node started successfully and the dashboard status was online, I did not expect disqualification. To me it looked as if everything was fine.

Thanks, it looks like somebody already suggested email alerts for storage node errors. You can view and vote for this suggestion here: https://ideas.storj.io/ideas/V3-I-157. It could use some help, because it only had two votes when I first saw it.

They are working on a more informative dashboard; let's hope that errors like yours will show up there when they occur…

Ok, so I just moved my node from one file server to another and followed the instructions using rsync the way they suggested. It took almost a month to move all of the data. I got the node up and running with no apparent issues. Then I wake up the next day to see my node has been disqualified, and now I find out that it is permanent and there is no way to fix it? Also, any funds that had been held are forfeit and my only option is just to start over? I like the idea of this system, but this implementation is predatory, and I think I will take my terabytes of storage space back. The audacity to expect absolutely nothing to go wrong on someone's system for 15 months for more than a few hours is ridiculous. I can't say that I can recommend this project anymore. The payments are so low they barely cover electrical costs; I stuck with it as an investment and out of interest in the project. But now I have lost all my time and effort. This is not worth it.

But there were audit failures, which is why your node was disqualified. Are you 100% sure you copied all the data and gave the correct path when your node was back up?

Yes, I am 100% sure. I spent a month making sure everything copied properly using rsync, as per the Storj instructions. I haven't even deleted the old data and node yet, just in case, but because the damage is already done there is no point. I am no novice at this kind of work: I am a Linux admin and software developer, I am running on enterprise-grade server hardware, and my data is stored in a ZFS raid. Regardless of whether I did everything right or not, it is still a predatory practice to put a hold on so much of a person's earnings (which I do understand the reason for) while making it so easy for something to go wrong and be disqualified. Now I am kind of wishing I had just saved everything in a virtual hard disk. But again, it's too late now and I can't condone this type of business practice. I have similar issues with AWS and the way they treat their customers.

You can file a support ticket with the help desk and give your node ID to find out what went wrong with the audits.

It really does not matter when there is no way to fix it. I already figured out what went wrong. It's the fact that there were no signs that there was an issue until it's too late, and that's the problem here. The fact that this happened so quickly and there is no way to fix it is a flaw in this system.

Could you please share what went wrong, so other SNOs don't end up with the same issue?

The files were being accessed over an NFS share and there were communication issues at some point last night, so Storj thought it was looking at an empty folder. But this was only for a few hours, while I was asleep. I can't even start up my old node, which I have not deleted yet in case something like this happened. I didn't expect that one hiccup would mean you lose everything and have to start over again.

Another suggestion I would make: don't save the files directly to a hard drive, put them in a mounted virtual disk. My file copy took over a month to move 10 TB over a 10 Gb network, because all the files are individual files under 2.5 MB. If you had them in a virtual disk, you could just move that one virtual disk file in a fraction of the time. But I am too jaded with this to test that now.

Using NFS is an unsupported and dangerous setup: Disqualified for unknown reason - #19 by Alexey
This has been communicated multiple times.
So you can't really blame Storj for that…
You can read about a lot of DQed nodes using NFS: Topics tagged nfs

About the "empty folder problem" on connection loss:
There are several threads about using a subfolder, so that the node fails if the mountpoint vanishes. This is indeed an unpleasant problem, and as of now there is no solution built into the software, but it has been mentioned multiple times and we hope a safety check will be built in some day. Until then the recommendation is to use a subfolder.
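To illustrate why the subfolder helps: if the data path is the mountpoint itself and the mount drops, the path still exists as an empty local directory and the node keeps running against nothing; if the data path is a subfolder of the mountpoint, the path simply disappears and startup fails loudly instead of audits failing silently. Here is a minimal pre-start sketch of that idea; the paths are placeholders and this check is not part of storagenode itself:

```python
import os
import sys

# Hypothetical layout: the external/remote disk is mounted at MOUNTPOINT,
# and the node's data location points at a subfolder inside it.
MOUNTPOINT = "/mnt/storagenode"
DATA_DIR = os.path.join(MOUNTPOINT, "data")

# If the mount is gone, the parent is just an empty local directory...
if not os.path.ismount(MOUNTPOINT):
    sys.exit(f"{MOUNTPOINT} is not mounted - refusing to start the node.")

# ...and the subfolder will not exist at all, so this check fails
# instead of letting the node run against an empty folder and fail audits.
if not os.path.isdir(DATA_DIR):
    sys.exit(f"{DATA_DIR} is missing - the disk is probably not mounted correctly.")

print("Storage looks mounted and populated, ok to start the node.")
```

The same protection works without any script: simply pointing the data location at a subfolder inside the mountpoint already turns a vanished mount into a hard startup failure rather than an empty-looking storage folder.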

I realize that now, and shame on me for not scouring every single thread. Again, I am calling out how easy it was for this to happen and not be fixable. And this is the first I have heard of a subfolder? I have been running nodes since v1 and this project just seems to keep getting worse for node operators. And I don't blame Storj for my NFS experiment not working. I do however blame them for developing a system with so little fault tolerance that I can't even start my old node back up and try something else. This is not some Nintendo game where it doesn't really matter if you have to start over; there has been significant investment in both time and money that I am just expected to eat. I have lost faith in this project, which I was singing the praises of to a friend just the other day.

Ok, so what is your suggestion? Your node has shown that it has lost data. What should the satellite do? Ignore that and just keep dealing with your node?

I get that from your point of view it happened fast and you want a way back. But from the satellite's point of view your node has proven to be unreliable and can't be trusted to keep customer data safe.

I bought a 4-bay eSATA enclosure the other day, and you know what I did before I bought it? I checked the reviews, carefully checked if it would work with my eSATA card, checked if my eSATA card supports port multipliers, checked the maximum supported HDD size and read a lot of reviews.
Why? Because I wanted my system to work and not do some experiment that might fail and result in frustration and more time and money wasted. Especially because I knew that a node can quickly get DQed with unstable setups, or that my HDDs could get corrupted (like some did with USB3).

So the sensible thing to do before running experimental setups would be to simply put the word "nfs" into the search bar of the forum to see if other people have tried this setup before. Then you would have realized (within a few minutes) that your setup would most likely fail.

I understand your frustration and agree that the software isn't perfect and needs to be improved, but I can't really help you. You invested time and money into the project but didn't take even a few minutes to read the forum about your setup… although you spent a month copying files.

I'm sorry to hear that. It's hard to keep everyone happy, especially the ones using uncommon setups that are not supported.
