Node disqualification

Hello Alexey and other admins!

Alexey, for some reason one of my oldest nodes (about 1 year old) has been disqualified without any warning! I reinstalled Windows, connected the node back yesterday, and now I see this:


20 lines of logs:
C:\Users\Alexander>docker logs --tail 20 storagenode
2023-05-15T08:26:13.639Z INFO piecestore upload started {"Process": "storagenode", "Piece ID": "JXJPILBC4FEK7YXMNO4JQJRLOTHMWGSDMT4X6PV4TKONRMGWJ4LQ", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "PUT", "Available Space": 860473942144, "Remote Address": "172.17.0.1:40604"}
2023-05-15T08:26:13.979Z INFO piecestore uploaded {"Process": "storagenode", "Piece ID": "JXJPILBC4FEK7YXMNO4JQJRLOTHMWGSDMT4X6PV4TKONRMGWJ4LQ", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "PUT", "Size": 102400, "Remote Address": "172.17.0.1:40604"}
2023-05-15T08:26:14.709Z INFO piecestore upload started {"Process": "storagenode", "Piece ID": "OC2AIDYBF3GWFRZ2NGMFGPVDQ2FWR2AOATAMYKGX5P7T27PURN2A", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "PUT", "Available Space": 860473839232, "Remote Address": "172.17.0.1:41142"}
2023-05-15T08:26:14.769Z INFO piecestore uploaded {"Process": "storagenode", "Piece ID": "OC2AIDYBF3GWFRZ2NGMFGPVDQ2FWR2AOATAMYKGX5P7T27PURN2A", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "PUT", "Size": 6912, "Remote Address": "172.17.0.1:41142"}
2023-05-15T08:26:15.494Z INFO piecestore download started {"Process": "storagenode", "Piece ID": "DKGK2JZO7BPPPOBAZ4TSAJACNRJMIBYBR7NTIBUXYFFLBZPORAFA", "Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Action": "GET", "Offset": 212992, "Size": 2560, "Remote Address": "172.17.0.1:41150"}
2023-05-15T08:26:15.624Z INFO piecestore downloaded {"Process": "storagenode", "Piece ID": "DKGK2JZO7BPPPOBAZ4TSAJACNRJMIBYBR7NTIBUXYFFLBZPORAFA", "Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Action": "GET", "Offset": 212992, "Size": 2560, "Remote Address": "172.17.0.1:41150"}
2023-05-15T08:26:15.924Z INFO piecestore download started {"Process": "storagenode", "Piece ID": "5GACRRNXAAKHJXWQZP4X7STQZ7GYHDSZ7CR5PBHXWBVVPTKRMUFA", "Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Action": "GET", "Offset": 204288, "Size": 2560, "Remote Address": "172.17.0.1:41152"}
2023-05-15T08:26:15.995Z INFO piecestore download canceled {"Process": "storagenode", "Piece ID": "5GACRRNXAAKHJXWQZP4X7STQZ7GYHDSZ7CR5PBHXWBVVPTKRMUFA", "Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Action": "GET", "Offset": 204288, "Size": 0, "Remote Address": "172.17.0.1:41152"}
2023-05-15T08:26:17.412Z INFO piecestore upload started {"Process": "storagenode", "Piece ID": "JGGFNHOBUBT24GWDTWAE56S4E523QJ7THGTZCAJPW5NKCL7N57WA", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "PUT", "Available Space": 860473831808, "Remote Address": "172.17.0.1:40604"}
2023-05-15T08:26:17.927Z INFO piecestore uploaded {"Process": "storagenode", "Piece ID": "JGGFNHOBUBT24GWDTWAE56S4E523QJ7THGTZCAJPW5NKCL7N57WA", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "PUT", "Size": 86528, "Remote Address": "172.17.0.1:40604"}
2023-05-15T08:26:20.077Z INFO piecestore upload started {"Process": "storagenode", "Piece ID": "Q2N3SH4IL42UBNRJ5B76JWH3ANXHKNIUKS4LNSG4EOPNCMVDJNGA", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "PUT", "Available Space": 860473744768, "Remote Address": "172.17.0.1:41142"}
2023-05-15T08:26:20.140Z INFO piecestore uploaded {"Process": "storagenode", "Piece ID": "Q2N3SH4IL42UBNRJ5B76JWH3ANXHKNIUKS4LNSG4EOPNCMVDJNGA", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "PUT", "Size": 4864, "Remote Address": "172.17.0.1:41142"}
2023-05-15T08:26:21.893Z INFO piecestore upload started {"Process": "storagenode", "Piece ID": "G4GWM427QAVKD636VI6HSKDO3OBINSPSOZWWHSGBIIWFXOP6DQIQ", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "PUT", "Available Space": 860473739392, "Remote Address": "172.17.0.1:41154"}
2023-05-15T08:26:22.021Z INFO piecestore uploaded {"Process": "storagenode", "Piece ID": "G4GWM427QAVKD636VI6HSKDO3OBINSPSOZWWHSGBIIWFXOP6DQIQ", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "PUT", "Size": 181504, "Remote Address": "172.17.0.1:41154"}
2023-05-15T08:26:24.253Z INFO piecestore download started {"Process": "storagenode", "Piece ID": "UUVD2HXPGY3LP7RSMOQ3BQ3M64543D2AM2WIABI25SNTDT5GEXUQ", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "GET", "Offset": 0, "Size": 8960, "Remote Address": "172.17.0.1:41156"}
2023-05-15T08:26:24.402Z INFO piecestore downloaded {"Process": "storagenode", "Piece ID": "UUVD2HXPGY3LP7RSMOQ3BQ3M64543D2AM2WIABI25SNTDT5GEXUQ", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "GET", "Offset": 0, "Size": 8960, "Remote Address": "172.17.0.1:41156"}
2023-05-15T08:26:24.735Z INFO piecestore upload started {"Process": "storagenode", "Piece ID": "PYXP23E3TRM73I66GCYNSGZWIFMIEUXFMPI5I76655LRLNIBEY2Q", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "PUT", "Available Space": 860473557376, "Remote Address": "172.17.0.1:41142"}
2023-05-15T08:26:24.797Z INFO piecestore uploaded {"Process": "storagenode", "Piece ID": "PYXP23E3TRM73I66GCYNSGZWIFMIEUXFMPI5I76655LRLNIBEY2Q", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "PUT", "Size": 4352, "Remote Address": "172.17.0.1:41142"}
2023-05-15T08:26:25.697Z INFO piecestore upload started {"Process": "storagenode", "Piece ID": "G5KRAYBKLNCFAO3TLTUIV42X5YQVMUPR6VS2GMTXPWSQPBJOQOVA", "Satellite ID": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6", "Action": "PUT_REPAIR", "Available Space": 860473552512, "Remote Address": "172.17.0.1:41158"}
2023-05-15T08:26:25.836Z INFO piecestore uploaded {"Process": "storagenode", "Piece ID": "G5KRAYBKLNCFAO3TLTUIV42X5YQVMUPR6VS2GMTXPWSQPBJOQOVA", "Satellite ID": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6", "Action": "PUT_REPAIR", "Size": 104448, "Remote Address": "172.17.0.1:41158"}

The node was operated under the Windows GUI. Another node on the same machine, running under Docker, works fine.

It is very sad. Could you please help me to fix this?

Regards,
Alexander

That’s not how the universe works.

You have poor Online and Audit scores. The last 20 log lines are (almost) meaningless if they contain only regular node activity and no warnings or errors. You need to provide more, or look specifically for warnings, errors and fatals.
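For example, something like this will show only the problem entries (a rough sketch; the container name is taken from your command, and the second path assumes the default Windows GUI install location):

```
# Docker node: only warnings/errors/fatals from the last day
docker logs --since 24h storagenode 2>&1 | Select-String "WARN|ERROR|FATAL"

# Windows GUI node: same filter on the service log file
Get-Content "$env:ProgramFiles/Storj/Storage Node/storagenode.log" -Tail 5000 | Select-String "WARN|ERROR|FATAL"
```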

Also:

So the failing node is the Windows GUI one, but you provided logs from the Docker one?

Also:

So was it disqualified first, or did you reinstall first? After the reinstall, was it still running as a Windows service (the Windows GUI install)?

When providing logs, please embed them within three backticks like this:
```
logs go here
```
so they are actually human-readable.


In addition to what @mars_9t has already said, it would be very helpful if you told us exactly what you did when you “reinstalled” Windows and “connected back” the node, because it sounds like these actions caused the problems that led to the disqualification.


Yep, you are right. There should be a reason, but it is unclear to me.
Proper logs:

```
PS C:\Users\Alexander> Get-Content "$env:ProgramFiles/Storj/Storage Node/storagenode.log" -Tail 20 -Wait
2023-05-15T10:48:29.242+0300    INFO    piecestore      download started        {"Piece ID": "YZPWLTH47WWKL6PVC7V75NGKZK5P3LVQQFZDIZADG24EEAV6VBAQ", "Satellite ID": "12tRQrMTWUWwzwGh18i7Fqs67kmdhH9t6aToeiwbo5mfS2rUmo", "Action": "GET_AUDIT", "Offset": 102912, "Size": 256, "Remote Address": "34.23.202.176:34454"}
2023-05-15T10:48:29.244+0300    ERROR   piecestore      download failed {"Piece ID": "YZPWLTH47WWKL6PVC7V75NGKZK5P3LVQQFZDIZADG24EEAV6VBAQ", "Satellite ID": "12tRQrMTWUWwzwGh18i7Fqs67kmdhH9t6aToeiwbo5mfS2rUmo", "Action": "GET_AUDIT", "Offset": 102912, "Size": 0, "Remote Address": "34.23.202.176:34454", "error": "file does not exist", "errorVerbose": "file does not exist\n\tstorj.io/common/rpc/rpcstatus.Wrap:75\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).Download:650\n\tstorj.io/common/pb.DRPCPiecestoreDescription.Method.func2:251\n\tstorj.io/drpc/drpcmux.(*Mux).HandleRPC:33\n\tstorj.io/common/rpc/rpctracing.(*Handler).HandleRPC:61\n\tstorj.io/common/experiment.(*Handler).HandleRPC:42\n\tstorj.io/drpc/drpcserver.(*Server).handleRPC:124\n\tstorj.io/drpc/drpcserver.(*Server).ServeOne:66\n\tstorj.io/drpc/drpcserver.(*Server).Serve.func2:114\n\tstorj.io/drpc/drpcctx.(*Tracker).track:35"}
2023-05-15T11:24:11.813+0300    INFO    bandwidth       Performing bandwidth usage rollups
2023-05-15T11:24:25.604+0300    INFO    orders.12tRQrMTWUWwzwGh18i7Fqs67kmdhH9t6aToeiwbo5mfS2rUmo       sending {"count": 1}
2023-05-15T11:24:26.048+0300    INFO    orders.12tRQrMTWUWwzwGh18i7Fqs67kmdhH9t6aToeiwbo5mfS2rUmo       finished
2023-05-15T11:30:34.862+0300    INFO    piecestore      download started        {"Piece ID": "EFXUBWHFAGL2X4VGXFBYFAOJCCPD5KQAD47VZDPNR5APP3LYNTFQ", "Satellite ID": "12tRQrMTWUWwzwGh18i7Fqs67kmdhH9t6aToeiwbo5mfS2rUmo", "Action": "GET_AUDIT", "Offset": 126976, "Size": 256, "Remote Address": "34.23.202.176:48466"}
2023-05-15T11:30:34.863+0300    ERROR   piecestore      download failed {"Piece ID": "EFXUBWHFAGL2X4VGXFBYFAOJCCPD5KQAD47VZDPNR5APP3LYNTFQ", "Satellite ID": "12tRQrMTWUWwzwGh18i7Fqs67kmdhH9t6aToeiwbo5mfS2rUmo", "Action": "GET_AUDIT", "Offset": 126976, "Size": 0, "Remote Address": "34.23.202.176:48466", "error": "file does not exist", "errorVerbose": "file does not exist\n\tstorj.io/common/rpc/rpcstatus.Wrap:75\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).Download:650\n\tstorj.io/common/pb.DRPCPiecestoreDescription.Method.func2:251\n\tstorj.io/drpc/drpcmux.(*Mux).HandleRPC:33\n\tstorj.io/common/rpc/rpctracing.(*Handler).HandleRPC:61\n\tstorj.io/common/experiment.(*Handler).HandleRPC:42\n\tstorj.io/drpc/drpcserver.(*Server).handleRPC:124\n\tstorj.io/drpc/drpcserver.(*Server).ServeOne:66\n\tstorj.io/drpc/drpcserver.(*Server).Serve.func2:114\n\tstorj.io/drpc/drpcctx.(*Tracker).track:35"}
2023-05-15T11:53:08.568+0300    INFO    piecestore      download started        {"Piece ID": "K7HIAWUIQMCH7UTRJFR4V5EPQO4NIMNBN5SEUWJOBBZGI23ZKGAQ", "Satellite ID": "12tRQrMTWUWwzwGh18i7Fqs67kmdhH9t6aToeiwbo5mfS2rUmo", "Action": "GET_AUDIT", "Offset": 392960, "Size": 256, "Remote Address": "34.23.202.176:52114"}
2023-05-15T11:53:08.568+0300    ERROR   piecestore      download failed {"Piece ID": "K7HIAWUIQMCH7UTRJFR4V5EPQO4NIMNBN5SEUWJOBBZGI23ZKGAQ", "Satellite ID": "12tRQrMTWUWwzwGh18i7Fqs67kmdhH9t6aToeiwbo5mfS2rUmo", "Action": "GET_AUDIT", "Offset": 392960, "Size": 0, "Remote Address": "34.23.202.176:52114", "error": "file does not exist", "errorVerbose": "file does not exist\n\tstorj.io/common/rpc/rpcstatus.Wrap:75\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).Download:650\n\tstorj.io/common/pb.DRPCPiecestoreDescription.Method.func2:251\n\tstorj.io/drpc/drpcmux.(*Mux).HandleRPC:33\n\tstorj.io/common/rpc/rpctracing.(*Handler).HandleRPC:61\n\tstorj.io/common/experiment.(*Handler).HandleRPC:42\n\tstorj.io/drpc/drpcserver.(*Server).handleRPC:124\n\tstorj.io/drpc/drpcserver.(*Server).ServeOne:66\n\tstorj.io/drpc/drpcserver.(*Server).Serve.func2:114\n\tstorj.io/drpc/drpcctx.(*Tracker).track:35"}
2023-05-15T12:24:11.811+0300    INFO    bandwidth       Performing bandwidth usage rollups
2023-05-15T13:24:11.805+0300    INFO    bandwidth       Performing bandwidth usage rollups
2023-05-15T14:58:45.437+0300    ERROR   contact:service ping satellite failed   {"Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "attempts": 1, "error": "ping satellite: rpc: tcp connector failed: rpc: dial tcp: lookup us1.storj.io: no such host", "errorVerbose": "ping satellite: rpc: tcp connector failed: rpc: dial tcp: lookup us1.storj.io: no such host\n\tstorj.io/common/rpc.HybridConnector.DialContext.func1:190"}
2023-05-15T14:58:45.329+0300    WARN    trust   Failed to fetch URLs from source; used cache    {"source": "https://www.storj.io/dcs-satellites", "error": "HTTP source: Get \"https://www.storj.io/dcs-satellites\": dial tcp: lookup www.storj.io: no such host", "errorVerbose": "HTTP source: Get \"https://www.storj.io/dcs-satellites\": dial tcp: lookup www.storj.io: no such host\n\tstorj.io/storj/storagenode/trust.(*HTTPSource).FetchEntries:68\n\tstorj.io/storj/storagenode/trust.(*List).fetchEntries:90\n\tstorj.io/storj/storagenode/trust.(*List).FetchURLs:49\n\tstorj.io/storj/storagenode/trust.(*Pool).fetchURLs:251\n\tstorj.io/storj/storagenode/trust.(*Pool).Refresh:188\n\tstorj.io/storj/storagenode/trust.(*Pool).Run:119\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2.1:87\n\truntime/pprof.Do:44\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2:86\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:75"}
2023-05-15T14:58:45.508+0300    ERROR   contact:service ping satellite failed   {"Satellite ID": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6", "attempts": 1, "error": "ping satellite: rpc: tcp connector failed: rpc: dial tcp: lookup ap1.storj.io: no such host", "errorVerbose": "ping satellite: rpc: tcp connector failed: rpc: dial tcp: lookup ap1.storj.io: no such host\n\tstorj.io/common/rpc.HybridConnector.DialContext.func1:190"}
2023-05-15T14:58:45.508+0300    INFO    bandwidth       Performing bandwidth usage rollups
2023-05-15T14:58:45.509+0300    ERROR   contact:service ping satellite failed   {"Satellite ID": "12rfG3sh9NCWiX3ivPjq2HtdLmbqCrvHVEzJubnzFzosMuawymB", "attempts": 1, "error": "ping satellite: rpc: tcp connector failed: rpc: dial tcp: lookup europe-north-1.tardigrade.io: no such host", "errorVerbose": "ping satellite: rpc: tcp connector failed: rpc: dial tcp: lookup europe-north-1.tardigrade.io: no such host\n\tstorj.io/common/rpc.HybridConnector.DialContext.func1:190"}
2023-05-15T14:58:45.560+0300    ERROR   contact:service ping satellite failed   {"Satellite ID": "12tRQrMTWUWwzwGh18i7Fqs67kmdhH9t6aToeiwbo5mfS2rUmo", "attempts": 1, "error": "ping satellite: rpc: tcp connector failed: rpc: dial tcp: lookup us2.storj.io: no such host", "errorVerbose": "ping satellite: rpc: tcp connector failed: rpc: dial tcp: lookup us2.storj.io: no such host\n\tstorj.io/common/rpc.HybridConnector.DialContext.func1:190"}
2023-05-15T14:58:45.570+0300    ERROR   contact:service ping satellite failed   {"Satellite ID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE", "attempts": 1, "error": "ping satellite: rpc: tcp connector failed: rpc: dial tcp: lookup saltlake.tardigrade.io: no such host", "errorVerbose": "ping satellite: rpc: tcp connector failed: rpc: dial tcp: lookup saltlake.tardigrade.io: no such host\n\tstorj.io/common/rpc.HybridConnector.DialContext.func1:190"}
2023-05-15T14:58:45.589+0300    ERROR   contact:service ping satellite failed   {"Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "attempts": 1, "error": "ping satellite: rpc: tcp connector failed: rpc: dial tcp: lookup eu1.storj.io: no such host", "errorVerbose": "ping satellite: rpc: tcp connector failed: rpc: dial tcp: lookup eu1.storj.io: no such host\n\tstorj.io/common/rpc.HybridConnector.DialContext.func1:190"}
2023-05-15T14:58:52.638+0300    INFO    trust   Scheduling next refresh {"after": "8h6m23.466525864s"}
```

What I did:

  1. The reason for reinstalling the OS was error 0xc0000000e on the “old” Windows (it was Win11). I tried to repair it with the official tool on a memory stick, but it failed.

  2. During the repairs, I disconnected all drives except the system drive C.

  3. The identity was saved on the same HDD as the node, and also separately as a backup in another place. I used the identity from the backup after checking that both identities had the same size and data; they were equal (a stricter way to compare them is sketched right after this list).

  4. I installed Win10 and the Chrome browser.

  5. Connected all HDDs back.

  6. I installed the Windows GUI node as usual and added the firewall rule.

  7. After starting the node, the QUIC status was offline for some time, maybe 30 minutes, after which it became OK.
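For reference, a stricter check than comparing file sizes is to count the certificates in each identity copy (a rough sketch, assuming the default identity location; run it again against the backup copy's path):

```
# Expected: 2 matches for ca.cert and 3 for identity.cert on a signed identity
(Select-String BEGIN "$env:AppData\Storj\Identity\storagenode\ca.cert").Count
(Select-String BEGIN "$env:AppData\Storj\Identity\storagenode\identity.cert").Count
```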

Both identities:

Basically, that is it.

Regards,
Alexander

These errors show you’ve lost data: data was requested and the file is not accessible. This is why your Audit score dropped and eventually went too low, causing your node to be disqualified.


These errors point to a DNS or connectivity issue as well, but those wouldn’t cause disqualification.
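If you want to rule out DNS on that machine, a quick check could look like this (a sketch; any satellite hostname from your log will do, and 7777 is the usual satellite port):

```
# Both should succeed; failures here would explain the "no such host" errors
Resolve-DnsName us1.storj.io
Test-NetConnection us1.storj.io -Port 7777
```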


Does it mean it’s “all over”? I have no idea why my most stable node, which worked normally for a year, started losing files.

Regards,
Alexander

These logs are from now, when the node is already disqualified. I am not that familiar with how a node behaves in that condition (no DQ for me, yet), but there is a "file does not exist" error. Are you sure the node’s config has the correct directory set? The drive letter could have changed, or maybe you pointed it to the wrong folder by mistake. On Windows, with the node running as a service, it should point to the folder that contains the blobs, garbage, temp and trash folders and all the databases.
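A quick way to double-check is to print the storage path the service is actually configured with (a sketch, assuming the default install location of the Windows GUI node):

```
# Show the configured storage directory of the Windows GUI node
Select-String "storage.path" "$env:ProgramFiles/Storj/Storage Node/config.yaml"
```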

More logs
Thanks, you are absolutely right, the drive letter was wrong.


Is it possible to fix the disqualification?

Regards,
Alexander


Unfortunately, disqualification is final. There is no coming back.


You can use the log entry to check whether the file exists or not…

If you check in your blobs folder for the relevant file…

"Satellite ID": "12tRQrMTWUWwzwGh18i7Fqs67kmdhH9t6aToeiwbo5mfS2rUmo" = folder name "arej6usf33ki2kukzd5v6xgry2tdr56g45pp3aao6llsaaaaaaaa"

"Piece ID": "K7HIAWUIQMCH7UTRJFR4V5EPQO4NIMNBN5SEUWJOBBZGI23ZKGAQ" means the folder is k7, and the file with the same name is inside it:

path\blobs\arej6usf33ki2kukzd5v6xgry2tdr56g45pp3aao6llsaaaaaaaa\k7\K7HIAWUIQMCH7UTRJFR4V5EPQO4NIMNBN5SEUWJOBBZGI23ZKGAQ.sj1

Does it exist? Or not?
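On Windows that check could look like this (a sketch; replace D:\storagenode with your actual data path):

```
# True if the audited piece is still on disk, False if it is gone
Test-Path "D:\storagenode\blobs\arej6usf33ki2kukzd5v6xgry2tdr56g45pp3aao6llsaaaaaaaa\k7\K7HIAWUIQMCH7UTRJFR4V5EPQO4NIMNBN5SEUWJOBBZGI23ZKGAQ.sj1"
```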

Reference - Satellite info (Address, ID, Blobs folder, Hex)

I wouldn’t like to be in this situation :confused:

Unfortunately it is permanent. The node is dead. You need to start over.


It looks very strange. Anyone could make such a stupid mistake. Why not raise some alert, or give the node operator time to fix it?

Regards,
Alexander

There is a check in place for a similar event (the storage-dir-verification file). The problem was that you effectively installed a brand-new node using the same identity but pointed it at the new path, which recreated the check file in the new data location.

If you had been monitoring the ERRORs in the log file, or the scores in the dashboard, you could have fixed the issue before disqualification as well.
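Monitoring the scores does not have to be manual either; the web dashboard reads them from a local API, so something like this (a sketch, assuming the default dashboard port 14002) dumps the per-satellite scores:

```
# Per-satellite audit/suspension/online scores, same data the dashboard shows
Invoke-RestMethod http://localhost:14002/api/sno/satellites | ConvertTo-Json -Depth 5
```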

From Storj’s point of view your node just looked like a node which lost data.


Maybe I could have. I did a similar check before, for a different node and for different reasons: I stopped the node and waited for a comment. But the comment was something like “it is not that kind of mistake…”, and it didn’t bring any harm, so I didn’t worry. Is it possible to see the status of all my nodes (with different IPs and in different locations)?

Regards,
Alexander


Yes, it’s possible: [Tech Preview] Multinode Dashboard Binaries
