Identity seems intact, but when I confirm it I get wrong values… The port check tool is a success.

docker exec -it storagenode ls -l /app/identity

gives:

total 96
-rw-r--r-- 1 501 dialout  558 Feb 12  2022 ca.1644699961.cert
-rw-r--r-- 1 501 dialout 1088 Feb 12  2022 ca.cert
-rw------- 1 501 dialout  241 Feb 12  2022 ca.key
-rw-r--r-- 1 501 dialout 1096 Feb 12  2022 identity.1644699961.cert
-rw-r--r-- 1 501 dialout 1626 Feb 12  2022 identity.cert
-rw------- 1 501 dialout  241 Feb 12  2022 identity.key

When I try to re-authorize, I get this:

2023/03/10 07:51:36 proto: duplicate proto type registered: node.SigningRequest
2023/03/10 07:51:36 proto: duplicate proto type registered: node.SigningResponse
Error: certificate: authorization already claimed: 
<deleted email address>
:19DDFH..

But when I try to confirm the identity from this step:

I get values 1 and 2 instead of 2 and 3…

Since you are starting a new node, I just want to bring this topic to your attention.

Why re-authorize?
Judging from the files and output, it has already been done since Feb 12, 2022. Maybe you forgot?

The only reason is that it is offline, and when I confirm that authorization I get values of 1 and 2 respectively, not 2 and 3…

From this step: Step 5. Create an Identity - Storj Node Operator Docs

Step 3: Confirm the identity…

I see.
What is your command for this step?
The exact one shown in the instructions?
Does the /app/identity/ path you pass to Docker point to the same location?

Error: certificate: authorization already claimed: 

means you already used your auth token.

You can also open the ca.cert file; you should see 2 blocks (BEGIN … END, BEGIN … END), and three for identity.cert.
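
For example, something like this should show those blocks through the running container (a sketch, assuming the identity is mounted at /app/identity as in the ls output above):

```
# View the certificate chains inside the container — expect two BEGIN/END pairs
# in ca.cert and three in identity.cert
docker exec -it storagenode cat /app/identity/ca.cert
docker exec -it storagenode cat /app/identity/identity.cert
```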


Yes, the same commands from that step, and they return 1, 2 vs. 2, 3… it is in the same location…

I suspect your node is offline for a different reason, because the files with the numbers …1644… only exist if it’s authorized. Did you check your logfile?

If I had to guess, you may have created another (new) identity, which is not authorized, and you are testing that new one.

Just to confirm, can you show this line of your docker run command?
--mount type=bind,source="<identity-dir>",destination=/app/identity \
and show
ls -l ~/.local/share/storj/identity/storagenode/

Can you confirm those commands again, but for macOS Docker please… I can’t get them to return anything…

zsh: command not found: --mount

The other one gives “No such file or directory”.

My bad, I assumed Linux.

That could be another hint. Which Docker version do you have installed? Just in case, did you read this?

Please install version 2.1.0.5: Docker Desktop Community

All newer versions have various issues, such as losing the network connection, false disk errors, and so on, as described in this thread: Nodes offline for 3/4 days. Is it possible to recover?
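
If you are not sure which version you currently have, you can check it from a terminal:

```
# Print the installed Docker version
docker --version
```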

The first one is not a command; I just wanted to know the settings of your docker run command, especially the quoted line. See here: Storage Node - Storj Docs

The macOS equivalent should be
ls -l ~/Library/Application\ Support/Storj/identity/storagenode/
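
The matching --mount line in your docker run command would then point at that same folder, roughly like this (an illustrative path only; <your-user> is a placeholder and your identity may live elsewhere):

```
# Example only — the source path must be the folder that actually holds your identity files
--mount type=bind,source="/Users/<your-user>/Library/Application Support/Storj/identity/storagenode",destination=/app/identity \
```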

Likely you used a different location for the check than the one used in your docker run command.
You need to use the same identity path from your docker run command in the check commands instead of ~/Library/Application\ Support/Storj/identity/storagenode/.
You may check it this way:

docker exec -it storagenode grep BEGIN /app/identity/ca.cert
docker exec -it storagenode grep BEGIN /app/identity/identity.cert

Those commands returned these:

-----BEGIN CERTIFICATE-----

-----BEGIN CERTIFICATE-----

-----BEGIN CERTIFICATE-----

-----BEGIN CERTIFICATE-----

-----BEGIN CERTIFICATE-----

Oops, missed the -c option, sorry.

docker exec -it storagenode grep -c BEGIN /app/identity/ca.cert
docker exec -it storagenode grep -c BEGIN /app/identity/identity.cert

But it looks like your identity should be fine.

Please post the last 20 lines from your logs between two new lines with three backticks, like this:

```
logs lines here
```
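
If it helps, one way to grab them (assuming the container is named storagenode, as in your earlier commands):

```
# Show the last 20 log lines from the storagenode container
docker logs --tail 20 storagenode
```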

:frowning: Got an email already that the node is suspended… :frowning:

I get values of 2 and 3 now…

And here’s the log…

2023-03-13T00:15:16.973Z	INFO	collector	deleted expired piece	{"Process": "storagenode", "Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Piece ID": "IOEB6IP6UCLWOUPFRCMXOE25PILJ43NAVQP3Y4IT5MO3773GYJFQ"}
2023-03-13T00:15:17.130Z	INFO	collector	deleted expired piece	{"Process": "storagenode", "Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Piece ID": "Z5YHRVPYVPRFYPPCUWNMSBFUA2FK7O2W64TLN3YYRGCFFZ6JNJFQ"}
2023-03-13T00:15:17.253Z	INFO	collector	deleted expired piece	{"Process": "storagenode", "Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Piece ID": "HPFJNA4ZEL5JCGC7RWG2TZ2SFG6XGV6YVARW2JBPTWSVK7RVDLOA"}
2023-03-13T00:15:17.377Z	INFO	collector	deleted expired piece	{"Process": "storagenode", "Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Piece ID": "KKLFT4NCBWXAQQZQPPWHFSIKJ2DGOGBMY6HTIQRQC7OJ5ZNMFRBA"}
2023-03-13T00:15:17.526Z	INFO	collector	deleted expired piece	{"Process": "storagenode", "Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Piece ID": "7SGQXUZDBU6DFXHKHWX7B37HWXK37I2PDJMYYDFDQOUR7VGQ2JOA"}
2023-03-13T00:15:17.609Z	INFO	collector	deleted expired piece	{"Process": "storagenode", "Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Piece ID": "OHOUA46LJ4NFPK6FFXY5MV53D663C6G2HEHCBVDFFCXVJ22QRGJQ"}
2023-03-13T00:15:17.679Z	INFO	collector	deleted expired piece	{"Process": "storagenode", "Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Piece ID": "SQ5DT7VXVT5Q5ID2MNJ3KOHN6OKBLIZRNK3X6PS4EWRX7KTZTIAQ"}
2023-03-13T00:15:17.815Z	INFO	collector	deleted expired piece	{"Process": "storagenode", "Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Piece ID": "LEHUGPLWKY4OGVL4D3GCGKXHWJZTNA35DUX7OQNDGC6X4VONW2GA"}
2023-03-13T00:15:17.891Z	INFO	collector	deleted expired piece	{"Process": "storagenode", "Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Piece ID": "GRHTO5HKAZJNQYRJUIPCH7UFDVZFXHUTGUOILN7VVFQGGT6MZAEA"}
2023-03-13T00:15:17.938Z	INFO	collector	deleted expired piece	{"Process": "storagenode", "Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Piece ID": "MGQNLE2NIJN76JHE7Z3FXVVXFQWUQM6YTC6PSNPFQ7LRF5YB6FFQ"}
2023-03-13T00:15:18.060Z	INFO	collector	deleted expired piece	{"Process": "storagenode", "Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Piece ID": "LSBLY4KXE7V476TE35O5QTNT7RJZQN52PF4IAEXYRZ5H6TIVAGOA"}
2023-03-13T00:15:18.185Z	INFO	collector	deleted expired piece	{"Process": "storagenode", "Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Piece ID": "YRY3MAUC3OFJBRIR5ZCBBLA3HONLSHOKX2J2CSL6Z53GAR5B2EOQ"}
2023-03-13T00:15:18.264Z	INFO	collector	deleted expired piece	{"Process": "storagenode", "Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Piece ID": "GUS74IOUHNYTBK4JSJZC5PMPNUSZBG3LX5UPG5ZCIZVI6TVM4ETA"}
2023-03-13T00:15:18.323Z	INFO	collector	deleted expired piece	{"Process": "storagenode", "Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Piece ID": "X76EA47EGAOD4P3ZSS5K5L6YCQBSR47OUIKHGHUYZ4KGMKWH4SAA"}
2023-03-13T00:15:18.486Z	INFO	collector	deleted expired piece	{"Process": "storagenode", "Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Piece ID": "NRGYGALP6CWZZP5QQDDEFR3LLLHVSF7FJIIXKGQJ6IDZQ3BZARDA"}
2023-03-13T00:15:18.646Z	INFO	collector	deleted expired piece	{"Process": "storagenode", "Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Piece ID": "SI5NQJQNDC4SOXY2KXY2Y7XISI7RPVNKUA47YWXM7L57TYJLT7HQ"}
2023-03-13T00:15:18.843Z	INFO	collector	deleted expired piece	{"Process": "storagenode", "Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Piece ID": "ECNWITEWKG62QX2PP527H7AQXODWC6NNZEGTLDL63AL6X47NRDPQ"}
2023-03-13T00:15:19.195Z	INFO	collector	deleted expired piece	{"Process": "storagenode", "Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Piece ID": "GL52ALZMO5CQ7T6VVO6SUZ5QS2BV7F56QNDKAIETEOALR62HKHGQ"}
2023-03-13T00:15:19.278Z	INFO	collector	deleted expired piece	{"Process": "storagenode", "Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Piece ID": "KKX3SDIOTRDGC77JK5UIESO45UMPURZWUGKWHOS6DLXAFW5IHZYQ"}
2023-03-13T00:15:19.352Z	INFO	collector	deleted expired piece	{"Process": "storagenode", "Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Piece ID": "K3H6KBROKOM2DRAAMIZEV4ZL4Q7X6ING3SCV7UBBP27L7ICWSS4Q"}
2023-03-13T00:15:19.353Z	INFO	collector	collect	{"Process": "storagenode", "count": 195}
2023-03-13T00:15:25.989Z	INFO	pieces:trash	emptying trash started	{"Process": "storagenode", "Satellite ID": "12tRQrMTWUWwzwGh18i7Fqs67kmdhH9t6aToeiwbo5mfS2rUmo"}
2023-03-13T00:15:50.939Z	INFO	pieces:trash	emptying trash started	{"Process": "storagenode", "Satellite ID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE"}
2023-03-13T00:16:11.094Z	INFO	pieces:trash	emptying trash started	{"Process": "storagenode", "Satellite ID": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6"}
2023-03-13T00:16:45.690Z	INFO	pieces:trash	emptying trash started	{"Process": "storagenode", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S"}
2023-03-13T00:17:01.211Z	ERROR	contact:service	ping satellite failed 	{"Process": "storagenode", "Satellite ID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE", "attempts": 1, "error": "ping satellite: failed to ping storage node, your node indicated error code: 0, rpc: tcp connector failed: rpc: dial tcp 184.144.45.33:28967: connect: connection timed out", "errorVerbose": "ping satellite: failed to ping storage node, your node indicated error code: 0, rpc: tcp connector failed: rpc: dial tcp 184.144.45.33:28967: connect: connection timed out\n\tstorj.io/storj/storagenode/contact.(*Service).pingSatelliteOnce:147\n\tstorj.io/storj/storagenode/contact.(*Service).pingSatellite:101\n\tstorj.io/storj/storagenode/contact.(*Chore).updateCycles.func1:87\n\tstorj.io/common/sync2.(*Cycle).Run:99\n\tstorj.io/common/sync2.(*Cycle).Start.func1:77\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:75"}
2023-03-13T00:17:01.432Z	ERROR	contact:service	ping satellite failed 	{"Process": "storagenode", "Satellite ID": "12tRQrMTWUWwzwGh18i7Fqs67kmdhH9t6aToeiwbo5mfS2rUmo", "attempts": 1, "error": "ping satellite: failed to ping storage node, your node indicated error code: 0, rpc: tcp connector failed: rpc: dial tcp 184.144.45.33:28967: connect: connection timed out", "errorVerbose": "ping satellite: failed to ping storage node, your node indicated error code: 0, rpc: tcp connector failed: rpc: dial tcp 184.144.45.33:28967: connect: connection timed out\n\tstorj.io/storj/storagenode/contact.(*Service).pingSatelliteOnce:147\n\tstorj.io/storj/storagenode/contact.(*Service).pingSatellite:101\n\tstorj.io/storj/storagenode/contact.(*Chore).updateCycles.func1:87\n\tstorj.io/common/sync2.(*Cycle).Run:99\n\tstorj.io/common/sync2.(*Cycle).Start.func1:77\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:75"}
2023-03-13T00:17:03.311Z	ERROR	contact:service	ping satellite failed 	{"Process": "storagenode", "Satellite ID": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6", "attempts": 1, "error": "ping satellite: failed to ping storage node, your node indicated error code: 0, rpc: tcp connector failed: rpc: dial tcp 184.144.45.33:28967: connect: connection timed out", "errorVerbose": "ping satellite: failed to ping storage node, your node indicated error code: 0, rpc: tcp connector failed: rpc: dial tcp 184.144.45.33:28967: connect: connection timed out\n\tstorj.io/storj/storagenode/contact.(*Service).pingSatelliteOnce:147\n\tstorj.io/storj/storagenode/contact.(*Service).pingSatellite:101\n\tstorj.io/storj/storagenode/contact.(*Chore).updateCycles.func1:87\n\tstorj.io/common/sync2.(*Cycle).Run:99\n\tstorj.io/common/sync2.(*Cycle).Start.func1:77\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:75"}
2023-03-13T00:17:03.952Z	ERROR	contact:service	ping satellite failed 	{"Process": "storagenode", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "attempts": 1, "error": "ping satellite: failed to ping storage node, your node indicated error code: 0, rpc: tcp connector failed: rpc: dial tcp 184.144.45.33:28967: connect: connection timed out", "errorVerbose": "ping satellite: failed to ping storage node, your node indicated error code: 0, rpc: tcp connector failed: rpc: dial tcp 184.144.45.33:28967: connect: connection timed out\n\tstorj.io/storj/storagenode/contact.(*Service).pingSatelliteOnce:147\n\tstorj.io/storj/storagenode/contact.(*Service).pingSatellite:101\n\tstorj.io/storj/storagenode/contact.(*Chore).updateCycles.func1:87\n\tstorj.io/common/sync2.(*Cycle).Run:99\n\tstorj.io/common/sync2.(*Cycle).Start.func1:77\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:75"}
2023-03-13T00:17:03.952Z	ERROR	contact:service	ping satellite failed 	{"Process": "storagenode", "Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "attempts": 1, "error": "ping satellite: failed to ping storage node, your node indicated error code: 0, rpc: tcp connector failed: rpc: dial tcp 184.144.45.33:28967: connect: connection timed out", "errorVerbose": "ping satellite: failed to ping storage node, your node indicated error code: 0, rpc: tcp connector failed: rpc: dial tcp 184.144.45.33:28967: connect: connection timed out\n\tstorj.io/storj/storagenode/contact.(*Service).pingSatelliteOnce:147\n\tstorj.io/storj/storagenode/contact.(*Service).pingSatellite:101\n\tstorj.io/storj/storagenode/contact.(*Chore).updateCycles.func1:87\n\tstorj.io/common/sync2.(*Cycle).Run:99\n\tstorj.io/common/sync2.(*Cycle).Start.func1:77\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:75"}
2023-03-13T00:17:04.136Z	ERROR	contact:service	ping satellite failed 	{"Process": "storagenode", "Satellite ID": "12rfG3sh9NCWiX3ivPjq2HtdLmbqCrvHVEzJubnzFzosMuawymB", "attempts": 1, "error": "ping satellite: failed to ping storage node, your node indicated error code: 0, rpc: tcp connector failed: rpc: dial tcp 184.144.45.33:28967: connect: connection timed out", "errorVerbose": "ping satellite: failed to ping storage node, your node indicated error code: 0, rpc: tcp connector failed: rpc: dial tcp 184.144.45.33:28967: connect: connection timed out\n\tstorj.io/storj/storagenode/contact.(*Service).pingSatelliteOnce:147\n\tstorj.io/storj/storagenode/contact.(*Service).pingSatellite:101\n\tstorj.io/storj/storagenode/contact.(*Chore).updateCycles.func1:87\n\tstorj.io/common/sync2.(*Cycle).Run:99\n\tstorj.io/common/sync2.(*Cycle).Start.func1:77\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:75"}

The port is still closed. Please check that the WAN IP on your router matches the IP on Open Port Check Tool - Test Port Forwarding on Your Router.
If you use a DDNS hostname, make sure it’s updated (replace external.address.net with your actual DDNS address):

nslookup external.address.net 8.8.8.8

The resolved IP should match the WAN IP and the IP on yougetsignal; if not, please check the DDNS updater on your router (usually in the DDNS section) and that your DDNS subscription has not expired.
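
A quick way to compare the two from any machine on your LAN (a sketch; ifconfig.me is just one of several public what-is-my-IP services, and external.address.net is the placeholder from above):

```
# Public (WAN) IP as seen from the outside
curl -s https://ifconfig.me
# IP your DDNS hostname currently resolves to, queried via Google's public DNS
nslookup external.address.net 8.8.8.8
# The two addresses should match; if they don't, the DDNS record is stale
```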