Strange node issues - container going offline

So, the DNS resolver seems to be working. Have you tried the same with one of the satellite URLs, such as europe-west-1.tardigrade.io? I expect the result will be the same, but it would put any potential DNS resolver issues to rest.
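
If you want to rule DNS out specifically, you can also resolve the satellite hostname from inside the container; a minimal sketch, assuming the container provides busybox's nslookup (if it doesn't, run the lookup from the host instead):

    root@storjnode:~# docker exec -it storagenode nslookup europe-west-1.tardigrade.io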

The next time this happens, instead of restarting the node, leave the node running and restart your modem/router (if possible).
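
While the router reboots, it can help to keep an eye on the node's live log so you can see whether it recovers on its own; a minimal sketch using the standard docker CLI:

    root@storjnode:~# docker logs --tail 20 -f storagenode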

I've already restarted my modem.

Resolving the satellite hostname also worked:
    root@storjnode:~# docker exec -it storagenode ping -c 10 europe-west-1.tardigrade.io

    PING europe-west-1.tardigrade.io (130.211.98.145): 56 data bytes

    64 bytes from 130.211.98.145: seq=0 ttl=44 time=24.996 ms

    64 bytes from 130.211.98.145: seq=1 ttl=44 time=23.526 ms

    64 bytes from 130.211.98.145: seq=2 ttl=44 time=23.475 ms

    64 bytes from 130.211.98.145: seq=3 ttl=44 time=23.490 ms

docker logs storagenode 2>&1 | head -n 30

root@storjnode:~#

root@storjnode:~# docker logs storagenode 2>&1 | head -n 30

2020-02-11T21:38:49.133Z INFO Configuration loaded from: /app/config/config.yaml

2020-02-11T21:38:49.136Z INFO Operator email: xxx

2020-02-11T21:38:49.136Z INFO operator wallet: 0x8b51C0ecf0381C41e1b299EBFafA352997983741

2020-02-11T21:38:49.753Z INFO version running on version v0.31.12

2020-02-11T21:38:49.817Z INFO db.migration Database Version {"version": 31}

2020-02-11T21:38:50.784Z INFO preflight:localtime start checking local system clock with trusted satellites' system clock.

2020-02-11T21:38:52.476Z INFO preflight:localtime local system clock is in sync with trusted satellites' system clock.

2020-02-11T21:38:52.479Z INFO trust Scheduling next refresh {"after": "3h11m14.87110877s"}

2020-02-11T21:38:52.482Z INFO bandwidth Performing bandwidth usage rollups

2020-02-11T21:38:52.483Z INFO Node 12B6nqxC8rgMDGJQS7piieqwq6tk8y83vu167a7ryeuELX49DE2 started

2020-02-11T21:38:52.484Z INFO Public server started on [::]:28967

2020-02-11T21:38:52.485Z INFO Private server started on 127.0.0.1:7778

2020-02-11T21:38:52.504Z INFO piecestore:monitor Remaining Bandwidth {"bytes": 1999986123508736}

2020-02-11T21:38:52.598Z INFO version running on version v0.31.12

2020-02-11T21:39:25.171Z INFO piecestore upload started {"Piece ID": "QJIXKDBPORVNKNAWAX65QMLVGHAECJMO7X25PYNCX47RPS24GRSA", "Satellite ID": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6", "Action": "PUT"}

2020-02-11T21:39:25.851Z INFO piecestore upload failed {"Piece ID": "QJIXKDBPORVNKNAWAX65QMLVGHAECJMO7X25PYNCX47RPS24GRSA", "Satellite ID": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6", "Action": "PUT", "error": "context canceled", "errorVerbose": "context canceled\n\tstorj.io/common/rpc/rpcstatus.Wrap:79\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).doUpload:483\n\tstorj.io/storj/storagenode/piecestore.(*drpcEndpoint).Upload:257\n\tstorj.io/common/pb.DRPCPiecestoreDescription.Method.func1:1066\n\tstorj.io/drpc/drpcserver.(*Server).doHandle:175\n\tstorj.io/drpc/drpcserver.(*Server).HandleRPC:153\n\tstorj.io/drpc/drpcserver.(*Server).ServeOne:114\n\tstorj.io/drpc/drpcserver.(*Server).Serve.func2:147\n\tstorj.io/drpc/drpcctx.(*Tracker).track:51"}

2020-02-11T21:39:39.153Z INFO piecestore upload started {"Piece ID": "2TANTF6XQUWQL45AGGHTS7WB57T3NAPE75UPLXU7XRBS4RN5RT7A", "Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Action": "PUT"}

2020-02-11T21:39:40.822Z INFO piecestore upload failed {"Piece ID": "2TANTF6XQUWQL45AGGHTS7WB57T3NAPE75UPLXU7XRBS4RN5RT7A", "Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Action": "PUT", "error": "context canceled", "errorVerbose": "context canceled\n\tstorj.io/common/rpc/rpcstatus.Wrap:79\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).doUpload:483\n\tstorj.io/storj/storagenode/piecestore.(*drpcEndpoint).Upload:257\n\tstorj.io/common/pb.DRPCPiecestoreDescription.Method.func1:1066\n\tstorj.io/drpc/drpcserver.(*Server).doHandle:175\n\tstorj.io/drpc/drpcserver.(*Server).HandleRPC:153\n\tstorj.io/drpc/drpcserver.(*Server).ServeOne:114\n\tstorj.io/drpc/drpcserver.(*Server).Serve.func2:147\n\tstorj.io/drpc/drpcctx.(*Tracker).track:51"}

2020-02-11T21:40:09.816Z INFO piecestore upload started {"Piece ID": "KBHIVANR56ON77GC7SNIOY3CPLC5KC222EMU62MI4XM2J6NSWT3Q", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "PUT"}

2020-02-11T21:40:10.481Z INFO piecestore upload failed {"Piece ID": "KBHIVANR56ON77GC7SNIOY3CPLC5KC222EMU62MI4XM2J6NSWT3Q", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "PUT", "error": "context canceled", "errorVerbose": "context canceled\n\tstorj.io/common/rpc/rpcstatus.Wrap:79\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).doUpload:483\n\tstorj.io/storj/storagenode/piecestore.(*drpcEndpoint).Upload:257\n\tstorj.io/common/pb.DRPCPiecestoreDescription.Method.func1:1066\n\tstorj.io/drpc/drpcserver.(*Server).doHandle:175\n\tstorj.io/drpc/drpcserver.(*Server).HandleRPC:153\n\tstorj.io/drpc/drpcserver.(*Server).ServeOne:114\n\tstorj.io/drpc/drpcserver.(*Server).Serve.func2:147\n\tstorj.io/drpc/drpcctx.(*Tracker).track:51"}

2020-02-11T21:40:11.400Z INFO piecestore upload started {"Piece ID": "RYD3ELHOAAG7MBKZWPYINPIGZSOWVMJDGMH2BSTP4GGQAL6MPPIA", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "PUT"}

2020-02-11T21:40:11.973Z INFO piecestore upload failed {"Piece ID": "RYD3ELHOAAG7MBKZWPYINPIGZSOWVMJDGMH2BSTP4GGQAL6MPPIA", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "PUT", "error": "context canceled", "errorVerbose": "context canceled\n\tstorj.io/common/rpc/rpcstatus.Wrap:79\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).doUpload:483\n\tstorj.io/storj/storagenode/piecestore.(*drpcEndpoint).Upload:257\n\tstorj.io/common/pb.DRPCPiecestoreDescription.Method.func1:1066\n\tstorj.io/drpc/drpcserver.(*Server).doHandle:175\n\tstorj.io/drpc/drpcserver.(*Server).HandleRPC:153\n\tstorj.io/drpc/drpcserver.(*Server).ServeOne:114\n\tstorj.io/drpc/drpcserver.(*Server).Serve.func2:147\n\tstorj.io/drpc/drpcctx.(*Tracker).track:51"}

2020-02-11T21:40:21.931Z INFO orders.121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6 sending {"count": 10}

2020-02-11T21:40:21.931Z INFO orders.12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs sending {"count": 22}

2020-02-11T21:40:21.931Z INFO orders.1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE sending {"count": 37}

2020-02-11T21:40:21.931Z INFO orders.12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S sending {"count": 5}

2020-02-11T21:40:22.170Z INFO orders.12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs finished

2020-02-11T21:40:22.384Z INFO orders.12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S finished

2020-02-11T21:40:22.651Z INFO orders.1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE finished

2020-02-11T21:40:23.156Z INFO orders.121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6 finished

Very nice start of the storagenode. Is it offline again?
I don't think so, because:
Port 28967 is open on storjnode2.ddnss.de
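
For reference, that check can also be reproduced from a shell outside your local network with a plain TCP connect; a minimal sketch, assuming netcat (nc) is installed:

    nc -vz storjnode2.ddnss.de 28967

A successful connect only proves the port forward works at that moment, so it is most useful when run right while the dashboard reports the node as offline.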

It has been offline again since yesterday. I also did a docker update, but it didn't help either.
Next week I want to try different network equipment.

Do you have any firewall, especially on the ISP side?
Also, please check your DDNS updater (it could be a program on your PC or a setting on your router), which should update storjnode2.ddnss.de whenever your public IP changes.
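
One way to verify the updater is to compare what the hostname currently resolves to with your actual public IP; a minimal sketch, using ifconfig.me only as an example of a public-IP echo service (any equivalent works):

    dig +short storjnode2.ddnss.de
    curl -s https://ifconfig.me

If the two addresses stay different for more than a few minutes after your public IP changes, the DDNS updater is the likely culprit.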

I don't have any unusual firewall between the node and the public internet.
I've also had a look at the DNS update times, and they were fine, below 10 seconds.
I've swapped out the network equipment today; let's see if that helps.

I've changed my network equipment, and it has been stable for 3 days now.
