No data since the last wipe

Hi, I hope everything is still okay, but I have had no data since the last wipe.


root@rock64:~# docker info -f '{{.OSType}}/{{.Architecture}}'
linux/aarch64

root@rock64:~# docker image ls
REPOSITORY              TAG              IMAGE ID       CREATED        SIZE
storjlabs/storagenode   beta             5f376c43c578   2 days ago     28MB
storjlabs/watchtower    latest-arm32v6   8e786d6152eb   5 months ago   8.73MB

root@rock64:~# docker image inspect 5f376c43c578 -f '{{.Architecture}}'
arm64


Storage Node Dashboard ( Node Version: v0.20.1 )

======================

ID 1Uw…
Last Contact 0s ago
Uptime 12h37m18s

               Available     Used     Egress     Ingress
 Bandwidth       30.0 TB      0 B        0 B         0 B (since Sep 1)
      Disk        0.8 TB      0 B

Bootstrap bootstrap.storj.io:8888
Internal 127.0.0.1:7778
External xxx.yyy.xxx:28967


http://storjnet.info/@1Uw


https://www.yougetsignal.com/tools/open-ports/

Port 28967 is open on xxx.yyy.xxx.


root@rock64:~# docker exec storagenode wget -qO - http://localhost:14002/api/dashboard | jq
{
    "data": {
        "nodeID": "1Uw…",
        "wallet": "0x000…",
        "satellites": [
            "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S",
            "118UWpMCHzs6CvSgWd9BfFVjw5K9pZbJjkfZJexMtSkmKxvvAW",
            "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6",
            "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs"
        ],
        "diskSpace": {
            "used": 0,
            "available": 800
        },
        "bandwidth": {
            "egress": {
                "repair": 0,
                "audit": 0,
                "usage": 0
            },
            "ingress": {
                "repair": 0,
                "usage": 0
            },
            "used": 0,
            "available": 30000
        },
        "version": {
            "major": 0,
            "minor": 20,
            "patch": 1
        },
        "upToDate": true
    },
    "error": ""
}
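The satellites array in the dashboard response above can be fed through jq to get one ID per line, which is handy for scripting per-satellite checks. A minimal sketch (the heredoc is just an abbreviated stand-in for the real response; in practice, pipe the wget output shown above into the same filter):

```shell
# Extract the satellite IDs from a dashboard response with jq.
# The heredoc below is an abbreviated copy of the response above;
# normally you would pipe the wget output straight into this filter.
jq -r '.data.satellites[]' <<'EOF'
{"data": {"satellites": [
    "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S",
    "118UWpMCHzs6CvSgWd9BfFVjw5K9pZbJjkfZJexMtSkmKxvvAW"
]}}
EOF
# prints the two IDs, one per line
```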


{
    "data": {
        "id": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S",
        "storageDaily": null,
        "bandwidthDaily": null,
        "audit": {
            "totalCount": 1824,
            "successCount": 855,
            "alpha": 812.3,
            "beta": 920.55,
            "score": 0.4687653287936059
        },
        "uptime": {
            "totalCount": 25303,
            "successCount": 24847,
            "alpha": 1000,
            "beta": 0,
            "score": 1
        }
    },
    "error": ""
}
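As an aside (this is my reading of the numbers, not an official formula reference): the score field looks like alpha / (alpha + beta). Checking against the first satellite's audit figures quoted above:

```shell
# Hypothesis check: score appears to equal alpha / (alpha + beta).
# alpha and beta are the first satellite's audit figures from above.
awk 'BEGIN { alpha = 812.3; beta = 920.55
             printf "score = %.4f\n", alpha / (alpha + beta) }'
# prints: score = 0.4688, matching the reported 0.4687653…
```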
{
    "data": {
        "id": "118UWpMCHzs6CvSgWd9BfFVjw5K9pZbJjkfZJexMtSkmKxvvAW",
        "storageDaily": null,
        "bandwidthDaily": null,
        "audit": {
            "totalCount": 4441,
            "successCount": 3791,
            "alpha": 11.974738784748437,
            "beta": 8.025261215251554,
            "score": 0.5987369392374221
        },
        "uptime": {
            "totalCount": 53439,
            "successCount": 52667,
            "alpha": 99.99605994766956,
            "beta": 0.003940052330317241,
            "score": 0.9999605994766968
        }
    },
    "error": ""
}
{
    "data": {
        "id": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6",
        "storageDaily": null,
        "bandwidthDaily": null,
        "audit": {
            "totalCount": 1761,
            "successCount": 1751,
            "alpha": 11.974738784767581,
            "beta": 8.02526121523242,
            "score": 0.598736939238379
        },
        "uptime": {
            "totalCount": 14950,
            "successCount": 14751,
            "alpha": 99.99999999999132,
            "beta": 8.569110149774706e-12,
            "score": 0.9999999999999143
        }
    },
    "error": ""
}
{
    "data": {
        "id": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs",
        "storageDaily": null,
        "bandwidthDaily": null,
        "audit": {
            "totalCount": 3649,
            "successCount": 3136,
            "alpha": 11.747257661305484,
            "beta": 8.2527423386945,
            "score": 0.5873628830652746
        },
        "uptime": {
            "totalCount": 29224,
            "successCount": 28568,
            "alpha": 99.9999999999992,
            "beta": 6.294706190326696e-15,
            "score": 1
        }
    },
    "error": ""
}


root@rock64:~# docker logs --details -f storagenode
2019-09-06T21:56:03.159Z INFO Configuration loaded from: /app/config/config.yaml
2019-09-06T21:56:03.165Z INFO Operator email: xxx@yyy.com
2019-09-06T21:56:03.166Z INFO operator wallet: 0x151…
2019-09-06T21:56:06.362Z INFO version running on version v0.20.1
2019-09-06T21:56:06.368Z INFO db.migration Latest Version {"version": 19}
2019-09-06T21:56:06.371Z INFO bandwidth Performing bandwidth usage rollups
2019-09-06T21:56:06.372Z INFO Node 1Uw… started
2019-09-06T21:56:06.372Z INFO Public server started on [::]:28967
2019-09-06T21:56:06.373Z INFO Private server started on 127.0.0.1:7778
2019-09-06T21:56:06.377Z INFO piecestore:monitor Remaining Bandwidth {"bytes": 30000000000000}
2019-09-06T21:56:06.498Z INFO version running on version v0.20.1
2019-09-06T21:56:06.530Z ERROR server gRPC unary error response {"error": "rpc error: code = PermissionDenied desc = untrusted peer 1QzDKGHDeyuRxbvZhcwHU3syxTYtU1jHy5duAKuPxja3XC8ttk"}
2019-09-06T22:01:19.300Z ERROR server gRPC unary error response {"error": "rpc error: code = PermissionDenied desc = untrusted peer 1QzDKGHDeyuRxbvZhcwHU3syxTYtU1jHy5duAKuPxja3XC8ttk"}
2019-09-06T22:06:30.297Z ERROR server gRPC unary error response {"error": "rpc error: code = PermissionDenied desc = untrusted peer 1QzDKGHDeyuRxbvZhcwHU3syxTYtU1jHy5duAKuPxja3XC8ttk"}
2019-09-06T22:11:06.979Z INFO version running on version v0.20.1
2019-09-06T22:11:39.121Z ERROR server gRPC unary error response {"error": "rpc error: code = PermissionDenied desc = untrusted peer 1QzDKGHDeyuRxbvZhcwHU3syxTYtU1jHy5duAKuPxja3XC8ttk"}
2019-09-06T22:16:58.291Z ERROR server gRPC unary error response {"error": "rpc error: code = PermissionDenied desc = untrusted peer 1QzDKGHDeyuRxbvZhcwHU3syxTYtU1jHy5duAKuPxja3XC8ttk"}
2019-09-06T22:22:09.952Z ERROR server gRPC unary error response {"error": "rpc error: code = PermissionDenied desc = untrusted peer 1QzDKGHDeyuRxbvZhcwHU3syxTYtU1jHy5duAKuPxja3XC8ttk"}
2019-09-06T22:26:06.928Z INFO version running on version v0.20.1
2019-09-06T22:27:22.988Z ERROR server gRPC unary error response {"error": "rpc error: code = PermissionDenied desc = untrusted peer 1QzDKGHDeyuRxbvZhcwHU3syxTYtU1jHy5duAKuPxja3

Nothing interesting…


Any comment would be welcome.

Thanks

Your audit scores dropped below the threshold which is why your node is paused. Did you lose any data?

No. Since the wipe there has simply been no new data.
Before the wipe the disk was full.

I’m using a Rock64 plus an external USB 3.0 2.5" 1 TB HDD with no issues.

If my node is paused, is the solution just to wait?

It won’t unpause by itself. You can send an email to support@storj.io, but you should first try to figure out how your node failed a significant number of audits. This likely happened before the wipe; it could be caused by lost data or data being inaccessible. Try to find out, and include everything you know in the email, including your node ID and run command.

Please make sure that you have replaced the dangerous -v option with the safe --mount option, as specified in the documentation.
Also make sure that you mounted your drive statically via /etc/fstab.

root@rock64:~# cat run.sh
#!/bin/bash
# /mnt/1tb/storj/config.yaml
docker stop -t 300 storagenode
docker rm storagenode
docker run -d \
    --name storagenode \
    --restart unless-stopped \
    -p 28967:28967 \
    -p 14002:14002 \
    -e WALLET="0x151…" \
    -e EMAIL="aaa@bbb.com" \
    -e ADDRESS="xxx.yyy.zzz:28967" \
    -e BANDWIDTH="30TB" \
    -e STORAGE="838GB" \
    --mount type=bind,source=/root/.local/share/storj/identity/storagenode,destination=/app/identity \
    --mount type=bind,source=/mnt/1tb/storj,destination=/app/config \
    storjlabs/storagenode:beta

root@rock64:~# cat /etc/fstab

/dev/sda1 /mnt/1tb xfs defaults 0 2
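A related sanity check (a sketch, not something from the official docs): if the drive ever fails to mount at boot, the container would bind-mount the empty /mnt/1tb mountpoint directory and the node would fail audits for every piece it appears to have lost. A small guard before the docker run line can catch that; the function name here is hypothetical:

```shell
# check_mounted PATH: succeed only if PATH is an active mount point,
# so we never start the node against an empty mountpoint directory.
check_mounted() {
    if mountpoint -q "$1"; then
        echo "$1 is mounted"
    else
        echo "$1 is NOT mounted - refusing to start" >&2
        return 1
    fi
}

# Usage with the path from this thread:
#   check_mounted /mnt/1tb || exit 1
```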

Good. Please send an email to support@storj.io or create a ticket on https://support.storj.io with your info and a link to this topic.

Perhaps the problem was this:


https://documentation.storj.io/setup/storage-node

STORAGE: how much disk space you want to allocate to the Storj network

Be sure not to over-allocate space! Allow at least 10% extra for overhead.

If you over-allocate space, you may corrupt your database when the system attempts to store pieces when no more physical space is actually available on your drive.

The minimum storage shared requirement is 500 GB, which means you need a disk of at least 550 GB total size to allow for the 10% overhead.


But it seems okay:

root@rock64:~# df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/sda1       932G   33M  932G   1% /mnt/1tb

93.2 GB for overhead

932 - 93.2 = 838.8

-e STORAGE="838GB"
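That back-of-the-envelope calculation can also be scripted. A minimal sketch using the 932G figure from df above (integer math deliberately rounds down, which stays on the safe side of the 10% rule):

```shell
# Reserve 10% of the disk for overhead and emit the remaining
# allocation in the STORAGE=<n>GB form used in the run command.
total_gb=932                           # from df -h above
alloc_gb=$(( total_gb * 90 / 100 ))    # integer math, rounds down
echo "STORAGE=${alloc_gb}GB"           # prints STORAGE=838GB
```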


Ticket created.

Thanks.

Hi,

support finally solved my issue.

Hello edagner,
We’re happy to inform you that your node has been reinstated now on the satellites it had been paused on. Should you experience a repeat of your failed audit issue after your node was reinstated, please file a follow-up ticket so we can rectify the problem. Thank you for your patience!

Best regards,
Aleksey Leonov
Storj Labs Support

Hello,

I have a similar problem with the latest v0.20.1 (beta) version.

Last Contact 1s ago
Uptime       27m24s

                   Available         Used     Egress     Ingress
     Bandwidth        2.0 TB          0 B        0 B         0 B (since Sep 1)
          Disk      555.3 GB     444.7 GB

Bootstrap bootstrap.storj.io:8888
Internal  127.0.0.1:7778
External  rubi.hopto.org:28967

Any guidance appreciated. Thanks

Hey @Lighthouse,
Welcome to the community!

Could you have a look at the APIs outlined here: Storage node dashboard API.
If any of the audit scores are below 0.60, your node has failed too many audits and is paused.
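That threshold check can be automated with the same jq used earlier in the thread. A sketch against an abbreviated copy of a satellite response (the 0.6 cut-off is the figure quoted in this thread, not something verified in the node's source):

```shell
# Flag a satellite whose audit score sits below the 0.6 pause threshold.
# The heredoc is an abbreviated stand-in for a real satellite response.
jq -r 'if .data.audit.score < 0.6
       then "\(.data.id): audit score \(.data.audit.score) - below 0.6"
       else "\(.data.id): audit score ok"
       end' <<'EOF'
{"data": {"id": "118UWpMCHzs6CvSgWd9BfFVjw5K9pZbJjkfZJexMtSkmKxvvAW",
          "audit": {"score": 0.5987369392383782}}}
EOF
```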

Thank you for that. I believe that's what has happened. Here is what my response looks like.

{
    "data": {
        "id": "118UWpMCHzs6CvSgWd9BfFVjw5K9pZbJjkfZJexMtSkmKxvvAW",
        "storageDaily": [
            {
                "atRestTotal": 57.0804696281751,
                "timestamp": "2019-09-02T00:00:00Z"
            },
            {
                "atRestTotal": 918398.4497988423,
                "timestamp": "2019-09-03T00:00:00Z"
            },
            {
                "atRestTotal": -918257.7938966552,
                "timestamp": "2019-09-04T00:00:00Z"
            },
            {
                "atRestTotal": 2425491.016765555,
                "timestamp": "2019-09-05T00:00:00Z"
            },
            {
                "atRestTotal": -2425686.470040699,
                "timestamp": "2019-09-06T00:00:00Z"
            },
            {
                "atRestTotal": 69360394.8939209,
                "timestamp": "2019-09-02T00:00:00Z"
            },
            {
                "atRestTotal": 9111405742.938883,
                "timestamp": "2019-09-03T00:00:00Z"
            },
            {
                "atRestTotal": -8742308846.275513,
                "timestamp": "2019-09-04T00:00:00Z"
            },
            {
                "atRestTotal": -863060577.0000407,
                "timestamp": "2019-09-05T00:00:00Z"
            },
            {
                "atRestTotal": -1162277409.3426106,
                "timestamp": "2019-09-06T00:00:00Z"
            },
            {
                "atRestTotal": 214864.00520706177,
                "timestamp": "2019-09-02T00:00:00Z"
            },
            {
                "atRestTotal": -229002569.53685632,
                "timestamp": "2019-09-03T00:00:00Z"
            },
            {
                "atRestTotal": 228188068.43659082,
                "timestamp": "2019-09-04T00:00:00Z"
            },
            {
                "atRestTotal": 121333844.87075043,
                "timestamp": "2019-09-05T00:00:00Z"
            },
            {
                "atRestTotal": -119990024.71142451,
                "timestamp": "2019-09-06T00:00:00Z"
            }
        ],
        "bandwidthDaily": null,
        "audit": {
            "totalCount": 928,
            "successCount": 918,
            "alpha": 11.97473878476754,
            "beta": 8.02526121523242,
            "score": 0.5987369392383782
        },
        "uptime": {
            "totalCount": 9272,
            "successCount": 6818,
            "alpha": 99.9999999999579,
            "beta": 4.201639575564444e-11,
            "score": 0.9999999999995798
        }
    },
    "error": ""
}

Now, I do believe I have had decent uptime, so I'm not sure why the score would drop so low. Any idea how/where to create a support ticket? I guess I can also ask/comment on that question too.

Thanks again !

Your node is paused for too many failed audits (lost or unreadable data); your uptime is a different metric, and it's fine.
You can send an email to support@storj.io or create a ticket directly on https://support.storj.io
Send your NodeID and your full docker run command.
