Dashboard extremely slow

Hi folks.

After the watchdog updated my node to the latest image today (Node Version: v0.14.11), I checked the dashboard to see whether everything was running fine.

Running docker exec -it storagenode_1 /app/dashboard.sh takes about 30-60 seconds to start, and afterwards it doesn't update the view anymore.

Hardware utilization looks fine:

MEM USAGE / LIMIT: 180.2 MiB / 7.8 GiB
CPU %: 0.47%
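
(A quick way to check these numbers, assuming the container is named storagenode_1 as above:)

docker stats --no-stream storagenode_1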

Has anyone else run into the same problem?

best regards
Michael

I noticed mine takes much longer to open too, with low CPU usage. But I have high RAM usage (around 80% of it used by Storj).

My node upgraded to 0.14.11 yesterday with no issues.
The dashboard starts in about 2-3 seconds.
I have low CPU utilization, and memory utilization is around 50% of the total.

How much data do you have stored? Mine is 4 TB, and the dashboard takes 25 seconds to open.

So after 600 seconds I stopped counting; the dashboard still hadn't opened :frowning:
It should be about 400 GB by now, not more.
I have also already recreated the Docker container with the same configs and data mount.
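
(For reference, the recreate looked roughly like this; this is a sketch based on the beta setup docs, and all paths and values are placeholders:)

docker rm storagenode_1
docker run -d --restart unless-stopped -p 28967:28967 \
    -e WALLET="0x..." -e EMAIL="user@example.com" -e ADDRESS="xxx.xxx.xxx:28967" \
    -e BANDWIDTH="10TB" -e STORAGE="8TB" \
    --mount type=bind,source=/path/to/identity,destination=/app/identity \
    --mount type=bind,source=/path/to/config,destination=/app/config \
    --name storagenode_1 storjlabs/storagenode:beta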

All I see is:
docker exec -it storagenode_1 /app/dashboard.sh
2019-07-11T17:36:37.586Z INFO Configuration loaded from: /app/config/config.yaml
2019-07-11T17:36:37.612Z INFO Node ID: XXXXXXXXXXXXX

kind regards
Michael

Mine also took quite a bit of time to start; I have about 250 GB stored.

Are you running it on a Pi?


Please show your logs:
docker logs --tail 10 storagenode

So a little background info before the log:

  • The Storj storage node is running in a Docker container.
  • 2 x Intel® Xeon® CPU E5645 + HT
  • 8 GB RAM

Docker itself runs on an HDD-backed Ceph cluster:
Write: 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 20.0311 s, 53.6 MB/s
Read (without cache): 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 16.7692 s, 64.0 MB/s

Data mount for Storj via NFS on HDD:
Write: 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 13.0937 s, 82.0 MB/s
Read (without cache): 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 9.37932 s, 114 MB/s
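
(The benchmark commands were roughly the following; a sketch, with the test-file path as a placeholder and direct I/O used to bypass the page cache:)

dd if=/dev/zero of=/mnt/storj/testfile bs=1M count=1024 oflag=direct   # write test
dd if=/mnt/storj/testfile of=/dev/null bs=1M count=1024 iflag=direct   # read test, no cache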

Finally, after at least 5 minutes:
Storage Node Dashboard ( Node Version: v0.14.11 )

======================

ID XXXXXXXXXXXXXXXXXXXXXXXXXX
Last Contact 0s ago
Uptime 20h33m48s

               Available         Used       Egress      Ingress
 Bandwidth        9.6 TB     416.3 GB     164.3 GB     252.0 GB (since Jul 1)
      Disk        8.2 TB     284.3 GB

Bootstrap bootstrap.storj.io:8888
Internal 127.0.0.1:7778
External xxx.xxx.xxx:28967

Neighborhood Size 139

And the logfile:

2019-07-11T19:20:39.184Z        ERROR   piecestore protocol: rpc error: code = Canceled desc = context canceled
        storj.io/storj/storagenode/piecestore.(*Endpoint).Upload:243
        storj.io/storj/pkg/pb._Piecestore_Upload_Handler:602
        storj.io/storj/pkg/server.logOnErrorStreamInterceptor:23
        google.golang.org/grpc.(*Server).processStreamingRPC:1209
        google.golang.org/grpc.(*Server).handleStream:1282
        google.golang.org/grpc.(*Server).serveStreams.func1.1:717
2019-07-11T19:20:56.664Z        INFO    piecestore      download started        {"Piece ID": "PZW3XJ76QAOBDIN6W565YCJCTBWOK5NKL3WELXHM6TO7UW4MHUAA", "SatelliteID": "118UWpMCHzs6CvSgWd9BfFVjw5K9pZbJjkfZJexMtSkmKxvvAW", "Action": "GET"}
2019-07-11T19:21:40.656Z        INFO    piecestore      upload failed   {"Piece ID": "RQIP5NJJNMNBTM7PO5FINN4XEAGDWG5COCYFI6MEFYXWOTAZHRSQ", "SatelliteID": "118UWpMCHzs6CvSgWd9BfFVjw5K9pZbJjkfZJexMtSkmKxvvAW", "Action": "PUT", "error": "piecestore protocol: rpc error: code = Canceled desc = context canceled", "errorVerbose": "piecestore protocol: rpc error: code = Canceled desc = context canceled\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).Upload:243\n\tstorj.io/storj/pkg/pb._Piecestore_Upload_Handler:602\n\tstorj.io/storj/pkg/server.logOnErrorStreamInterceptor:23\n\tgoogle.golang.org/grpc.(*Server).processStreamingRPC:1209\n\tgoogle.golang.org/grpc.(*Server).handleStream:1282\n\tgoogle.golang.org/grpc.(*Server).serveStreams.func1.1:717"}
2019-07-11T19:21:40.656Z        ERROR   piecestore protocol: rpc error: code = Canceled desc = context canceled
        storj.io/storj/storagenode/piecestore.(*Endpoint).Upload:243
        storj.io/storj/pkg/pb._Piecestore_Upload_Handler:602
        storj.io/storj/pkg/server.logOnErrorStreamInterceptor:23
        google.golang.org/grpc.(*Server).processStreamingRPC:1209
        google.golang.org/grpc.(*Server).handleStream:1282
        google.golang.org/grpc.(*Server).serveStreams.func1.1:717
2019-07-11T19:21:40.698Z        INFO    piecestore      upload failed   {"Piece ID": "HSW52FHE6AUES7FRMGB3ISY5Q47CSKZA5KGBJVESAHYPOFB4BTSA", "SatelliteID": "118UWpMCHzs6CvSgWd9BfFVjw5K9pZbJjkfZJexMtSkmKxvvAW", "Action": "PUT", "error": "piecestore protocol: rpc error: code = Canceled desc = context canceled", "errorVerbose": "piecestore protocol: rpc error: code = Canceled desc = context canceled\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).Upload:243\n\tstorj.io/storj/pkg/pb._Piecestore_Upload_Handler:602\n\tstorj.io/storj/pkg/server.logOnErrorStreamInterceptor:23\n\tgoogle.golang.org/grpc.(*Server).processStreamingRPC:1209\n\tgoogle.golang.org/grpc.(*Server).handleStream:1282\n\tgoogle.golang.org/grpc.(*Server).serveStreams.func1.1:717"}
2019-07-11T19:21:40.698Z        ERROR   piecestore protocol: rpc error: code = Canceled desc = context canceled
        storj.io/storj/storagenode/piecestore.(*Endpoint).Upload:243
        storj.io/storj/pkg/pb._Piecestore_Upload_Handler:602
        storj.io/storj/pkg/server.logOnErrorStreamInterceptor:23
        google.golang.org/grpc.(*Server).processStreamingRPC:1209
        google.golang.org/grpc.(*Server).handleStream:1282
        google.golang.org/grpc.(*Server).serveStreams.func1.1:717
2019-07-11T19:21:53.042Z        INFO    piecestore      upload started  {"Piece ID": "QKWKMPE7S5TQ6LDKROQRL7LTVERG2F5T52F6QHJ74MAUGV7SUHCA", "SatelliteID": "118UWpMCHzs6CvSgWd9BfFVjw5K9pZbJjkfZJexMtSkmKxvvAW", "Action": "PUT"}
2019-07-11T19:22:18.684Z        INFO    piecestore      upload started  {"Piece ID": "F6SITCPVHD2IQT7RXSGKG2EJMSBUKWL5V5EHVJ5T5DJNVU6HFBKA", "SatelliteID": "118UWpMCHzs6CvSgWd9BfFVjw5K9pZbJjkfZJexMtSkmKxvvAW", "Action": "PUT"}
2019-07-11T19:22:30.479Z        ERROR   piecestore      upload rejected, too many requests      {"live requests": 7}
2019-07-11T19:22:32.621Z        ERROR   piecestore      upload rejected, too many requests      {"live requests": 7}
2019-07-11T19:22:35.429Z        ERROR   piecestore      upload rejected, too many requests      {"live requests": 7}
2019-07-11T19:22:50.468Z        ERROR   piecestore      upload rejected, too many requests      {"live requests": 7}
2019-07-11T19:23:00.600Z        ERROR   piecestore      upload rejected, too many requests      {"live requests": 7}
2019-07-11T19:23:03.053Z        ERROR   piecestore      upload rejected, too many requests      {"live requests": 7}

@DJSnoopy try increasing storage2.max-concurrent-requests to a value higher than the default of 7, then check how fast your dashboard loads after the change.

Add these lines at the end of config.yaml. You have to stop the node, make the change, then start the node again.

# Maximum number of simultaneous transfers
storage2.max-concurrent-requests: 7
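
For example, the restart cycle could look like this (assuming the container is named storagenode_1; -t 300 gives the node time to finish in-flight transfers before stopping):

docker stop -t 300 storagenode_1
# edit config.yaml and raise storage2.max-concurrent-requests above the default of 7
docker start storagenode_1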

This issue seems fixed for me now that the RAM leak was fixed in v0.14.13. The dashboard starts in under 10 seconds.

I can also confirm that it now starts in under 1 second for me since the last update.

My benchmark was at 4.2 TB; now that I have zero, it opens in milliseconds! Heh.

Sorry, is this setting no longer available for nodes on Docker?
In the log I see:

Invalid configuration file key {"Process": "storagenode-updater", "Key": "storage2.max-concurrent-requests"}

No, this is the storagenode-updater complaining about keys in config.yaml that it doesn't understand, because both processes (storagenode and storagenode-updater) share the same config file. But since it's logged at the INFO level, you can safely ignore these complaints.
