{
  "Statuses": null,
  "Help": "To access Storagenode services, please use DRPC protocol!",
  "AllHealthy": true
}
So, I guess it’s working.
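For context, this JSON appears to be the node's plain-HTTP health response; a minimal way to fetch it (a sketch only, assuming the default public port 28967 and substituting your node's address) would be:
curl -s http://<node-address>:28967/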
Yes, it is. Could clearing the browser cookies help to revive the dashboard?
There is something wrong with the dashboard since 1.116.
Did try with 1.117 and 1.118-rc, but it is all the same.
The dashboard is trying to load http://:14002/api/sno/, but in most cases the request never finishes. In the rare cases when it does complete, it takes tens of seconds to respond.
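For reference, the API response time can be measured directly from the shell; this is only a sketch, assuming the console is reachable on localhost:14002 (adjust the host as needed):
curl -s -o /dev/null -w 'HTTP %{http_code} in %{time_total}s\n' --max-time 120 http://localhost:14002/api/sno/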
When this endpoint is requested, the storagenode logs show the following:
2024-11-22T01:43:05Z ERROR filewalker failed to store the last batch of prefixes {"Process": "storagenode", "error": "filewalker: context canceled", "errorVerbose": "filewalker: context canceled\n\tstorj.io/storj/storagenode/pieces.(*FileWalker).WalkSatellitePieces:78\n\tstorj.io/storj/storagenode/pieces.(*FileWalker).WalkAndComputeSpaceUsedBySatelliteWithWalkFunc:129\n\tstorj.io/storj/storagenode/pieces.(*FileWalker).WalkAndComputeSpaceUsedBySatellite:83\n\tstorj.io/storj/storagenode/pieces.(*Store).WalkAndComputeSpaceUsedBySatellite:751\n\tstorj.io/storj/storagenode/pieces.(*Store).SpaceUsedBySatellite:724\n\tstorj.io/storj/storagenode/pieces.(*Store).SpaceUsedForPieces:662\n\tstorj.io/storj/storagenode/console.(*Service).GetDashboardData:213\n\tstorj.io/storj/storagenode/console/consoleapi.(*StorageNode).StorageNode:45\n\tnet/http.HandlerFunc.ServeHTTP:2171\n\tgithub.com/gorilla/mux.(*Router).ServeHTTP:210\n\tnet/http.serverHandler.ServeHTTP:3142\n\tnet/http.(*conn).serve:2044"}
2024-11-22T01:43:05Z ERROR pieces used-space-filewalker failed {"Process": "storagenode", "Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Lazy File Walker": false, "error": "filewalker: filewalker: context canceled; used_space_per_prefix_db: context canceled", "errorVerbose": "filewalker: filewalker: context canceled; used_space_per_prefix_db: context canceled\n\tstorj.io/storj/storagenode/pieces.(*FileWalker).WalkAndComputeSpaceUsedBySatelliteWithWalkFunc:181\n\tstorj.io/storj/storagenode/pieces.(*FileWalker).WalkAndComputeSpaceUsedBySatellite:83\n\tstorj.io/storj/storagenode/pieces.(*Store).WalkAndComputeSpaceUsedBySatellite:751\n\tstorj.io/storj/storagenode/pieces.(*Store).SpaceUsedBySatellite:724\n\tstorj.io/storj/storagenode/pieces.(*Store).SpaceUsedForPieces:662\n\tstorj.io/storj/storagenode/console.(*Service).GetDashboardData:213\n\tstorj.io/storj/storagenode/console/consoleapi.(*StorageNode).StorageNode:45\n\tnet/http.HandlerFunc.ServeHTTP:2171\n\tgithub.com/gorilla/mux.(*Router).ServeHTTP:210\n\tnet/http.serverHandler.ServeHTTP:3142\n\tnet/http.(*conn).serve:2044"}
2024-11-22T02:02:48Z INFO Anonymized tracing enabled {"Process": "storagenode"}
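To see how often the dashboard request cancels the used-space filewalker, the log can be filtered for these errors; a sketch, assuming the node runs as a systemd service named storagenode (for Docker setups, pipe docker logs through grep instead):
journalctl -u storagenode --since today | grep -E 'used-space-filewalker|failed to store the last batch'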
Release build
Version: v1.116.7
Build timestamp: 08 Nov 24 22:44 UTC
Git commit: a94c0b66fa910ccab9e6c2c12ad7d0ea46d54783
-rw-r--r-- 1 storj storj 24K Nov 22 01:40 used_space_per_prefix.db
-rw-r--r-- 1 storj storj 32K Nov 22 01:49 used_space_per_prefix.db-shm
-rw-r--r-- 1 storj storj 226K Nov 22 01:49 used_space_per_prefix.db-wal
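As a sanity check, the new database can be inspected with sqlite3; a sketch only, to be run on a copy or while the node is stopped, with the path adjusted to your storage directory:
sqlite3 used_space_per_prefix.db "PRAGMA integrity_check;"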
No changes were made to the configuration or permissions; only the binary was replaced with the new one (linux amd64) and the node was restarted.
This is happening on all nodes that were updated to 1.116.x, across multiple systems.
The Prometheus endpoint is broken as well: with storage2.monitor.dedicated-disk: true, the node no longer reports the disk space used and the trash size.
These metrics are useful for monitoring the delta between the used space reported by the satellites and the space actually used on the nodes, so it would be nice to have this fixed too.
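For anyone who wants to verify this on their own node, the raw metrics can be pulled from the debug endpoint; a sketch, assuming debug.addr is pinned to 127.0.0.1:5999 in the config (by default the port is random) and that Prometheus metrics are served under /metrics there:
curl -s http://127.0.0.1:5999/metrics | grep -iE 'disk|trash'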
Thank you.
I am not sure what you mean by that. I built storj-up from main, and the node dashboard is loading for me.
For both options - the allocated and the dedicated disk.
However, I will share this with the team, but it would be better if you could submit a GitHub issue with steps to reproduce.
Kind of expected, because this feature is in an alpha state. Thanks for sharing!
I had this enabled:
storage2.monitor.dedicated-disk: true
Everything is ok now.