Storj Terminal Dashboard

Hi everyone.

I’m a node operator myself and am currently hosting 5 nodes. I found it cumbersome to always have to sift through the logs of each node to check whether any filewalkers are currently running, and which ones. That’s why I wrote a little tool to summarize all of that in a simple way.

Storj Terminal Dashboard

It is pretty simple to use: you only need to set up the paths to the log and the database of each node in the .json file. You can specify any number of nodes. After that, just run the script without any arguments.
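For example, the config could look something like this (the key names here are illustrative; see the example .json in the repo for the actual schema):

```json
{
  "NODE1": {
    "log": "/var/log/storagenode1/node.log",
    "database": "/mnt/storj/node1/storage"
  },
  "NODE2": {
    "log": "/var/log/storagenode2/node.log",
    "database": "/mnt/storj/node2/storage"
  }
}
```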

Note that my script only shows the state of each filewalker since the most recent restart of the node.

I am open to constructive criticism and any suggestions for improvement. Feel free to fork my repo and further improve on it.

Credits to René Smeekes and his storj-earnings script, as my terminal dashboard also utilizes his script to pull financial and disk storage data.

17 Likes

Nice :slight_smile:

Do we have to shut down the nodes before running the script? Because it’s accessing the databases…

No, you don’t have to if you are running Linux. Technically, my script doesn’t even touch the databases; it just parses the logs.
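To illustrate the idea (this is not the actual implementation; the tag strings below are placeholders for whatever the script really matches):

```python
# Minimal sketch of filewalker state parsing. The tag strings are
# placeholders, not necessarily the exact strings the real script matches.
FILEWALKER_TAGS = (
    "gc-filewalker",             # garbage collector
    "trash-cleanup-filewalker",
    "used-space-filewalker",
)

def filewalker_states(log_path):
    """Return the most recent state ('running' or 'finished') per filewalker.

    Later log lines overwrite earlier ones, so the dict reflects the last
    event seen for each filewalker tag in the log.
    """
    states = {}
    with open(log_path, encoding="utf-8") as log:
        for line in log:
            for tag in FILEWALKER_TAGS:
                if tag in line:
                    if "started" in line:
                        states[tag] = "running"
                    elif "finished" in line:
                        states[tag] = "finished"
    return states
```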

The databases are only read by René’s script to pull financial and disk storage data. The limitations of his script still apply, though. If I recall correctly, on Windows you should not use his script without shutting down the nodes and copying the databases first.

I’ll check that and update the README of my dashboard accordingly.

2 Likes

@lukhuber Great work - thanks! :slight_smile:

Doesn’t seem to work?

Setup:

  • Ubuntu 22LTS
  • Python 3.10.12 (from default repos)
  • Node in docker, logging to flat files on host filesystem
  • Daily rotation of logs with logrotate w/gzip.
    (node.log - node.log.1.gz - node.log.2.gz…)

I get this output:

storj-tools# ./storj-dashboard.py
[ ~~ ] Reading log of 251 … 484.5/878.0 MB
…

┌───── NODE MAIN STATS ─────┐┌─────────────── FILEWALKER ────────────────┐
│                           ││                                           │
│ Current Total:    12.25 $ ││       GARBAGE     TRASH       USED SPACE  │
│ Estimated Total:  16.80 $ ││       COLLECTOR   CLEANUP     FILEWALKER  │
│                           ││                                           │
│ Disk Used:       14.18 TB ││   SL  unknown     unknown     unknown     │
│ Unpaid Data:      1.17 TB ││  AP1  unknown     unknown     unknown     │
│                           ││  EU1  unknown     unknown     unknown     │
│ Report Deviation: 27.31 % ││  US1  unknown     unknown     unknown     │
│                           ││                                           │
└───────────────────────────┘└───────────────────────────────────────────┘


═══════════════════════════ All Nodes - Summary ══════════════════════════

┌───── NODE MAIN STATS ─────┐┌─────────────── FILEWALKER ────────────────┐
│                           ││                                           │
│ Estimated total: $  16.80 ││       GARBAGE     TRASH       USED SPACE  │
│                           ││       COLLECTOR   CLEANUP     FILEWALKER  │
└───────────────────────────┘│                                           │
                             │   SL  0 running   0 running   0 running   │
                             │  AP1  0 running   0 running   0 running   │
                             │  EU1  0 running   0 running   0 running   │
                             │  US1  0 running   0 running   0 running   │
                             │                                           │
                             └───────────────────────────────────────────┘

However, I know from its logs that this node was emptying trash earlier today:

2024-06-22T22:28:48Z    INFO    pieces:trash    emptying trash started  {"Process": "storagenode", "Satellite ID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE"}
2024-06-23T03:45:06Z    INFO    pieces:trash    emptying trash finished {"Process": "storagenode", "Satellite ID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE", "elapsed": "5h16m17.694585625s"}
2024-06-23T03:45:06Z    INFO    pieces:trash    emptying trash started  {"Process": "storagenode", "Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs"}
2024-06-23T03:45:06Z    INFO    pieces:trash    emptying trash finished {"Process": "storagenode", "Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "elapsed": "254.999µs"}
2024-06-23T03:45:06Z    INFO    pieces:trash    emptying trash started  {"Process": "storagenode", "Satellite ID": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6"}
2024-06-23T03:46:07Z    INFO    pieces:trash    emptying trash finished {"Process": "storagenode", "Satellite ID": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6", "elapsed": "1m1.016199671s"}
2024-06-23T03:46:07Z    INFO    pieces:trash    emptying trash started  {"Process": "storagenode", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S"}
2024-06-23T03:46:07Z    INFO    pieces:trash    emptying trash finished {"Process": "storagenode", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "elapsed": "16.54863ms"}

A few questions:
Any requirements for logging in storagenode config?

It could make sense to be able to specify more than the main log, so it could e.g. look back through the main log plus x gzipped rotations.
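For the logrotate layout above (node.log, node.log.1.gz, node.log.2.gz), reading the rotated files oldest-first could be sketched roughly like this (function name and naming pattern are illustrative, not part of the tool):

```python
import gzip
from pathlib import Path

def iter_log_lines(log_path, rotations=2):
    """Yield log lines oldest-first: node.log.N.gz (highest N first),
    then the current node.log.

    Sketch assuming a logrotate layout like node.log, node.log.1.gz,
    node.log.2.gz; adjust the pattern to your rotation config.
    """
    base = Path(log_path)
    for n in range(rotations, 0, -1):      # highest number = oldest rotation
        gz = base.with_name(f"{base.name}.{n}.gz")
        if gz.exists():
            with gzip.open(gz, "rt", encoding="utf-8") as f:
                yield from f
    if base.exists():
        with open(base, encoding="utf-8") as f:
            yield from f
```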

What is Report Deviation?

The dashboard does not display when trash is emptied, as this is usually a rather short process.
It just shows whether the garbage-collector-filewalker, trash-cleanup-filewalker, or used-space-filewalker is running.

Maybe your logs got rotated after all of your filewalkers ran, which is why no states are displayed in the dashboard.

To your questions:
No, you don’t need any special storagenode config for this dashboard to run. I do recommend setting the piecestore log level to WARN, though. For your nodes in Docker, this means including the following flag in the docker-compose: --log.custom-level=piecestore=WARN
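In a docker-compose file, that could look something like this sketch (the image and service name are placeholders; merge the flag into your existing service definition):

```yaml
services:
  storagenode:
    image: storjlabs/storagenode:latest
    # Extra runtime flags are passed as container command arguments;
    # this mutes the very chatty piecestore upload/download INFO lines
    # while leaving the filewalker log lines intact.
    command: --log.custom-level=piecestore=WARN
```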

Yes, it is planned to also include rotated logs for each respective node. Currently this is not supported, though; only one log per node can be specified.

Report Deviation is displayed if René’s storj_earnings script issues a warning that the disk storage data reported by the satellites is not accurate. This means the data in “NODE MAIN STATS” in the dashboard is also not accurate.

1 Like

Thanks for the answers! :+1:

Forced a restart and a filewalker run with lazy mode and scan-at-startup enabled; now it shows the expected output:

┌───── NODE MAIN STATS ─────┐┌─────────────── FILEWALKER ────────────────┐
│                           ││                                           │
│ Current Total:    12.41 $ ││       GARBAGE     TRASH       USED SPACE  │
│ Estimated Total:  16.81 $ ││       COLLECTOR   CLEANUP     FILEWALKER  │
│                           ││                                           │
│ Disk Used:       14.18 TB ││   SL  unknown     0d 2h ago   running     │
│ Unpaid Data:      1.19 TB ││  AP1  unknown     0d 2h ago   unknown     │
│                           ││  EU1  unknown     0d 2h ago   unknown     │
│ Report Deviation: 27.01 % ││  US1  unknown     0d 2h ago   unknown     │
│                           ││                                           │
└───────────────────────────┘└───────────────────────────────────────────┘


═══════════════════════════ All Nodes - Summary ══════════════════════════

┌───── NODE MAIN STATS ─────┐┌─────────────── FILEWALKER ────────────────┐
│                           ││                                           │
│ Estimated total: $  16.81 ││       GARBAGE     TRASH       USED SPACE  │
│                           ││       COLLECTOR   CLEANUP     FILEWALKER  │
└───────────────────────────┘│                                           │
                             │   SL  0 running   0 running   1 running   │
                             │  AP1  0 running   0 running   0 running   │
                             │  EU1  0 running   0 running   0 running   │
                             │  US1  0 running   0 running   0 running   │
                             │                                           │
                             └───────────────────────────────────────────┘

However, the timezone is off. I’m in CET and the logs are in Z (UTC), so it reports times 2 hours off.

Unfortunately, I have no control over the timezone the user has set up. You would have to set your node’s timezone accordingly to show accurate times.
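That said, since the log timestamps end in Z (UTC), one possible approach would be for the script to convert them to the machine’s local zone while parsing. This is just a sketch of that idea, not what the script currently does:

```python
from datetime import datetime, timezone

def to_local(log_timestamp: str) -> datetime:
    """Convert a UTC log timestamp like '2024-06-22T22:28:48Z' into the
    machine's local timezone, so '0d 2h ago' output matches the local clock.
    """
    utc = datetime.strptime(log_timestamp, "%Y-%m-%dT%H:%M:%SZ")
    return utc.replace(tzinfo=timezone.utc).astimezone()
```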

1 Like

What db files does your script use specifically?

cat /etc/timezone
date +%z

Issues only occur if you access the databases over a network protocol like NFS or SMB, because SQLite locking is unreliable in such scenarios. This includes Docker nodes on Windows, as those setups use SMB to share the storage. Stopping your node is not necessary if everything runs locally.

1 Like

Will the filewalker part work when limiting the logfile size (I have mine limited to 10 MB)? Is it compatible with hosting the nodes in Docker?

Perhaps not, unless you have also reduced the log level to WARN. That log level should be enough for this dashboard, am I right, @lukhuber?

Better get rid of the exception?

Always getting this: ValueError: time data '{"log":"2024-06-25T' does not match format '%Y-%m-%dT%H:%M:%S'

Don’t know how to solve this problem; all my logs show the T… (using Docker and Linux)

I got it all running at some point, but I still don’t know what’s going on. Right now it’s not working.

This happens if there is an error with the earnings script. Do you perhaps have an error in the path specification in storj-dashboard.json?

But you are right, I should make this error clear in the exception.

Yes, it definitely does. I have reduced my log level to warn also.

Yes, it should work. Please note, however, that the information in the dashboard may be very inaccurate if you do not allow larger logs.

That is very strange. Your logs are only showing the date of each entry. There should also be a time associated with each entry. What log level are you using?

I am using loglevel: INFO

I guess it fails here:
self.earnings = subprocess.run(['python3', earningsCalculator, dbPath], capture_output=True, text=True)

I don’t have python3 in my PATH, running the script via "C:\Users\<...>\AppData\Local\Programs\Python\Python312\python.exe" D:\storj-scripts\storj-dashboard.py in a .bat file.
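As a workaround sketch (not the author’s code): using sys.executable instead of the hard-coded 'python3' launches the child script with whatever interpreter is already running the dashboard, which also works on Windows setups where python3 is not on PATH:

```python
import subprocess
import sys

def run_earnings(earnings_script: str, db_path: str) -> str:
    """Run the earnings calculator with the same interpreter as this script.

    sys.executable points at the currently running Python executable, so this
    works even when 'python3' is not on PATH (common on Windows installs).
    """
    result = subprocess.run(
        [sys.executable, earnings_script, db_path],
        capture_output=True, text=True,
    )
    return result.stdout
```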