Log-Exporter for Prometheus with Grafana Dashboard

I managed to finish my work on the log-exporter!
If you followed my How-To "Monitor all nodes in your LAN", adding this exporter will be easy for you.

Installation using Docker

sudo docker run -d --restart unless-stopped --user "1000:1000" \
    -p 9144:9144 \
    --mount type=bind,source="<path_to_your_logfile_directory>",destination=/app/logs \
    --name storj-log-exporter \
    kevinkk525/storj-log-exporter:latest -config /app/config.yml

Change the container name to your liking and insert the path to your logfile directory in place of <…>. The logfile must be named "node.log", but the file itself must not appear in the path used in the docker run command! E.g. you need a path like /mnt/storj/ if your logfile is /mnt/storj/node.log (binding the logfile into the container directly causes problems with inotify detecting changes in the file).
If you run multiple exporters, make sure to change the host port, e.g. -p 9145:9144 etc. (see the example below).
The user "1000:1000" should be fine unless your logfiles can't be read by that user.
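
For example, a second exporter for another node might look like this (container name, host port, and path are just placeholders to adapt):

sudo docker run -d --restart unless-stopped --user "1000:1000" \
    -p 9145:9144 \
    --mount type=bind,source="<path_to_your_second_logfile_directory>",destination=/app/logs \
    --name storj-log-exporter2 \
    kevinkk525/storj-log-exporter:latest -config /app/config.yml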

Now you can check the output at http://<node_ip>:9144/metrics
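
For a quick check from any machine that can reach the node:

curl -s http://<node_ip>:9144/metrics | head -n 20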

Configure in prometheus.yml

If you followed my How-To "Monitor all nodes in your LAN" on the Storj forum, you should already have a job in prometheus.yml looking like this:

  - job_name: storagenode1
    scrape_interval: 30s
    scrape_timeout: 20s
    metrics_path: /
    static_configs:
      - targets: ["storj-exporter1:9651"]
        labels:
          instance: "node1"

For each node, add the log-exporter target to your existing job so it looks like this:

  - job_name: storagenode1
    scrape_interval: 30s
    scrape_timeout: 20s
    metrics_path: /
    static_configs:
      - targets: ["storj-exporter1:9651","storj-log-exporter1:9144"]
        labels:
          instance: "node1"

Then restart Prometheus. To verify it worked, check the Targets page in the Prometheus web UI.
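
If Prometheus runs in docker as in the How-To, that could look roughly like this (container name and address are assumptions):

sudo docker restart prometheus
# then check the targets in a browser: http://<prometheus_ip>:9090/targets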

Add dashboard to Grafana

Add the dashboard from the file dashboard_log_exporter.json in the same way as described in my How-To "Monitor all nodes in your LAN".

Update: New Dashboard

I created a new dashboard that also needs the storj-exporter from @greener (Prometheus Storj-Exporter); it can be found in his dashboard repository as well as mine. You can download it from here: https://raw.githubusercontent.com/kevinkk525/storj-log-exporter/main/dashboard_log_exporter.json
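
To fetch the dashboard JSON, e.g.:

wget https://raw.githubusercontent.com/kevinkk525/storj-log-exporter/main/dashboard_log_exporter.json

Then import it through Grafana's "Import dashboard" dialog, as described in the How-To.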

The new dashboard has many metrics and many options; choose what you want to see, and (un)hide the sections accordingly. Coloring has been standardized: ingress/upload is green, egress/download is blue, storage (I/O) is purple-ish. Also, egress is always shown negative in graphs and ingress positive (netdata does it the same way; the old exporter from @greener does it the other way around, so don't be surprised).

Sections:

  • Combined summary with all important information. If you only look at these panels, you won't miss anything important. It warns you about (among other things) new error messages or the minimum uptime/audit/suspension score of any node on any satellite, so you can quickly see if one node is having a problem.
  • Node overview showing all nodes in a Boom Table with the most important information. Get a quick overview of all nodes and possible problems (audit score dropping, etc.).
  • Different forms of NetIO graphs: simple (maybe you like it because it has fewer colors), by node (multiple colors, a very distinct per-node view), by satellite (so you can finally see where all that traffic comes from, or which satellite deletes lots of data)
  • Success rates
  • Piece information (see what average piece size you get/send, and the repair vs. usage difference)
  • Detailed stats from each node

Result

Combined-Exporter-Dashboard:


Notes

The Error Count excludes falsely classified errors (e.g. shutdown of runner, download/upload failed due to client-side errors, graceful exit errors if you GE'd on stefan-benten).

Restarts of the exporter container can cause some "strange" values on the dashboard, like upload/download success rates of >100%. This is simply due to the exporter having missed a few "upload/download started" lines while the container was restarting. There's nothing I can do about that; those lines are just lost (there is an option to always read the whole file, but it would cause other problems with metrics, so I can't use it). The success rate will go back to normal after a while.

Let me know if you miss a metric or if something is wrong.

Future

I will publish a dashboard that combines the metrics from the storj-exporter and this storj-log-exporter but it might take a while.


So just tried setting this up… one note for your start command: it's missing a "\" at the end of the fourth line.

But I can't seem to get this to start properly on the node I tried it on. The container start command goes through, and it looks like it starts okay, but then in my docker ps -a list the storj-log-exporter container just sits in "restarting" state.

here's the docker start command I tried using:

sudo docker run -d --restart unless-stopped --user "1000:1000" \
    -p 9144:9144 \
    --mount type=bind,source="/mnt/Storj/storagenode/node.log",destination=/app/logs \
    --name storj-log-exporter \
    storj-log-exporter -config /app/config.yml

also tried running it without the --user "1000:1000" just in case that was the issue, no luck.

Prior to trying this I didn't have the docker logs redirected to a file, so I set that up, and I can see that it's being populated. Any thoughts?
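
A minimal sketch of that log redirection, assuming the standard docker setup where /app/config inside the node container maps to the storagenode directory on the host (the paths and container name below are just this example's):

# only append if log.output is not already set in config.yaml; container name is an assumption
echo 'log.output: "/app/config/node.log"' | sudo tee -a /mnt/Storj/storagenode/config.yaml
sudo docker restart storagenode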

You need the path to your log directory, not the logfile itself! so: /mnt/Storj/storagenode

I would also set up some log rotation now, so you won't end up with a multi-gigabyte logfile.
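
A minimal logrotate sketch for that (path and rotation settings are examples; copytruncate keeps the node writing to the same file, so the exporter's file watch should survive rotations):

sudo tee /etc/logrotate.d/storagenode <<'EOF'
/mnt/Storj/storagenode/node.log {
    weekly
    rotate 8
    compress
    missingok
    notifempty
    copytruncate
}
EOF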

I don't see where I would be missing that?

I ran my command:
sudo docker run -d --restart unless-stopped --user "1000:1000"
-p 9144:9144
--mount type=bind,source="/mnt/StorjHDD/",destination=/app/logs
--name storj-log-exporter
storj-log-exporter -config /app/config.yml

but it gives me the following error:

Does anyone have an idea of what I did wrong?

Whoops, forgot a '\' in the command, sorry… Guess that's what @fmoledina meant, but his backslash got removed by the markdown interpreter too :smiley:

sudo docker run -d --restart unless-stopped --user "1000:1000" \
    -p 9144:9144 \
    --mount type=bind,source="<path_to_your_logfiles>",destination=/app/logs \
    --name storj-log-exporter \
    storj-log-exporter -config /app/config.yml

By the way: it might be easier to paste the command into a file log_exporter.sh, make it executable, change the values, and run it from the file instead of pasting all the lines every time.
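
A minimal sketch of that, using the command from the first post:

cat > log_exporter.sh <<'EOF'
#!/bin/sh
sudo docker run -d --restart unless-stopped --user "1000:1000" \
    -p 9144:9144 \
    --mount type=bind,source="<path_to_your_logfile_directory>",destination=/app/logs \
    --name storj-log-exporter \
    kevinkk525/storj-log-exporter:latest -config /app/config.yml
EOF
chmod +x log_exporter.sh
./log_exporter.sh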

Nice, thanks, it works now!
Also, the link to the dashboard JSON doesn't seem to work for me.

Glad you got it working!

Oh yeah, that link was a local GitHub link… I just pasted that part :smiley: Fixed it now.


Works like a charm!
Man, I'll never be able to thank you enough for all the work you put in; monitoring my 3 nodes is soooo much easier now.


Awesome!
Ah well, I just share the work I would have done anyway :smiley: It makes monitoring easier for me too. And @greener did a lot more work with his storj-exporter exposing the storagenode API. His dashboard is a great help.


hi @kevink

I'm getting an error that /app/logs is not a directory.

Starting server on http://9c4177665b50:9144/
error reading log lines: "/app/logs" is not a directory

my config:

sudo docker run -d --restart unless-stopped --user "1000:1000" \
    -p 9144:9144 \
    --mount type=bind,source="/storj-pool/node.log",destination=/app/logs \
    --name storj-log-exporter \
    storj-log-exporter -config=/app/config.yml

You need the path to your log directory, not the logfile itself! so: /storj-pool

I feel dumb, thanks :smile:

The exporter gave me a fatal error:

I looked through the logs and found this:

2021-01-24T04:04:36.673Z        INFO    Got a signal from the OS: "terminated"
2021-01-24T04:04:36.682Z        ERROR   retain  retain pieces failed    {"error": "retain: context canceled", "errorVerbose": "retain: context canceled\n\tstorj.io/storj/storagenode/retain.(*Service).retainPieces:408\n\tstorj.io/storj/storagenode/retain.(*Service).Run.f$
2021-01-24T04:04:36.683Z        INFO    piecestore      upload canceled {"Piece ID": "FYLXABPZ44ZETNYGLZR5N262WHWV5C3BJ5MNSFJRL23N23YIPP2A", "Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Action": "PUT"}
2021-01-24T04:04:36.684Z        INFO    piecestore      downloaded      {"Piece ID": "2REYKZJXGHTOROAKGCAEHR3MJ6Y2QVFKC6UFICXESGHM2UDXLI4Q", "Satellite ID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE", "Action": "GET"}
2021-01-24T04:04:36.686Z        INFO    piecestore      downloaded      {"Piece ID": "JTD7AAOWPHTQJ7MELZMHR7RRZYTJKQYNG2D56HGASLDD47IJPTDA", "Satellite ID": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6", "Action": "GET"}
2021-01-24T04:04:36.689Z        INFO    piecestore      upload canceled {"Piece ID": "M56FI6VMVYSJ3PXTHFAX4B3M2LFHZDTSW25TV4NZ2BD5UCUTHNYA", "Satellite ID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE", "Action": "PUT_REPAIR"}
2021-01-24T04:04:37.189Z        ERROR   servers unexpected shutdown of a runner {"name": "debug", "error": "debug: http: Server closed", "errorVerbose": "debug: http: Server closed\n\tstorj.io/private/debug.(*Server).Run.func2:108\n\tgolang.org/x/sync/errgroup.(*Group).$
2021-01-24T04:04:38.119Z        FATAL   Unrecoverable error     {"error": "debug: http: Server closed", "errorVerbose": "debug: http: Server closed\n\tstorj.io/private/debug.(*Server).Run.func2:108\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:57"}
2021-01-24T04:04:54.340Z        INFO    Configuration loaded    {"Location": "/app/config/config.yaml"}
2021-01-24T04:04:54.388Z        DEBUG   Anonymized tracing disabled
2021-01-24T04:04:54.394Z        INFO    Operator email  {"Address": "gaby29999@gmail.com"}
2021-01-24T04:04:54.395Z        INFO    Operator wallet {"Address": "0xfd721DBF85DDB25F8fcdeaE1Cec5903A496dBBd4"}
2021-01-24T04:04:54.681Z        DEBUG   Version info    {"Version": "1.20.2", "Commit Hash": "aca7507f4df9b9a0e76d26a93402f414fbd960c6", "Build Timestamp": "2021-01-11 16:29:11 +0000 UTC", "Release Build": true}
2021-01-24T04:04:55.327Z        DEBUG   version Allowed minimum version from control server.    {"Minimum Version": "1.13.0"}
2021-01-24T04:04:55.327Z        DEBUG   version Running on allowed version.     {"Version": "1.20.2"}
2021-01-24T04:04:55.329Z        INFO    Telemetry enabled       {"instance ID": "12UKboTANvWBR5ByqzpottxZCbDrSb75v7J1g9sWxLtdwzzwkWT"}
2021-01-24T04:04:55.413Z        INFO    db.migration.47 Add audit_history field to reputation db
2021-01-24T04:04:55.478Z        INFO    db.migration    Database Version        {"version": 47}
2021-01-24T04:04:55.494Z        DEBUG   db      Database version is up to date  {"version": 47}
2021-01-24T04:04:56.733Z        DEBUG   trust   Fetched URLs from source; updating cache        {"source": "https://tardigrade.io/trusted-satellites", "count": 6}
2021-01-24T04:04:56.812Z        DEBUG   trust   Satellite is trusted    {"id": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S"}
2021-01-24T04:04:56.812Z        DEBUG   trust   Satellite is trusted    {"id": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs"}
2021-01-24T04:04:56.812Z        DEBUG   trust   Satellite is trusted    {"id": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6"}
2021-01-24T04:04:56.812Z        DEBUG   trust   Satellite is trusted    {"id": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE"}
2021-01-24T04:04:56.813Z        DEBUG   trust   Satellite is trusted    {"id": "12rfG3sh9NCWiX3ivPjq2HtdLmbqCrvHVEzJubnzFzosMuawymB"}
2021-01-24T04:04:56.813Z        DEBUG   trust   Satellite is trusted    {"id": "12tRQrMTWUWwzwGh18i7Fqs67kmdhH9t6aToeiwbo5mfS2rUmo"}
2021-01-24T04:04:56.813Z        INFO    preflight:localtime     start checking local system clock with trusted satellites' system clock.
2021-01-24T04:04:57.674Z        INFO    preflight:localtime     local system clock is in sync with trusted satellites' system clock.
2021-01-24T04:04:57.674Z        DEBUG   servers started {"items": ["debug", "server"]}
2021-01-24T04:04:57.675Z        DEBUG   services        started {"items": ["version", "trust", "contact:chore", "PieceDeleter", "pieces:trash", "piecestore:cache", "piecestore:monitor", "retain", "orders", "nodestats:cache", "console:endpoint", "gracefulexit:blobscleane$
2021-01-24T04:04:57.675Z        INFO    bandwidth       Performing bandwidth usage rollups
2021-01-24T04:04:57.675Z        INFO    Node 12UKboTANvWBR5ByqzpottxZCbDrSb75v7J1g9sWxLtdwzzwkWT started
2021-01-24T04:04:57.675Z        INFO    Public server started on [::]:28967
2021-01-24T04:04:57.675Z        INFO    Private server started on 127.0.0.1:7778
2021-01-24T04:04:57.675Z        DEBUG   pieces:trash    starting to empty trash
2021-01-24T04:04:57.678Z        INFO    trust   Scheduling next refresh {"after": "5h12m11.889607509s"}
2021-01-24T04:04:57.678Z        DEBUG   contact:chore   Starting cycle  {"Satellite ID": "12tRQrMTWUWwzwGh18i7Fqs67kmdhH9t6aToeiwbo5mfS2rUmo"}
2021-01-24T04:04:57.679Z        DEBUG   contact:chore   Starting cycle  {"Satellite ID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE"}
2021-01-24T04:04:57.680Z        DEBUG   contact:chore   Starting cycle  {"Satellite ID": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6"}
2021-01-24T04:04:57.680Z        DEBUG   contact:chore   Starting cycle  {"Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S"}
2021-01-24T04:04:57.680Z        DEBUG   contact:chore   Starting cycle  {"Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs"}
2021-01-24T04:04:57.680Z        DEBUG   contact:chore   Starting cycle  {"Satellite ID": "12rfG3sh9NCWiX3ivPjq2HtdLmbqCrvHVEzJubnzFzosMuawymB"}
2021-01-24T04:04:57.720Z        WARN    console:service unable to get Satellite URL     {"Satellite ID": "118UWpMCHzs6CvSgWd9BfFVjw5K9pZbJjkfZJexMtSkmKxvvAW", "error": "storage node dashboard service error: trust: satellite \"118UWpMCHzs6CvSgWd9BfFVjw5K9pZbJjkfZJexMtSkm$
2021-01-24T04:04:57.862Z        DEBUG   contact:endpoint        pinged  {"by": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "srcAddr": "35.187.72.139:41658"}
2021-01-24T04:04:57.924Z        DEBUG   version Allowed minimum version from control server.    {"Minimum Version": "1.13.0"}
2021-01-24T04:04:57.924Z        DEBUG   version Running on allowed version.     {"Version": "1.20.2"}
2021-01-24T04:04:58.029Z        DEBUG   contact:endpoint        pinged  {"by": "12rfG3sh9NCWiX3ivPjq2HtdLmbqCrvHVEzJubnzFzosMuawymB", "srcAddr": "35.228.222.168:40716"}
2021-01-24T04:04:58.249Z        DEBUG   contact:endpoint        pinged  {"by": "12tRQrMTWUWwzwGh18i7Fqs67kmdhH9t6aToeiwbo5mfS2rUmo", "srcAddr": "34.86.240.236:43910"}
2021-01-24T04:04:58.386Z        DEBUG   contact:endpoint        pinged  {"by": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "srcAddr": "35.232.57.8:41986"}
2021-01-24T04:04:58.600Z        DEBUG   contact:endpoint        pinged  {"by": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE", "srcAddr": "35.236.98.215:36900"}
2021-01-24T04:04:59.119Z        DEBUG   contact:endpoint        pinged  {"by": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6", "srcAddr": "35.236.157.135:59816"}
2021-01-24T04:05:00.281Z        INFO    piecestore      download started        {"Piece ID": "SYGMKJSY74W6ZQLPOA4I3XYHJZJF6DZSGVPE7GG7SSCJ6YITRDAA", "Satellite ID": "12rfG3sh9NCWiX3ivPjq2HtdLmbqCrvHVEzJubnzFzosMuawymB", "Action": "GET_AUDIT"}
2021-01-24T04:05:00.619Z        INFO    piecestore      download started        {"Piece ID": "HUDLGBL33TE43N4DJOAGGQLF54OW7IEFDZPRCJZZV4LYE3DJXK4A", "Satellite ID": "12rfG3sh9NCWiX3ivPjq2HtdLmbqCrvHVEzJubnzFzosMuawymB", "Action": "GET"}
2021-01-24T04:05:00.661Z        INFO    piecestore      download started        {"Piece ID": "PXL25K22S6YVDJGNEB7YVPX5JJ75DNPHT4TDJWWP3VOMI2KGDN2Q", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "GET"}
2021-01-24T04:05:00.813Z        INFO    piecestore      downloaded      {"Piece ID": "SYGMKJSY74W6ZQLPOA4I3XYHJZJF6DZSGVPE7GG7SSCJ6YITRDAA", "Satellite ID": "12rfG3sh9NCWiX3ivPjq2HtdLmbqCrvHVEzJubnzFzosMuawymB", "Action": "GET_AUDIT"}
2021-01-24T04:05:01.403Z        INFO    piecestore      download started        {"Piece ID": "3UKBRO44K2W32DWCMMLUG53ICXQTLXHEZRLKMWJBZKXYAQA7UWDA", "Satellite ID": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6", "Action": "GET"}
2021-01-24T04:05:01.434Z        INFO    piecestore      downloaded      {"Piece ID": "PXL25K22S6YVDJGNEB7YVPX5JJ75DNPHT4TDJWWP3VOMI2KGDN2Q", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "GET"}
2021-01-24T04:05:01.754Z        INFO    piecestore      download started        {"Piece ID": "F6VWPPGGJCDVBOA6VRCNBD6FFAJV2SDTBXNCDT2JYRZNJCF7YYFA", "Satellite ID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE", "Action": "GET"}
2021-01-24T04:05:01.972Z        DEBUG   orders  sending
2021-01-24T04:05:01.997Z        DEBUG   orders  no orders to send
2021-01-24T04:05:03.035Z        INFO    piecestore      download started        {"Piece ID": "RAWTM6YVYOTAL4NOYAEXTZVGOMVFIJIUIT7QKHQA5ZO36BXBNO2Q", "Satellite ID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE", "Action": "GET"}
2021-01-24T04:05:03.247Z        INFO    piecestore      downloaded      {"Piece ID": "3UKBRO44K2W32DWCMMLUG53ICXQTLXHEZRLKMWJBZKXYAQA7UWDA", "Satellite ID": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6", "Action": "GET"}
2021-01-24T04:05:03.442Z        INFO    piecestore      download started        {"Piece ID": "JRAAUMJUZVJXLF2FHUNZNDG6LIL55PSYPZJ5YIIGQ2OHEEQ7LOAA", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "GET"}
2021-01-24T04:05:03.671Z        INFO    piecestore      downloaded      {"Piece ID": "JRAAUMJUZVJXLF2FHUNZNDG6LIL55PSYPZJ5YIIGQ2OHEEQ7LOAA", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "GET"}
2021-01-24T04:05:03.963Z        INFO    orders.12rfG3sh9NCWiX3ivPjq2HtdLmbqCrvHVEzJubnzFzosMuawymB      sending {"count": 700}
2021-01-24T04:05:03.964Z        INFO    orders.121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6      sending {"count": 387}
2021-01-24T04:05:03.964Z        INFO    orders.1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE       sending {"count": 163}
2021-01-24T04:05:03.964Z        INFO    orders.12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs      sending {"count": 304}
2021-01-24T04:05:03.964Z        INFO    orders.12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S      sending {"count": 490}
2021-01-24T04:05:04.326Z        INFO    orders.12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs      finished
2021-01-24T04:05:04.682Z        INFO    orders.12rfG3sh9NCWiX3ivPjq2HtdLmbqCrvHVEzJubnzFzosMuawymB      finished
2021-01-24T04:05:04.988Z        INFO    orders.12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S      finished
2021-01-24T04:05:05.130Z        INFO    orders.1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE       finished
2021-01-24T04:05:05.437Z        INFO    piecestore      downloaded      {"Piece ID": "F6VWPPGGJCDVBOA6VRCNBD6FFAJV2SDTBXNCDT2JYRZNJCF7YYFA", "Satellite ID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE", "Action": "GET"}

I don't know why the system sent a terminate signal to the node, but it seems like it restarted fine and has now been online for 5 hours, operating normally from what I can see.
However, the Grafana dashboard now says the average piece chart is "undefined".
Anyone got an idea of what happened?

Maybe your node got updated?
Apparently it is running the latest version (1.20.2) now.


I thought about that; usually I update my nodes before watchtower gets a chance to do it, because I'm impatient.
Did v1.20.2 just get rolled out to docker nodes? I thought I had already updated that node a couple of days ago, but I might be wrong.
Anyway, I tried restarting both Prometheus and the log exporter, but the dashboard still gives me the same error; it looks like the piece size is infinity now.

The image got pushed 18 hours ago, so this was likely an update. And storjlabs didn't quite manage to stop the node without throwing a fatal error… So as long as this error is within your selected time period, it'll stay on the dashboard.

The average piece size is a bit unreliable… not sure why exactly, but it doesn't like the exporter being restarted. After a few hours (and possibly once the restart is out of your time period), it'll work again.


I tried changing the time period but it still doesn't show the average piece size. It looks like it doesn't detect any ingress.


That's probably why it thinks the pieces are of infinite size. I'll keep an eye on it to see if anything changes in the next few hours/days.

Hmm, I see what happened… they changed the log format of "upload successful" to include the size of the piece… Now I have to change my exporter. I can also extend it with that information, even though I don't think it's really helpful: doing an average over this new information should be just the same as doing avg(diff(ingress)/sum(pieces_uploaded)).

Anyway, I will need to fix the exporter first and push an update before the ingress and the average piece size work again. Might as well make my repository auto-build on Docker Hub then… Maybe I'll get it done today, we'll see.


Well, I pushed the update, but a multi-arch build doesn't work yet. So if you want it to work, you'll have to download it from GitHub again and build it yourself.
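
A rough sketch of building it locally, assuming git and docker are installed and the repository has a Dockerfile at its root:

git clone https://github.com/kevinkk525/storj-log-exporter.git
cd storj-log-exporter
sudo docker build -t storj-log-exporter .
# then recreate the exporter container with the locally built "storj-log-exporter" image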


Will do that once my exams are over and I'll let you know, probably Wednesday night or Thursday.
