Telemetry - how and why to enable it

How do I enable telemetry? I think I saw something in the config.
What data is sent to you?
Are there any downsides for us SNOs if we keep it enabled all the time?
Does it consume resources, like CPU, bandwidth, storage, RAM?


I have a feeling that we include too many different topics in this thread, hope nobody has concerns about it :wink:

Let me try to answer quickly (which is usually hard for me :wink: )

  • As far as I remember, you are asked about telemetry during the first use of the binary
  • If it’s enabled, you should see Telemetry enabled at INFO level during storagenode startup
  • It can be disabled by setting metrics.interval to zero or metrics.addr to an empty string (either works)
  • The data is exactly the same as what you can see in your Grafana/Prometheus setup (check /metrics on the debug port): boring Prometheus data, without specific information about the running environment.
  • Resource usage is negligible; the data is collected anyway, you only enable/disable the sharing of it
  • Most of the data is already known by the Satellite (half of the data is sent by the satellite to the storagenode and exposed as metrics there, so it’s redundant)
  • TBH, the Storagenode (!) telemetry data is not really used today on our side (we heavily use the telemetry data of Satellite components, but not Storagenodes). I just had the idea to start using it for debugging this problem.
  • Therefore I recommend either enabling it or keeping it enabled. It can help identify performance problems and create a better Storagenode for the operators. This is how you can vote on problems…
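To make the disable options from the list above concrete, this is what they would look like uncommented in config.yaml (either line alone is enough; shown here only as a sketch of the two options mentioned):

```yaml
# stop sending telemetry by clearing the target address…
metrics.addr: ""

# …or by setting the reporting interval to zero
metrics.interval: 0
```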

Maybe an Admin can split this to a new thread. “Telemetry - how and why to enable it”

What command needs to be added to the config.yml to enable “Storagenode (!) telemetry” for Windows node?

metrics.interval / metrics.addr should either be commented out or have their default values.

In the config of my storagenode, they are commented out, which means I use the default settings:

# address(es) to send telemetry to (comma-separated)
# metrics.addr:

# how frequently to send up telemetry. Ignored for certain applications.
# metrics.interval: 1m0s

Setting the defaults explicitly has the same effect:

metrics.interval: 1m0s

Please note that metrics.interval matters only if it’s 0 (= telemetry is turned off). Any other value will be replaced with 30 min. So it’s not 1 min, even if it seems to be, but 30 min, just to avoid receiving too much data.


So it’s enabled by default and we don’t have to do anything. :sunglasses:

Oioi, a standalone thread for this! Great, thank you for breaking it out, mod team.

Good post.

metrics.interval it’s 0 (=telemetry is turned off)

You should really think twice before disabling telemetry, because:

  • it sends “private” data that satellites and Storj already have and know about you; they have the email you provided, the node ID, the wallet, the WAN IP. So whether these are included in telemetry or not doesn’t matter at all.
  • it helps Storj diagnose nodes, catch bugs, improve code, satellites, etc., making the Storj network more reliable, more attractive to clients, more stable for SNOs and more rewarding.
  • it is sent once every 30 min and consumes insignificant resources.
    So instead of asking how to disable it, you should ask how to enable it.

I usually enable telemetry for software that is important to me to work well, without bugs, and to be improved constantly, like my antivirus, because it’s a win-win: devs provide a better solution satisfying the clients’ demand, and clients benefit from more reliable software.
Being worried that telemetry de-anonymizes you, gets your private data, or tracks you with malicious intent in software installed on your computer is just… bogus. In many cases you paid for that software or provided some credentials to use it, so they already have your data. Oh, they watch how you use their software? What buttons you click, what options you use or not? Very good! Let them know what is important to you and what is not, and in the next version maybe they remove bloat and improve the useful stuff.


Email/wallet or other personal data are not included. As I wrote, check the /metrics endpoint if you are interested in the content. Here is a part from my Storagenode:

pieces_writer_hash{size="_1m",scope="storj_io_storj_storagenode_pieces",field="count"} 2950
pieces_writer_hash{size="_1m",scope="storj_io_storj_storagenode_pieces",field="sum"} 54.343577505
pieces_writer_hash{size="_1m",scope="storj_io_storj_storagenode_pieces",field="min"} 0.00230586
pieces_writer_hash{size="_1m",scope="storj_io_storj_storagenode_pieces",field="max"} 0.034277389
pieces_writer_hash{size="_1m",scope="storj_io_storj_storagenode_pieces",field="rmin"} 0.006287784
pieces_writer_hash{size="_1m",scope="storj_io_storj_storagenode_pieces",field="ravg"} 0.018090316
pieces_writer_hash{size="_1m",scope="storj_io_storj_storagenode_pieces",field="r10"} 0.011908962
pieces_writer_hash{scope="storj_io_storj_storagenode_pieces",size="_1m",field="r50"} 0.017689208
pieces_writer_hash{size="_1m",scope="storj_io_storj_storagenode_pieces",field="r90"} 0.023635166
pieces_writer_hash{size="_1m",scope="storj_io_storj_storagenode_pieces",field="r99"} 0.027631438
pieces_writer_hash{size="_1m",scope="storj_io_storj_storagenode_pieces",field="rmax"} 0.028730884
pieces_writer_hash{size="_1m",scope="storj_io_storj_storagenode_pieces",field="recent"} 0.019928014
pieces_writer_hash{size="_500k",scope="storj_io_storj_storagenode_pieces",field="count"} 5712
pieces_writer_hash{size="_500k",scope="storj_io_storj_storagenode_pieces",field="sum"} 45.471137137
pieces_writer_hash{size="_500k",scope="storj_io_storj_storagenode_pieces",field="min"} 0.002610308
pieces_writer_hash{scope="storj_io_storj_storagenode_pieces",size="_500k",field="max"} 0.028961131
pieces_writer_hash{size="_500k",scope="storj_io_storj_storagenode_pieces",field="rmin"} 0.003636408
pieces_writer_hash{size="_500k",scope="storj_io_storj_storagenode_pieces",field="ravg"} 0.008050032
pieces_writer_hash{scope="storj_io_storj_storagenode_pieces",size="_500k",field="r10"} 0.004668326
pieces_writer_hash{size="_500k",scope="storj_io_storj_storagenode_pieces",field="r50"} 0.00812522
pieces_writer_hash{size="_500k",scope="storj_io_storj_storagenode_pieces",field="r90"} 0.01048025
pieces_writer_hash{size="_500k",scope="storj_io_storj_storagenode_pieces",field="r99"} 0.017663014
pieces_writer_hash{size="_500k",scope="storj_io_storj_storagenode_pieces",field="rmax"} 0.018578828
pieces_writer_hash{size="_500k",scope="storj_io_storj_storagenode_pieces",field="recent"} 0.007019448
pieces_writer_hash{scope="storj_io_storj_storagenode_pieces",size="_2m",field="count"} 12881
upload_cancel_size_bytes{scope="storj_io_storj_storagenode_piecestore",field="count"} 1008
upload_cancel_size_bytes{scope="storj_io_storj_storagenode_piecestore",field="sum"} 1.07097344e+08
upload_cancel_size_bytes{scope="storj_io_storj_storagenode_piecestore",field="min"} 65536
upload_cancel_size_bytes{scope="storj_io_storj_storagenode_piecestore",field="max"} 2.29376e+06
upload_cancel_size_bytes{scope="storj_io_storj_storagenode_piecestore",field="rmin"} 65536
upload_cancel_size_bytes{scope="storj_io_storj_storagenode_piecestore",field="ravg"} 90240
upload_cancel_size_bytes{scope="storj_io_storj_storagenode_piecestore",field="r10"} 65536
upload_cancel_size_bytes{scope="storj_io_storj_storagenode_piecestore",field="r50"} 65536
upload_cancel_size_bytes{scope="storj_io_storj_storagenode_piecestore",field="r90"} 131072
upload_cancel_size_bytes{scope="storj_io_storj_storagenode_piecestore",field="r99"} 444743
upload_cancel_size_bytes{scope="storj_io_storj_storagenode_piecestore",field="rmax"} 532480
upload_cancel_size_bytes{scope="storj_io_storj_storagenode_piecestore",field="recent"} 65536
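The output above is the standard Prometheus text exposition format: metric name, a label set in braces, and a value. As a rough illustration of how little is in each line (real consumers would use the official Prometheus exposition-format parsers, not a regex), one line could be picked apart like this:

```go
package main

import (
	"fmt"
	"regexp"
	"strconv"
)

// lineRe matches one sample line of the format above: name{labels} value.
// Illustration only; it does not cover the full exposition format.
var lineRe = regexp.MustCompile(`^(\w+)\{(.*)\}\s+(\S+)$`)

func parseLine(line string) (name, labels string, value float64, err error) {
	m := lineRe.FindStringSubmatch(line)
	if m == nil {
		return "", "", 0, fmt.Errorf("unrecognized line: %q", line)
	}
	value, err = strconv.ParseFloat(m[3], 64)
	return m[1], m[2], value, err
}

func main() {
	line := `upload_cancel_size_bytes{scope="storj_io_storj_storagenode_piecestore",field="count"} 1008`
	name, _, value, err := parseLine(line)
	if err != nil {
		panic(err)
	}
	fmt.Println(name, value) // upload_cancel_size_bytes 1008
}
```

As you can see, each line is just an anonymous counter or timing statistic; there is nowhere in this format for an email address or wallet to hide.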

I would recommend sharing telemetry data if you want better and more efficient Storagenode software (less IO overhead, faster GC / space calculation). Storagenode will be optimized based on telemetry data.


You may try to provide it as --metrics.interval=0 on the command line (or as the option metrics.interval: 0 in the config.yaml file).

Opt-out instead of opt-in is always a rude move.

Don’t be rude!


Strangely, when I set it to 0s, it still shows that telemetry is enabled:

Actually right now I don’t know how to turn it off.

Did you restart your node after changing config.yaml? Also try using just 0 rather than 0s.


Tried also 0 and restarted, but it is still enabled.

I asked the team how to disable it.
But I think you may just clear the address with the option metrics.addr: "" or the command line argument --metrics.addr="".

Setting metrics.interval to 0 should definitely disable telemetry. The relevant code is at



So I’m not sure what is going wrong there. Could it be the wrong config.yaml?


Good catch. I think it’s unnecessary and should be fixed.


Good catch!
So for all those years, if someone used the docker image, there was hidden telemetry which SNOs couldn’t even disable :smiley:
I don’t know, but we are in the cryptocurrency space, where people are sensitive to the words telemetry and privacy.
Simply wow.
I wonder what will be discovered next in the source code.

All transactions are directly visible on the blockchain. Not sure I understand this privacy concept.
