Storage allocated space parameter ignored in config - Linux/Docker

Storagenode ver. 1.96.6 - Linux, Docker.
I removed the parameter for allocated space from the docker run command, I commented out the parameter in the config with # (along with some others, of course) at their original positions, and I put all of these at the end of the config, just to have them easy to find and edit:

storage.allocated-disk-space: 20.50 TB
log.level: fatal
storage2.monitor.verify-dir-readable-interval: 5m0s
storage2.monitor.verify-dir-readable-timeout: 2m0s
storage2.monitor.verify-dir-writable-interval: 5m0s
storage2.monitor.verify-dir-writable-timeout: 2m0s
pieces.enable-lazy-filewalker: false
storage2.piece-scan-on-startup: true

I stopped, removed and started the node with the new config, but the allocated space setting is ignored and the node shows the default:

My run command:

docker run -d --restart unless-stopped \
	--stop-timeout 300 \
	--network host \
	-e WALLET="xxxxx" \
	-e EMAIL="xxxxx" \
	-e ADDRESS="xxxxx" \
	--mount type=bind,source="/volume2/Storj2/Identity/storagenode/",destination=/app/identity \
	--mount type=bind,source="/volume2/Storj2/",destination=/app/config \
	--log-driver json-file \
	--log-opt max-size=10m \
	--log-opt max-file=5 \
	--name storagenode2 storjlabs/storagenode:latest \
	--server.address="xxxxx" \
	--console.address=":xxxxx" \
	--server.private-address="127.0.0.1:xxxxx" \
	--debug.addr=":xxxxx"

My config file:

# how frequently bandwidth usage rollups are calculated
# bandwidth.interval: 1h0m0s

# how frequently expired pieces are collected
# collector.interval: 1h0m0s

# use color in user interface
# color: false

# server address of the api gateway and frontend app
console.address: 0.0.0.0:14002

# path to static resources
# console.static-dir: ""

# the public address of the node, useful for nodes behind NAT
contact.external-address: ""

# how frequently the node contact chore should run
# contact.interval: 1h0m0s

# protobuf serialized signed node tags in hex (base64) format
# contact.tags: ""

# Maximum Database Connection Lifetime, -1ns means the stdlib default
# db.conn_max_lifetime: 30m0s

# Maximum Amount of Idle Database connections, -1 means the stdlib default
# db.max_idle_conns: 1

# Maximum Amount of Open Database connections, -1 means the stdlib default
# db.max_open_conns: 5

# address to listen on for debug endpoints
# debug.addr: 127.0.0.1:0

# If set, a path to write a process trace SVG to
# debug.trace-out: ""

# open config in default editor
# edit-conf: false

# in-memory buffer for uploads
# filestore.write-buffer-size: 128.0 KiB

# how often to run the chore to check for satellites for the node to exit.
# graceful-exit.chore-interval: 1m0s

# the minimum acceptable bytes that an exiting node can transfer per second to the new node
# graceful-exit.min-bytes-per-second: 5.00 KB

# the minimum duration for downloading a piece from storage nodes before timing out
# graceful-exit.min-download-timeout: 2m0s

# number of concurrent transfers per graceful exit worker
# graceful-exit.num-concurrent-transfers: 5

# number of workers to handle satellite exits
# graceful-exit.num-workers: 4

# Enable additional details about the satellite connections via the HTTP healthcheck.
healthcheck.details: false

# Provide health endpoint (including suspension/audit failures) on main public port, but HTTP protocol.
healthcheck.enabled: true

# path to the certificate chain for this identity
identity.cert-path: identity/identity.cert

# path to the private key for this identity
identity.key-path: identity/identity.key

# if true, log function filename and line number
# log.caller: false

# if true, set logging to development mode
# log.development: false

# configures log encoding. can either be 'console', 'json', 'pretty', or 'gcloudlogging'.
# log.encoding: ""

# the minimum log level to log
# log.level: info

# can be stdout, stderr, or a filename
# log.output: stderr

# if true, log stack traces
# log.stack: false

# address(es) to send telemetry to (comma-separated)
# metrics.addr: collectora.storj.io:9000

# application name for telemetry identification. Ignored for certain applications.
# metrics.app: storagenode

# application suffix. Ignored for certain applications.
metrics.app-suffix: -alpha

# address(es) to send telemetry to (comma-separated)
# metrics.event-addr: eventkitd.datasci.storj.io:9002

# instance id prefix
# metrics.instance-prefix: ""

# how frequently to send up telemetry. Ignored for certain applications.
metrics.interval: 30m0s

# maximum duration to wait before requesting data
# nodestats.max-sleep: 5m0s

# how often to sync reputation
# nodestats.reputation-sync: 4h0m0s

# how often to sync storage
# nodestats.storage-sync: 12h0m0s

# operator email address
operator.email: ""

# operator wallet address
operator.wallet: ""

# operator wallet features
operator.wallet-features: ""

# move pieces to trash upon deletion. Warning: if set to false, you risk disqualification for failed audits if a satellite database is restored from backup.
# pieces.delete-to-trash: true

# run garbage collection and used-space calculation filewalkers as a separate subprocess with lower IO priority
# pieces.enable-lazy-filewalker: true

# file preallocated for uploading
# pieces.write-prealloc-size: 4.0 MiB

# whether or not preflight check for database is enabled.
# preflight.database-check: true

# whether or not preflight check for local system clock is enabled on the satellite side. When disabling this feature, your storagenode may not setup correctly.
# preflight.local-time-check: true

# how many concurrent retain requests can be processed at the same time.
# retain.concurrency: 5

# allows for small differences in the satellite and storagenode clocks
# retain.max-time-skew: 72h0m0s

# allows configuration to enable, disable, or test retain requests from the satellite. Options: (disabled/enabled/debug)
# retain.status: enabled

# public address to listen on
server.address: :28967

# whether to debounce incoming messages
# server.debouncing-enabled: true

# if true, client leaves may contain the most recent certificate revocation for the current certificate
# server.extensions.revocation: true

# if true, client leaves must contain a valid "signed certificate extension" (NB: verified against certs in the peer ca whitelist; i.e. if true, a whitelist must be provided)
# server.extensions.whitelist-signed-leaf: false

# path to the CA cert whitelist (peer identities must be signed by one these to be verified). this will override the default peer whitelist
# server.peer-ca-whitelist-path: ""

# identity version(s) the server will be allowed to talk to
# server.peer-id-versions: latest

# private address to listen on
server.private-address: 127.0.0.1:7778

# url for revocation database (e.g. bolt://some.db OR redis://127.0.0.1:6378?db=2&password=abc123)
# server.revocation-dburl: bolt://config/revocations.db

# enable support for tcp fast open
# server.tcp-fast-open: true

# the size of the tcp fast open queue
# server.tcp-fast-open-queue: 256

# if true, uses peer ca whitelist checking
# server.use-peer-ca-whitelist: true

# total allocated bandwidth in bytes (deprecated)
storage.allocated-bandwidth: 0 B

# total allocated disk space in bytes
# storage.allocated-disk-space: 2.00 TB

# how frequently Kademlia bucket should be refreshed with node stats
# storage.k-bucket-refresh-interval: 1h0m0s

# path to store data in
# storage.path: config/storage

# a comma-separated list of approved satellite node urls (unused)
# storage.whitelisted-satellites: ""

# how often the space used cache is synced to persistent storage
# storage2.cache-sync-interval: 1h0m0s

# directory to store databases. if empty, uses data path
# storage2.database-dir: ""

# size of the piece delete queue
# storage2.delete-queue-size: 10000

# how many piece delete workers
# storage2.delete-workers: 1

# how many workers to use to check if satellite pieces exists
# storage2.exists-check-workers: 5

# how soon before expiration date should things be considered expired
# storage2.expiration-grace-period: 48h0m0s

# how many concurrent requests are allowed, before uploads are rejected. 0 represents unlimited.
# storage2.max-concurrent-requests: 0

# amount of memory allowed for used serials store - once surpassed, serials will be dropped at random
# storage2.max-used-serials-size: 1.00 MB

# a client upload speed should not be lower than MinUploadSpeed in bytes-per-second (E.g: 1Mb), otherwise, it will be flagged as slow-connection and potentially be closed
# storage2.min-upload-speed: 0 B

# if the portion defined by the total number of alive connection per MaxConcurrentRequest reaches this threshold, a slow upload client will no longer be monitored and flagged
# storage2.min-upload-speed-congestion-threshold: 0.8

# if MinUploadSpeed is configured, after a period of time after the client initiated the upload, the server will flag unusually slow upload client
# storage2.min-upload-speed-grace-duration: 10s

# how frequently Kademlia bucket should be refreshed with node stats
# storage2.monitor.interval: 1h0m0s

# how much bandwidth a node at minimum has to advertise (deprecated)
# storage2.monitor.minimum-bandwidth: 0 B

# how much disk space a node at minimum has to advertise
# storage2.monitor.minimum-disk-space: 500.00 GB

# how frequently to verify the location and readability of the storage directory
# storage2.monitor.verify-dir-readable-interval: 1m0s

# how long to wait for a storage directory readability verification to complete
# storage2.monitor.verify-dir-readable-timeout: 1m0s

# if the storage directory verification check fails, log a warning instead of killing the node
# storage2.monitor.verify-dir-warn-only: false

# how frequently to verify writability of storage directory
# storage2.monitor.verify-dir-writable-interval: 5m0s

# how long to wait for a storage directory writability verification to complete
# storage2.monitor.verify-dir-writable-timeout: 1m0s

# how long after OrderLimit creation date are OrderLimits no longer accepted
# storage2.order-limit-grace-period: 1h0m0s

# length of time to archive orders before deletion
# storage2.orders.archive-ttl: 168h0m0s

# duration between archive cleanups
# storage2.orders.cleanup-interval: 5m0s

# maximum duration to wait before trying to send orders
# storage2.orders.max-sleep: 30s

# path to store order limit files in
# storage2.orders.path: config/orders

# timeout for dialing satellite during sending orders
# storage2.orders.sender-dial-timeout: 1m0s

# duration between sending
# storage2.orders.sender-interval: 1h0m0s

# timeout for sending
# storage2.orders.sender-timeout: 1h0m0s

# if set to true, all pieces disk usage is recalculated on startup
# storage2.piece-scan-on-startup: true

# allows for small differences in the satellite and storagenode clocks
# storage2.retain-time-buffer: 48h0m0s

# how long to spend waiting for a stream operation before canceling
# storage2.stream-operation-timeout: 30m0s

# file path where trust lists should be cached
# storage2.trust.cache-path: config/trust-cache.json

# list of trust exclusions
# storage2.trust.exclusions: ""

# how often the trust pool should be refreshed
# storage2.trust.refresh-interval: 6h0m0s

# list of trust sources
# storage2.trust.sources: https://www.storj.io/dcs-satellites

# address for jaeger agent
# tracing.agent-addr: agent.tracing.datasci.storj.io:5775

# application name for tracing identification
# tracing.app: storagenode

# application suffix
# tracing.app-suffix: -release

# buffer size for collector batch packet size
# tracing.buffer-size: 0

# whether tracing collector is enabled
# tracing.enabled: true

# how frequently to flush traces to tracing agent
# tracing.interval: 0s

# buffer size for collector queue size
# tracing.queue-size: 0

# how frequent to sample traces
# tracing.sample: 0

# Interval to check the version
# version.check-interval: 15m0s

# Request timeout for version checks
# version.request-timeout: 1m0s

# server address to check its version against
version.server-address: https://version.storj.io

storage.allocated-disk-space: 20.50 TB
log.level: fatal
storage2.monitor.verify-dir-readable-interval: 5m0s
storage2.monitor.verify-dir-readable-timeout: 2m0s
storage2.monitor.verify-dir-writable-interval: 5m0s
storage2.monitor.verify-dir-writable-timeout: 2m0s
pieces.enable-lazy-filewalker: false
storage2.piece-scan-on-startup: true

I’ve never tried this, but does it maybe need to be “20.50TB” instead of “20.50 TB”?


My config.yaml works with, for example, 7 TB (with a space), but I’m running versions 1.94 and 1.95 of the Windows GUI.

On Docker, you only need to stop, remove and start the node again if you make changes to the docker run command. If you want updated config changes to take effect, you can simply restart the container with:
docker restart -t 300 <container name>
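To make that distinction concrete, a rough sketch (the container name storagenode2 is taken from the run command earlier in the thread; adjust to yours):

```shell
# Changed the docker run line itself (mounts, -e vars, flags)?
# Then the container must be recreated:
docker stop -t 300 storagenode2
docker rm storagenode2
# ...then repeat your full docker run command with the new flags.

# Changed only config.yaml? A restart is enough:
docker restart -t 300 storagenode2
```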

+1
or try 20.5TB

I tried 20.50TB and 20.50 TB. None worked. It shows 2TB after each restart.

I tried also 20.5TB, 20.5 TB, 20 TB. Also not working.
Should I use straight quotes?

Why not use the docker environment or docker arguments to get this working?

Like:

docker run (...)
 -e STORAGE="20.5 TB" \
# like in the manual, or undocumented and unknown whether it works:
 (...)
 --storage.allocated-disk-space="20.5 TB"

Maybe it doesn’t find the config file. It should be named config.yaml and it should be in the --config-dir directory; I’m not sure where that is exactly on Docker.
Also check the owner/permissions of that config file, or change the file permissions to something else (like 644 if it is 600, for example).
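To rule out path and permission problems, something like this should work (assuming the container name storagenode2 and the /app/config mount from the run command above):

```shell
# Confirm the node sees the config file at the expected mount point
docker exec storagenode2 ls -l /app/config/config.yaml

# Or check it from the host side, where /volume2/Storj2/ is mounted
ls -l /volume2/Storj2/config.yaml

# Loosen permissions if needed (644 = owner read/write, everyone else read)
chmod 644 /volume2/Storj2/config.yaml
```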
And it works fine with 20.5 TB, 20.50 TB, 20.55 TB etc.

I use docker run, but I want to be able to use the config.yaml too. See more explanation in the “Tuning the filewalker” thread.
It finds and uses the config.yaml. It’s not my first rodeo. :)
I set up other parameters there and they work. I hear that this parameter works for Windows nodes.
Maybe someone with Linux+Docker can test it…
I tested all combos with and without " ". None work.
I don’t use the --user parameter; I start the node in docker after entering sudo su mode.

I just checked; it uses the config.yaml, with no problems with the path or permissions. I switched the log level from fatal to info in the config, and the log filled with entries. I switched back to fatal and the noise stopped.

For docker it will not work, because there is a hardcoded default for the STORAGE variable: storagenode-docker/Dockerfile at ca12289e9c04738b51c021cc68811a9497461eb2 · storj/storagenode-docker · GitHub
But you may provide it empty (-e STORAGE="") and then it should be properly handled: storagenode-docker/docker/entrypoint at ca12289e9c04738b51c021cc68811a9497461eb2 · storj/storagenode-docker · GitHub


That’s a bad choice by the devs. It should work no matter where you put it: run command or config.


Are there any other parameters that are ignored in the config?
When you set something in the config, you expect it to work. If it is not working, then it should be removed from the config.


That is strange. If they have it as a hardcoded default in files/config/config.yaml… why on earth would they also hardcode it a second time in the docker environment variables?

Isn’t config.yaml the single source of truth… and everything else is just optional overrides?

The largest node I have is 19TB, with the following addition:

-e STORAGE="19TB"

for 20.5 I would try:

-e STORAGE="20.5TB"

And it should override the config.yaml settings. It then doesn’t matter what is in the config.yaml, as the parameter has a higher priority.

I know about docker run, I already said that. I’m using it!
But I want to use the config file, because it is easier to change things and restart the node without redoing the container. And I already explained it in Tuning the filewalker. It runs faster when you have no ingress. You can stop the ingress by reducing the allocated space.
So you can have one config with the filewalker on and 1TB allocated, and one config with the FW off and xxTB allocated. If you want to run the FW like once a month or every few months, using 2 configs is the quickest and most convenient way. It reduces the error risk too.
And… if there is a parameter in the config that you can edit, you expect it to work, not to be ignored. Maybe there are others too, that are harder to detect when they are ignored. Hardcoding a parameter and ignoring the config is a very, very bad decision. Why would you choose to do that?
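That two-config rotation could be scripted roughly like this (the file names config-fw-on.yaml and config-fw-off.yaml are hypothetical; the node reads config.yaml from the mounted directory):

```shell
# Occasional filewalker run: swap in the config with the scan
# enabled and a small allocation (no ingress), then restart.
cp /volume2/Storj2/config-fw-on.yaml /volume2/Storj2/config.yaml
docker restart -t 300 storagenode2

# When the filewalker finishes, swap the normal config back.
cp /volume2/Storj2/config-fw-off.yaml /volume2/Storj2/config.yaml
docker restart -t 300 storagenode2
```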

Again, are you removing the container after making the change (sudo docker rm storagenode)?

I find if I don’t remove the container it gets stuck with old allocated space.

Yes, if I put something in the config, I remove that parameter from the run command, and the container must be stopped, removed and run with the modified command.
In this case, I removed the storage parameter from the run command because I wanted to use the one from the config, and I recreated the container without that parameter, with stop, rm, run.

Did you try to provide it empty?

The devs already answered. The 2TB is hardcoded. If you don’t provide it in the run command, it will use 2TB. The config value is ignored on Docker.