Node Identity Issue

Hello folks,

I’ve been trying to improve my nodes’ resilience against power outages (sometimes the owner of some folders changes right after an outage, and I have to fix it with a chown username:username command). To prevent this, the hold-up time of the HDD’s supply needs to be increased (I have a simple 12V/2A adapter for my USB-to-SATA adapter). While trying my simple resettable fuse + 6mF capacitor buffer circuit, the HDD could not start. And after taking that intermediate board out of the system, the node started giving the output below:

09:49:04 username@raspberrypi scripts → docker logs --tail 50 storagenode1
2024-09-09T18:49:00Z    ERROR   pieces  used-space-filewalker failed    {"Process": "storagenode", "Satellite ID": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6", "Lazy File Walker": false, "error": "filewalker: context canceled", "errorVerbose": "filewalker: context canceled\n\tstorj.io/storj/storagenode/pieces.(*FileWalker).WalkSatellitePieces:74\n\tstorj.io/storj/storagenode/pieces.(*FileWalker).WalkAndComputeSpaceUsedBySatellite:79\n\tstorj.io/storj/storagenode/pieces.(*Store).WalkAndComputeSpaceUsedBySatellite:731\n\tstorj.io/storj/storagenode/pieces.(*CacheService).Run.func1:81\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:78"}
2024-09-09T18:49:00Z    ERROR   piecestore:cache        encountered error while computing space used by satellite                                                                                                   {"Process": "storagenode", "error": "filewalker: context canceled", "errorVerbose": "filewalker: context canceled\n\tstorj.io/storj/storagenode/pieces.(*FileWalker).WalkSatellitePieces:74\n\tstorj.io/storj/storagenode/pieces.(*FileWalker).WalkAndComputeSpaceUsedBySatellite:79\n\tstorj.io/storj/storagenode/pieces.(*Store).WalkAndComputeSpaceUsedBySatellite:731\n\tstorj.io/storj/storagenode/pieces.(*CacheService).Run.func1:81\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:78", "SatelliteID": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6"}
2024-09-09T18:49:00Z    ERROR   piecestore:cache        error getting current used space for trash:     {"Process": "storagenode", "error": "filestore error: failed to walk trash namespace f474535a19db00db4f8071a1be6c2551f4ded6a6e38f0818c68c68d000000000: context canceled", "errorVerbose": "filestore error: failed to walk trash namespace f474535a19db00db4f8071a1be6c2551f4ded6a6e38f0818c68c68d000000000: context canceled\n\tstorj.io/storj/storagenode/blobstore/filestore.(*blobStore).SpaceUsedForTrash:273\n\tstorj.io/storj/storagenode/pieces.(*CacheService).Run.func1:100\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:78"}
2024-09-09T18:49:02Z    ERROR   failure during run      {"Process": "storagenode", "error": "piecestore monitor: error verifying location and/or readability of storage directory: node ID in file (12HFEBiqo4Rkcv9VNHvRxVqMf8VfobtZCQj48b8YzfpnKTnyouW) does not match running node's ID (125ErUdwQuETNYrMWXGuWCp5sGLoAvrAP4mfjhvFyYHmNriSZ66)", "errorVerbose": "piecestore monitor: error verifying location and/or readability of storage directory: node ID in file (12HFEBiqo4Rkcv9VNHvRxVqMf8VfobtZCQj48b8YzfpnKTnyouW) does not match running node's ID (125ErUdwQuETNYrMWXGuWCp5sGLoAvrAP4mfjhvFyYHmNriSZ66)\n\tstorj.io/storj/storagenode/monitor.(*Service).Run.func1.1:159\n\tstorj.io/common/sync2.(*Cycle).Run:99\n\tstorj.io/storj/storagenode/monitor.(*Service).Run.func1:140\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:78"}
Error: piecestore monitor: error verifying location and/or readability of storage directory: node ID in file (12HFEBiqo4Rkcv9VNHvRxVqMf8VfobtZCQj48b8YzfpnKTnyouW) does not match running node's ID (125ErUdwQuETNYrMWXGuWCp5sGLoAvrAP4mfjhvFyYHmNriSZ66)
2024-09-09 18:49:02,547 INFO exited: storagenode (exit status 1; not expected)
2024-09-09 18:49:03,552 INFO spawned: 'storagenode' with pid 52
2024-09-09 18:49:03,553 WARN received SIGQUIT indicating exit request
2024-09-09 18:49:03,554 INFO waiting for storagenode, processes-exit-eventlistener, storagenode-updater to die
2024-09-09T18:49:03Z    INFO    Got a signal from the OS: "terminated"  {"Process": "storagenode-updater"}
2024-09-09 18:49:03,559 INFO stopped: storagenode-updater (exit status 0)
2024-09-09T18:49:03Z    INFO    Configuration loaded    {"Process": "storagenode", "Location": "/app/config/config.yaml"}
2024-09-09T18:49:03Z    INFO    Anonymized tracing enabled      {"Process": "storagenode"}
2024-09-09T18:49:03Z    INFO    Operator email  {"Process": "storagenode", "Address": "nazimyildiz90@gmail.com"}
2024-09-09T18:49:03Z    INFO    Operator wallet {"Process": "storagenode", "Address": "0x412c63a480cfcc6a3bd771cc33a181ba8e098067"}
2024-09-09T18:49:03Z    INFO    server  kernel support for server-side tcp fast open remains disabled.  {"Process": "storagenode"}
2024-09-09T18:49:03Z    INFO    server  enable with: sysctl -w net.ipv4.tcp_fastopen=3  {"Process": "storagenode"}
2024-09-09T18:49:04Z    INFO    Telemetry enabled       {"Process": "storagenode", "instance ID": "125ErUdwQuETNYrMWXGuWCp5sGLoAvrAP4mfjhvFyYHmNriSZ66"}
2024-09-09T18:49:04Z    INFO    Event collection enabled        {"Process": "storagenode", "instance ID": "125ErUdwQuETNYrMWXGuWCp5sGLoAvrAP4mfjhvFyYHmNriSZ66"}
2024-09-09T18:49:04Z    INFO    db.migration    Database Version        {"Process": "storagenode", "version": 61}
2024-09-09T18:49:05Z    INFO    preflight:localtime     start checking local system clock with trusted satellites' system clock.                                                                                    {"Process": "storagenode"}
2024-09-09T18:49:06Z    INFO    preflight:localtime     local system clock is in sync with trusted satellites' system clock.                                                                                        {"Process": "storagenode"}
2024-09-09T18:49:06Z    INFO    Node 125ErUdwQuETNYrMWXGuWCp5sGLoAvrAP4mfjhvFyYHmNriSZ66 started        {"Process": "storagenode"}
2024-09-09T18:49:06Z    INFO    Public server started on [::]:28967     {"Process": "storagenode"}
2024-09-09T18:49:06Z    INFO    Private server started on 127.0.0.1:7778        {"Process": "storagenode"}
2024-09-09T18:49:06Z    INFO    failed to sufficiently increase send buffer size (was: 208 kiB, wanted: 2048 kiB, got: 416 kiB). See https://github.com/quic-go/quic-go/wiki/UDP-Buffer-Sizes for details.          {"Process": "storagenode"}
2024-09-09T18:49:06Z    INFO    bandwidth       Persisting bandwidth usage cache to db  {"Process": "storagenode"}
2024-09-09T18:49:06Z    INFO    pieces:trash    emptying trash started  {"Process": "storagenode", "Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs"}
2024-09-09T18:49:06Z    INFO    lazyfilewalker.trash-cleanup-filewalker starting subprocess     {"Process": "storagenode", "satelliteID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs"}
2024-09-09T18:49:06Z    INFO    collector       expired pieces collection started       {"Process": "storagenode"}
2024-09-09T18:49:06Z    INFO    trust   Scheduling next refresh {"Process": "storagenode", "after": "5h14m57.015154482s"}
2024-09-09T18:49:06Z    INFO    lazyfilewalker.trash-cleanup-filewalker subprocess started      {"Process": "storagenode", "satelliteID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs"}
2024-09-09T18:49:06Z    ERROR   services        unexpected shutdown of a runner {"Process": "storagenode", "name": "piecestore:monitor", "error": "piecestore monitor: error verifying location and/or readability of storage directory: node ID in file (12HFEBiqo4Rkcv9VNHvRxVqMf8VfobtZCQj48b8YzfpnKTnyouW) does not match running node's ID (125ErUdwQuETNYrMWXGuWCp5sGLoAvrAP4mfjhvFyYHmNriSZ66)", "errorVerbose": "piecestore monitor: error verifying location and/or readability of storage directory: node ID in file (12HFEBiqo4Rkcv9VNHvRxVqMf8VfobtZCQj48b8YzfpnKTnyouW) does not match running node's ID (125ErUdwQuETNYrMWXGuWCp5sGLoAvrAP4mfjhvFyYHmNriSZ66)\n\tstorj.io/storj/storagenode/monitor.(*Service).Run.func1.1:159\n\tstorj.io/common/sync2.(*Cycle).Run:99\n\tstorj.io/storj/storagenode/monitor.(*Service).Run.func1:140\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:78"}
2024-09-09T18:49:06Z    ERROR   version failed to get process version info      {"Process": "storagenode", "error": "version checker client: Get \"https://version.storj.io\": context canceled", "errorVerbose": "version checker client: Get \"https://version.storj.io\": context canceled\n\tstorj.io/storj/private/version/checker.(*Client).All:68\n\tstorj.io/storj/private/version/checker.(*Client).Process:89\n\tstorj.io/storj/private/version/checker.(*Service).checkVersion:104\n\tstorj.io/storj/private/version/checker.(*Service).CheckVersion:78\n\tstorj.io/storj/storagenode/version.(*Chore).checkVersion:115\n\tstorj.io/storj/storagenode/version.(*Chore).RunOnce:71\n\tstorj.io/common/sync2.(*Cycle).Run:99\n\tstorj.io/storj/storagenode/version.(*Chore).Run:64\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2.1:87\n\truntime/pprof.Do:51\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2:86\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:78"}
2024-09-09T18:49:06Z    ERROR   piecestore:cache        error during init space usage db:       {"Process": "storagenode", "error": "piece space used: context canceled", "errorVerbose": "piece space used: context canceled\n\tstorj.io/storj/storagenode/storagenodedb.(*pieceSpaceUsedDB).Init:55\n\tstorj.io/storj/storagenode/pieces.(*CacheService).Run:60\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2.1:87\n\truntime/pprof.Do:51\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2:86\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:78"}
2024-09-09T18:49:06Z    ERROR   nodestats:cache Get pricing-model/join date failed      {"Process": "storagenode", "error": "context canceled"}
2024-09-09T18:49:06Z    ERROR   gracefulexit:chore      error retrieving satellites.    {"Process": "storagenode", "error": "satellitesdb: context canceled", "errorVerbose": "satellitesdb: context canceled\n\tstorj.io/storj/storagenode/storagenodedb.(*satellitesDB).ListGracefulExits:197\n\tstorj.io/storj/storagenode/gracefulexit.(*Service).ListPendingExits:59\n\tstorj.io/storj/storagenode/gracefulexit.(*Chore).AddMissing:55\n\tstorj.io/common/sync2.(*Cycle).Run:99\n\tstorj.io/storj/storagenode/gracefulexit.(*Chore).Run:48\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2.1:87\n\truntime/pprof.Do:51\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2:86\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:78"}
2024-09-09T18:49:06Z    ERROR   contact:service ping satellite failed   {"Process": "storagenode", "Satellite ID": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6", "attempts": 1, "error": "ping satellite: rpc: tcp connector failed: rpc: dial tcp: lookup ap1.storj.io: operation was canceled", "errorVerbose": "ping satellite: rpc: tcp connector failed: rpc: dial tcp: lookup ap1.storj.io: operation was canceled\n\tstorj.io/common/rpc.HybridConnector.DialContext.func1:190"}
2024-09-09T18:49:06Z    INFO    contact:service context cancelled       {"Process": "storagenode", "Satellite ID": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6"}
2024-09-09T18:49:06Z    INFO    lazyfilewalker.trash-cleanup-filewalker subprocess exited with status   {"Process": "storagenode", "satelliteID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "status": -1, "error": "signal: killed"}
2024-09-09T18:49:06Z    ERROR   collector       error during expired pieces collection  {"Process": "storagenode", "count": 0, "error": "pieces error: context canceled", "errorVerbose": "pieces error: context canceled\n\tstorj.io/storj/storagenode/pieces.(*Store).GetExpiredBatchSkipV0:614\n\tstorj.io/storj/storagenode/collector.(*Service).Collect:99\n\tstorj.io/storj/storagenode/collector.(*Service).Run.func1:68\n\tstorj.io/common/sync2.(*Cycle).Run:99\n\tstorj.io/storj/storagenode/collector.(*Service).Run:64\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2.1:87\n\truntime/pprof.Do:51\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2:86\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:78"}
2024-09-09T18:49:06Z    ERROR   pieces:trash    emptying trash failed   {"Process": "storagenode", "error": "pieces error: lazyfilewalker: signal: killed", "errorVerbose": "pieces error: lazyfilewalker: signal: killed\n\tstorj.io/storj/storagenode/pieces/lazyfilewalker.(*process).run:85\n\tstorj.io/storj/storagenode/pieces/lazyfilewalker.(*Supervisor).WalkCleanupTrash:195\n\tstorj.io/storj/storagenode/pieces.(*Store).EmptyTrash:436\n\tstorj.io/storj/storagenode/pieces.(*TrashChore).Run.func1.1:84\n\tstorj.io/common/sync2.(*Workplace).Start.func1:89"}
2024-09-09T18:49:06Z    ERROR   collector       error during collecting pieces:         {"Process": "storagenode", "error": "pieces error: context canceled", "errorVerbose": "pieces error: context canceled\n\tstorj.io/storj/storagenode/pieces.(*Store).GetExpiredBatchSkipV0:614\n\tstorj.io/storj/storagenode/collector.(*Service).Collect:99\n\tstorj.io/storj/storagenode/collector.(*Service).Run.func1:68\n\tstorj.io/common/sync2.(*Cycle).Run:99\n\tstorj.io/storj/storagenode/collector.(*Service).Run:64\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2.1:87\n\truntime/pprof.Do:51\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2:86\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:78"}
2024-09-09T18:49:06Z    ERROR   gracefulexit:blobscleaner       couldn't receive satellite's GE status  {"Process": "storagenode", "error": "context canceled"}
2024-09-09T18:49:06Z    ERROR   contact:service ping satellite failed   {"Process": "storagenode", "Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "attempts": 1, "error": "ping satellite: rpc: tcp connector failed: rpc: dial tcp: lookup eu1.storj.io: operation was canceled", "errorVerbose": "ping satellite: rpc: tcp connector failed: rpc: dial tcp: lookup eu1.storj.io: operation was canceled\n\tstorj.io/common/rpc.HybridConnector.DialContext.func1:190"}
2024-09-09T18:49:06Z    INFO    contact:service context cancelled       {"Process": "storagenode", "Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs"}
2024-09-09T18:49:06Z    ERROR   contact:service ping satellite failed   {"Process": "storagenode", "Satellite ID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE", "attempts": 1, "error": "ping satellite: rpc: tcp connector failed: rpc: dial tcp: lookup saltlake.tardigrade.io: operation was canceled", "errorVerbose": "ping satellite: rpc: tcp connector failed: rpc: dial tcp: lookup saltlake.tardigrade.io: operation was canceled\n\tstorj.io/common/rpc.HybridConnector.DialContext.func1:190"}
2024-09-09T18:49:06Z    INFO    contact:service context cancelled       {"Process": "storagenode", "Satellite ID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE"}
2024-09-09T18:49:06Z    ERROR   contact:service ping satellite failed   {"Process": "storagenode", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "attempts": 1, "error": "ping satellite: rpc: tcp connector failed: rpc: dial tcp: lookup us1.storj.io: operation was canceled", "errorVerbose": "ping satellite: rpc: tcp connector failed: rpc: dial tcp: lookup us1.storj.io: operation was canceled\n\tstorj.io/common/rpc.HybridConnector.DialContext.func1:190"}
2024-09-09T18:49:06Z    INFO    contact:service context cancelled       {"Process": "storagenode", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S"}

It seems some of the node’s files were changed or corrupted during my experimental work.

Where can I find my node’s actual identity (node ID)? Is it inside identity.key, or is it hidden inside the authorization token?

So far:

  • The node container was removed and started again by pulling the docker image…
  • I’ve checked the .local/share/storj/identity/storagenode folder to get the node ID, but I could not see it there (a sanity check on the identity files is sketched below)
  • The disk seems healthy, but I’ll run fsck to be 100% sure
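
For reference, I can see the running node’s ID in the startup log line above (“Node … started”) and on the dashboard. If I remember the docs correctly, the identity files themselves can be sanity-checked by counting the certificates in them (using my own paths):

grep -c BEGIN /home/username/.local/share/storj/identity/storagenode1/ca.cert        # expect 2
grep -c BEGIN /home/username/.local/share/storj/identity/storagenode1/identity.cert  # expect 3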

Thank you very much for any hints.

The issue was fixed after rebooting my Raspberry Pi :slight_smile:
It seems the data and the node ID somehow didn’t match up (I have 3 nodes on my RPi).

Please just buy a UPS. This problem is already solved. There is no need to reinvent bicycles from sap and twigs.

:scream_cat:

No… just no… Buy a UPS.

Also, don’t experiment on a production system…

This post is surreal.


It’s simple: you used the wrong path to the identity folder. I do not know how a reboot could fix that, unless you also fixed the path, or, even worse, used the SETUP command; in that case the identity in the protection file has been overwritten with a foreign identity and soon you will have two disqualified nodes.
Please make sure that your scripts don’t include SETUP=true anywhere.


I did not execute the SETUP script (code below).

docker run --rm -e SETUP="true" \
    --user $(id -u):$(id -g) \
    --mount type=bind,source="/home/username/.local/share/storj/identity/storagenode1",destination=/app/identity \
    --mount type=bind,source="/mnt/storj1/",destination=/app/config \
    --name storagenode1 storjlabs/storagenode:latest

The starting script was run after removing the node (docker rm storagenode1):

docker run -d --restart unless-stopped --stop-timeout 300 \
    -p 28967:28967/tcp \
    -p 28967:28967/udp \
    -p 14002:14002 \
    -e WALLET="0xasdasdasdasd" \
    -e EMAIL="abc@gmail.com" \
    -e ADDRESS="STATICIP:28967" \
    -e STORAGE="8.0TB" \
    --user $(id -u):$(id -g) \
    --mount type=bind,source="/home/username/.local/share/storj/identity/storagenode1",destination=/app/identity \
    --mount type=bind,source="/mnt/storj1",destination=/app/config \
    --name storagenode1 storjlabs/storagenode:latest

I guess my node is on the safe side, according to the log output below (just taken):

11:43:16 username@raspberrypi scripts → docker logs --tail 50 storagenode1
2024-09-10T08:43:10Z    INFO    piecestore      downloaded      {"Process": "storagenode", "Piece ID": "YUZMAJT46WNC75LLRTD3AGZEH2XCZC2ZTU2DPI5ZBCRJR62KOAQQ", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "GET_REPAIR", "Offset": 0, "Size": 5888, "Remote Address": "5.161.209.180:60535"}
2024-09-10T08:43:13Z    INFO    piecestore      download started        {"Process": "storagenode", "Piece ID": "CREVNP67KC5FH7BDN62EYPMZWMMMBV2CMNJCJZN3BYO5ILTVUY3A", "Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Action": "GET", "Offset": 0, "Size": 2048, "Remote Address": "79.127.226.100:50850"}
2024-09-10T08:43:13Z    INFO    piecestore      downloaded      {"Process": "storagenode", "Piece ID": "CREVNP67KC5FH7BDN62EYPMZWMMMBV2CMNJCJZN3BYO5ILTVUY3A", "Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Action": "GET", "Offset": 0, "Size": 2048, "Remote Address": "79.127.226.100:50850"}
2024-09-10T08:43:14Z    INFO    piecestore      upload started  {"Process": "storagenode", "Piece ID": "I7G25U5PGWUVW4HF6VMLZA7R6GQCVYOP5B2XYQVFCAYI4G4L6K3Q", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "PUT", "Remote Address": "79.127.205.228:33070", "Available Space": 231298518451}
2024-09-10T08:43:14Z    INFO    piecestore      uploaded        {"Process": "storagenode", "Piece ID": "I7G25U5PGWUVW4HF6VMLZA7R6GQCVYOP5B2XYQVFCAYI4G4L6K3Q", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "PUT", "Remote Address": "79.127.205.228:33070", "Size": 1536}      
2024-09-10T08:43:15Z    INFO    piecestore      download started        {"Process": "storagenode", "Piece ID": "IRSIMW5F5U2MN4VEG2G4CLBTJJTBO5Q7YYD5LSIPWJNMNC73M5JQ", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "GET_REPAIR", "Offset": 0, "Size": 4096, "Remote Address": "5.161.65.146:21660"}
2024-09-10T08:43:15Z    INFO    piecestore      upload started  {"Process": "storagenode", "Piece ID": "RHPJ3VQS5CVNDTB62JKRZV54JCLWJLIBAQZKPD5GXUXNBYSB7UXQ", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "PUT", "Remote Address": "121.127.47.27:40414", "Available Space": 231298516403}
2024-09-10T08:43:15Z    INFO    piecestore      uploaded        {"Process": "storagenode", "Piece ID": "RHPJ3VQS5CVNDTB62JKRZV54JCLWJLIBAQZKPD5GXUXNBYSB7UXQ", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "PUT", "Remote Address": "121.127.47.27:40414", "Size": 15872}      
2024-09-10T08:43:16Z    INFO    piecestore      downloaded      {"Process": "storagenode", "Piece ID": "IRSIMW5F5U2MN4VEG2G4CLBTJJTBO5Q7YYD5LSIPWJNMNC73M5JQ", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "GET_REPAIR", "Offset": 0, "Size": 4096, "Remote Address": "5.161.65.146:21660"}
2024-09-10T08:43:16Z    INFO    piecestore      download started        {"Process": "storagenode", "Piece ID": "2XOBWCJCRBMP6O2N7XKH6WSW2BXJQKOXRUCDJTCQNP77BAWF5D4A", "Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Action": "GET", "Offset": 0, "Size": 1536, "Remote Address": "79.127.226.101:56178"}
2024-09-10T08:43:16Z    INFO    piecestore      downloaded      {"Process": "storagenode", "Piece ID": "2XOBWCJCRBMP6O2N7XKH6WSW2BXJQKOXRUCDJTCQNP77BAWF5D4A", "Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Action": "GET", "Offset": 0, "Size": 1536, "Remote Address": "79.127.226.101:56178"}
2024-09-10T08:43:16Z    INFO    piecestore      download started        {"Process": "storagenode", "Piece ID": "2Z5JG5VK7H2LA223I56KAZS7SKWLZNPZSOZOTAPSLEYV4L2LUZHQ", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "GET_REPAIR", "Offset": 0, "Size": 2319360, "Remote Address": "5.161.254.108:27933"}
2024-09-10T08:43:17Z    INFO    piecestore      downloaded      {"Process": "storagenode", "Piece ID": "2Z5JG5VK7H2LA223I56KAZS7SKWLZNPZSOZOTAPSLEYV4L2LUZHQ", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "GET_REPAIR", "Offset": 0, "Size": 2319360, "Remote Address": "5.161.254.108:27933"}
2024-09-10T08:43:18Z    INFO    piecestore      download started        {"Process": "storagenode", "Piece ID": "HYWKGQUXO35JVNJWDF6UXYZ7RKIYT3QLLKLERPGZ55EHGRSLR3JA", "Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Action": "GET", "Offset": 0, "Size": 768, "Remote Address": "79.127.226.99:55656"}
2024-09-10T08:43:18Z    INFO    piecestore      downloaded      {"Process": "storagenode", "Piece ID": "HYWKGQUXO35JVNJWDF6UXYZ7RKIYT3QLLKLERPGZ55EHGRSLR3JA", "Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Action": "GET", "Offset": 0, "Size": 768, "Remote Address": "79.127.226.99:55656"}
2024-09-10T08:43:20Z    INFO    piecestore      download started        {"Process": "storagenode", "Piece ID": "RQMGYX3DULXQIAZJSNUNJVKGAOAF7SSZZXSGHTV7BSCGTDAU4VFQ", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "GET_REPAIR", "Offset": 0, "Size": 1792, "Remote Address": "199.102.71.23:40036"}
2024-09-10T08:43:20Z    INFO    piecestore      downloaded      {"Process": "storagenode", "Piece ID": "RQMGYX3DULXQIAZJSNUNJVKGAOAF7SSZZXSGHTV7BSCGTDAU4VFQ", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "GET_REPAIR", "Offset": 0, "Size": 1792, "Remote Address": "199.102.71.23:40036"}
2024-09-10T08:43:20Z    INFO    piecestore      upload started  {"Process": "storagenode", "Piece ID": "LKBGSM5Z7EQYCJMV2ILQVGO22WTLDKFKC2E762BX2OIQEBO7WNLA", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "PUT", "Remote Address": "79.127.226.97:58390", "Available Space": 231298500019}
2024-09-10T08:43:20Z    INFO    piecestore      uploaded        {"Process": "storagenode", "Piece ID": "LKBGSM5Z7EQYCJMV2ILQVGO22WTLDKFKC2E762BX2OIQEBO7WNLA", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "PUT", "Remote Address": "79.127.226.97:58390", "Size": 15360}      
2024-09-10T08:43:20Z    INFO    piecestore      upload started  {"Process": "storagenode", "Piece ID": "AJFTJZZLWODVEIB3XSNSEB2ZGHTVHYQZ6YBPQX54CYJTJXYRJ3LQ", "Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Action": "PUT", "Remote Address": "79.127.219.46:48988", "Available Space": 231298484147}
2024-09-10T08:43:22Z    INFO    piecestore      uploaded        {"Process": "storagenode", "Piece ID": "AJFTJZZLWODVEIB3XSNSEB2ZGHTVHYQZ6YBPQX54CYJTJXYRJ3LQ", "Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Action": "PUT", "Remote Address": "79.127.219.46:48988", "Size": 2319360}    
2024-09-10T08:43:23Z    INFO    piecestore      download started        {"Process": "storagenode", "Piece ID": "6QSXB5LUMOIMLBIKKYBPNDTWWOQUIJ4QR6ILCDSOCZ2PLTQ7JPMQ", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "GET_REPAIR", "Offset": 0, "Size": 1792, "Remote Address": "199.102.71.16:50112"}
2024-09-10T08:43:23Z    INFO    piecestore      downloaded      {"Process": "storagenode", "Piece ID": "6QSXB5LUMOIMLBIKKYBPNDTWWOQUIJ4QR6ILCDSOCZ2PLTQ7JPMQ", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "GET_REPAIR", "Offset": 0, "Size": 1792, "Remote Address": "199.102.71.16:50112"}
2024-09-10T08:43:24Z    INFO    piecestore      download started        {"Process": "storagenode", "Piece ID": "YRRXLAX4VSTJYWCFQXOD3DQWYXLSRSYESRKANDMJY5ZIIBWGDHKA", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "GET_REPAIR", "Offset": 0, "Size": 2174464, "Remote Address": "199.102.71.66:48770"}
2024-09-10T08:43:25Z    INFO    piecestore      upload started  {"Process": "storagenode", "Piece ID": "D37DCQLVSR446U4YC4B6MLT32OSFLPL5O7JELHLVPR4NZFF4WTDA", "Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Action": "PUT", "Remote Address": "79.127.226.98:48312", "Available Space": 231296164275}
2024-09-10T08:43:25Z    INFO    piecestore      uploaded        {"Process": "storagenode", "Piece ID": "D37DCQLVSR446U4YC4B6MLT32OSFLPL5O7JELHLVPR4NZFF4WTDA", "Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Action": "PUT", "Remote Address": "79.127.226.98:48312", "Size": 2560}       
2024-09-10T08:43:27Z    INFO    piecestore      downloaded      {"Process": "storagenode", "Piece ID": "YRRXLAX4VSTJYWCFQXOD3DQWYXLSRSYESRKANDMJY5ZIIBWGDHKA", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "GET_REPAIR", "Offset": 0, "Size": 2174464, "Remote Address": "199.102.71.66:48770"}
2024-09-10T08:43:27Z    INFO    piecestore      download started        {"Process": "storagenode", "Piece ID": "6MDOLACT5722N4DIL3JFNBIBIMOANOUCYWGSCBMG4BTMNKL3A3AA", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "GET_REPAIR", "Offset": 0, "Size": 181504, "Remote Address": "5.161.236.145:53519"}
2024-09-10T08:43:28Z    INFO    piecestore      upload started  {"Process": "storagenode", "Piece ID": "KMKIBQSUFMWE4D3Z7TP4BE5EGI7P4AV6TLTPZJOJG3PP37VLH4GQ", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "PUT_REPAIR", "Remote Address": "199.102.71.23:48814", "Available Space": 231296161203}
2024-09-10T08:43:28Z    INFO    piecestore      uploaded        {"Process": "storagenode", "Piece ID": "KMKIBQSUFMWE4D3Z7TP4BE5EGI7P4AV6TLTPZJOJG3PP37VLH4GQ", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "PUT_REPAIR", "Remote Address": "199.102.71.23:48814", "Size": 288768}
2024-09-10T08:43:28Z    INFO    piecestore      upload started  {"Process": "storagenode", "Piece ID": "MCIHHXBYCIJE3JIWAQOM4AK4KF64YGGOAGCBJVVZIE4WNWDGHMPQ", "Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Action": "PUT", "Remote Address": "79.127.226.99:46978", "Available Space": 231295871923}
2024-09-10T08:43:28Z    INFO    piecestore      downloaded      {"Process": "storagenode", "Piece ID": "6MDOLACT5722N4DIL3JFNBIBIMOANOUCYWGSCBMG4BTMNKL3A3AA", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "GET_REPAIR", "Offset": 0, "Size": 181504, "Remote Address": "5.161.236.145:53519"}
2024-09-10T08:43:28Z    INFO    piecestore      uploaded        {"Process": "storagenode", "Piece ID": "MCIHHXBYCIJE3JIWAQOM4AK4KF64YGGOAGCBJVVZIE4WNWDGHMPQ", "Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Action": "PUT", "Remote Address": "79.127.226.99:46978", "Size": 87040}      
2024-09-10T08:43:29Z    INFO    piecestore      download started        {"Process": "storagenode", "Piece ID": "YVYYUIGTF6TAPD2SPINNUTVWWNVIZTNWZUYX57W5GGDTIYOTTLOQ", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "GET_REPAIR", "Offset": 0, "Size": 3840, "Remote Address": "199.102.71.66:60732"}
2024-09-10T08:43:29Z    INFO    piecestore      downloaded      {"Process": "storagenode", "Piece ID": "GP3YAOU6NWOGUF23WLSAE64B5FTULZOBCZHV5BPHFZNXM2CEJTOQ", "Satellite ID": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6", "Action": "GET", "Offset": 0, "Size": 181248, "Remote Address": "79.127.205.238:47374"}
2024-09-10T08:43:29Z    INFO    piecestore      downloaded      {"Process": "storagenode", "Piece ID": "YVYYUIGTF6TAPD2SPINNUTVWWNVIZTNWZUYX57W5GGDTIYOTTLOQ", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "GET_REPAIR", "Offset": 0, "Size": 3840, "Remote Address": "199.102.71.66:60732"}
2024-09-10T08:43:29Z    INFO    piecestore      upload started  {"Process": "storagenode", "Piece ID": "M2E6LNND2GBF3SD4X7INL75JNZWGZ7NHBXLVIK7WMHHWTGKBRSAQ", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "PUT", "Remote Address": "79.127.219.39:40436", "Available Space": 231295784371}
2024-09-10T08:43:29Z    INFO    piecestore      uploaded        {"Process": "storagenode", "Piece ID": "M2E6LNND2GBF3SD4X7INL75JNZWGZ7NHBXLVIK7WMHHWTGKBRSAQ", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "PUT", "Remote Address": "79.127.219.39:40436", "Size": 768}        
2024-09-10T08:43:30Z    INFO    piecestore      download started        {"Process": "storagenode", "Piece ID": "GP3YAOU6NWOGUF23WLSAE64B5FTULZOBCZHV5BPHFZNXM2CEJTOQ", "Satellite ID": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6", "Action": "GET", "Offset": 111616, "Size": 69632, "Remote Address": "79.127.205.238:47374"}
2024-09-10T08:43:31Z    INFO    piecestore      download started        {"Process": "storagenode", "Piece ID": "PAUAHSIX3OGRGTE2424BHS23FKKD5EZGZPINQDDTGZ7G6FQT5UFQ", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "GET_REPAIR", "Offset": 0, "Size": 13824, "Remote Address": "178.156.132.39:9924"}
2024-09-10T08:43:31Z    INFO    piecestore      download started        {"Process": "storagenode", "Piece ID": "64I6HTRWV3QSQOKNEJELRLP35K6F6ZEACTQZOODT4F3BYSORMAEQ", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "GET", "Offset": 0, "Size": 181248, "Remote Address": "109.61.92.65:51870"}
2024-09-10T08:43:31Z    INFO    piecestore      downloaded      {"Process": "storagenode", "Piece ID": "PAUAHSIX3OGRGTE2424BHS23FKKD5EZGZPINQDDTGZ7G6FQT5UFQ", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "GET_REPAIR", "Offset": 0, "Size": 13824, "Remote Address": "178.156.132.39:9924"}
2024-09-10T08:43:31Z    INFO    piecestore      download started        {"Process": "storagenode", "Piece ID": "5SA3N4QXG4AME5VJHYBI5TLDLXDBZY25UJANWDLXYZS55FVEB7RQ", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "GET", "Offset": 0, "Size": 181248, "Remote Address": "109.61.92.70:43162"}
2024-09-10T08:43:31Z    INFO    piecestore      uploaded        {"Process": "storagenode", "Piece ID": "LWOKCYLT2QIRHL75IR5S2QCHA22R55OCAZYUVMXOWWRSWGLCPDQA", "Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Action": "PUT", "Remote Address": "79.127.226.98:33784", "Size": 2101504}    
2024-09-10T08:43:32Z    INFO    piecestore      downloaded      {"Process": "storagenode", "Piece ID": "64I6HTRWV3QSQOKNEJELRLP35K6F6ZEACTQZOODT4F3BYSORMAEQ", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "GET", "Offset": 0, "Size": 181248, "Remote Address": "109.61.92.65:51870"}
2024-09-10T08:43:33Z    INFO    piecestore      download started        {"Process": "storagenode", "Piece ID": "4QGPSGUQ5NWQYUORTKOYXFJOU2HB7LLUWFMPC5MLNBHAV2G7KXHQ", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "GET_REPAIR", "Offset": 0, "Size": 2048, "Remote Address": "199.102.71.63:60110"}
2024-09-10T08:43:33Z    INFO    piecestore      upload started  {"Process": "storagenode", "Piece ID": "MAK6Y562WXRFDHFKXMVEMACXT2XBZXDP5EINN4EMYTQNGECX4NXA", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "PUT", "Remote Address": "79.127.226.102:37664", "Available Space": 231293681075}
2024-09-10T08:43:33Z    INFO    piecestore      downloaded      {"Process": "storagenode", "Piece ID": "4QGPSGUQ5NWQYUORTKOYXFJOU2HB7LLUWFMPC5MLNBHAV2G7KXHQ", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "GET_REPAIR", "Offset": 0, "Size": 2048, "Remote Address": "199.102.71.63:60110"}
2024-09-10T08:43:33Z    INFO    piecestore      uploaded        {"Process": "storagenode", "Piece ID": "MAK6Y562WXRFDHFKXMVEMACXT2XBZXDP5EINN4EMYTQNGECX4NXA", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "PUT", "Remote Address": "79.127.226.102:37664", "Size": 15872}     
2024-09-10T08:43:33Z    INFO    piecestore      upload started  {"Process": "storagenode", "Piece ID": "STRMX3QISRLKQELNE7FKGRLD25QEABFAX4WHZDRDTZTM5WONDB6A", "Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Action": "PUT", "Remote Address": "79.127.226.101:51540", "Available Space": 231293664691}

Thanks

It depends on how often power outages occur and how long they last in general, so these two parameters should be considered before choosing the right solution (a UPS, or increasing the hold-up time of the 12V rail: 10ms or 20ms should be enough for the HDD’s controller to complete the last request in flight; typical 12V adapters don’t have enough output capacitance for that).

But anyway, it’s always better to have a UPS if we leave money out of the equation.

This is 1 or 2 I/O operations. I’ve seen tens, sometimes hundreds of writes pending on a Storj drive.

That’s really interesting though. I’m curious what kind of power problems can just change directory owners stored in directory inodes, when there are basically no writes to these inodes during regular node operation.

I would like to suggest moving/copying it to the disk with its data, here:

and use this new path for the identity instead of the system drive, because neither the identity nor the data can be used without the other. It should reduce the risk of mixing up an identity and its data in multi-node setups.
It could also be useful to keep a backup of the identity for the rare case when only the identity is corrupted on the data disk (usually both get corrupted, but the node can survive up to 4% data loss, whereas if one of the identity files becomes unrecoverable, the node is gone; it would be a shame to lose a node over one file in the identity folder).
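
A minimal sketch of that move, reusing the paths already posted in this thread (stop the node first; the identity subfolder name on the data disk is just an example):

docker stop -t 300 storagenode1
cp -r /home/username/.local/share/storj/identity/storagenode1 /mnt/storj1/identity
# then change the identity mount in the run script:
#   --mount type=bind,source="/mnt/storj1/identity",destination=/app/identity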


My bet is something happens during the boot process. If it happens again, you could boot from a live USB and check the ownership of your dir (check the ownership number, not the name). Then you could use chattr +i to set your dir to immutable, try to reboot again, and see what breaks during boot, then fix it.
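
Roughly like this, if it helps (path from this thread):

stat -c '%u:%g %n' /mnt/storj1    # numeric owner:group, independent of name mapping
sudo chattr +i /mnt/storj1        # make the directory itself immutable
lsattr -d /mnt/storj1             # confirm the 'i' attribute is set
# remember to remove it after the test: sudo chattr -i /mnt/storj1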


Thanks for the tip, I’ll try.

What happens is that the values in the owner column change (including some files inside storage/, trash/, etc.); last time some of them changed from my current username to docker, but before that they had changed to root.
The OS of the Raspberry Pi hasn’t been updated for a long time; I’ll do that.
[screenshot: ls -l output showing the changed owner column]

This is 1 or 2 I/O operations. I’ve seen tens, sometimes hundreds of writes pending on a Storj drive.

My assumption was to prolong the turn-off time of the HDDs (meaning the RPi powers off before the HDDs do, so there are no further write requests from Storj). In my system there are USB-to-SATA converters, currently from two different brands (some nodes have Ugreen, others Axagon).
I don’t know exactly how the controller inside the HDD works (the length of its queue, how it behaves during power-off…). If one I/O operation takes ~10ms and the HDD’s controller has a long queue, then, assuming the problem is what I suspect, increasing the output capacitance is not going to solve the issue.
I could not find information on the web about it; it seems one HDD would need to be disassembled to look up the datasheets of some critical ICs on it.
The datasheet of the Seagate IronWolf 8TB says almost nothing (it was only useful for calculating the capacitance needed for a 20ms hold-up time): Data Sheet (seagate.com)
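
For reference, the rough hold-up estimate I used (the ~5W draw on the 12V rail and the 10% allowed voltage droop are my own assumptions, not datasheet values):

$$C = \frac{2\,P\,t_{hold}}{V_{nom}^2 - V_{min}^2} = \frac{2 \cdot 5\,\mathrm{W} \cdot 0.02\,\mathrm{s}}{(12\,\mathrm{V})^2 - (10.8\,\mathrm{V})^2} \approx 7.3\,\mathrm{mF}$$

which lands in the same ballpark as the 6mF buffer I tried.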

I’ve seen tens, sometimes hundreds of writes pending on a Storj drive.

Could you please tell me how I can check it or monitor it?

That’s really interesting though. I’m curious what kind of power problems can just change directory owners stored in directory inodes, when there are basically no writes to these inodes during regular node operation.

After @kocoten1992’s comment, my attention will be on OS-related things… His feedback sounds more realistic right now than my assumption.

You want to study NCQ. Most modern drives support it. You can disable it on the OS side, but probably at the cost of a bit of performance. I recall (though I cannot figure out where I found it) that for HDDs a queue of 16 items is usually expected. That’s the queue internal to the HDD; there’s also a queue in the operating system as well.
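
If you want to inspect or limit it on the OS side, the SATA queue depth is exposed through sysfs (sda here is a placeholder for your actual device; setting it to 1 effectively disables NCQ):

cat /sys/block/sda/device/queue_depth                  # current queue depth (often 31)
echo 1 | sudo tee /sys/block/sda/device/queue_depth    # effectively disable NCQ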

iostat -xt 1 as an example.
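
The columns to watch there (names vary a bit between sysstat versions):

iostat -xt 1 /dev/sda
# aqu-sz (avgqu-sz on older versions): average request queue length
# w/s, wkB/s: write requests per second and write throughput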

Some namespace user mapping issues… yeah, that could make sense.


Many times it is a “me” problem (you set up some script, then forget about it). There is also auditd; you could try that to figure out what touches your dir, that might give some clue…
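
Something like this, as a sketch (path from this thread; the storj-owner key name is just a label I made up; note that a watch covers the directory inode itself, so you may need extra watches on subdirs):

sudo auditctl -w /mnt/storj1 -p a -k storj-owner   # log attribute changes (chown/chmod) on the dir
sudo ausearch -k storj-owner                       # later: see which process/user did it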


It’s possible, and it depends on how you run the docker command and which options you specify.
For example, if you use sudo docker run, it will run as root and the ownership of some files/folders could change to root:root. If you also use --user $(id -u):$(id -g) in your docker run command, some ownership could change to your user (id -u) and group (id -g).
If you do not use sudo, then the docker run command would use your user and the group docker to run the container. If you also use --user $(id -u):$(id -g), it would try to use your user and your group inside the container.
So, I would recommend either never using sudo or always using it; do not mix. Whichever you choose, you need to update the owner to the appropriate value recursively.
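
A sketch of that recursive fix, with the paths used earlier in this thread (stop the node first, and double-check the user/group before running it):

docker stop -t 300 storagenode1
sudo chown -R $(id -u):$(id -g) /mnt/storj1
sudo chown -R $(id -u):$(id -g) /home/username/.local/share/storj/identity/storagenode1
docker start storagenode1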


Thanks Alexey, good to know, I wasn’t aware of this (I remember I was mixing them some time ago)…

If you do not use sudo, then the docker run command would use your user and the group docker to run the container. If you also use --user $(id -u):$(id -g), it would try to use your user and your group inside the container.

I’m going to stop my nodes and start them again without putting sudo in front of the docker command. Since there is --user $(id -u):$(id -g) in the starting script, all should be fine.

01:07:42 anaximandros@raspberrypi scripts → cat ~/scripts/storj1-start.sh 
docker run -d --restart unless-stopped --stop-timeout 300 \
    -p 28967:28967/tcp \
    -p 28967:28967/udp \
    -p 14002:14002 \
    -e WALLET="0xancdcdcasdasd" \
    -e EMAIL="email@gmail.com" \
    -e ADDRESS="STATICIP:28967" \
    -e STORAGE="8.0TB" \
    --user $(id -u):$(id -g) \
    --mount type=bind,source="/home/anaximandros/.local/share/storj/identity/storagenode1",destination=/app/identity \
    --mount type=bind,source="/mnt/storj1",destination=/app/config \
    --name storagenode1 storjlabs/storagenode:latest

Just one question: is there a way to be 100% sure that --user $(id -u):$(id -g) was applied properly by Docker? How can I check it?

Lastly, this is the current status of /mnt/storj1 folder:

01:06:40 anaximandros@raspberrypi scripts → stat /mnt/storj1/
  File: /mnt/storj1/
  Size: 4096            Blocks: 8          IO Block: 4096   directory
Device: 811h/2065d      Inode: 2           Links: 7
Access: (0755/drwxr-xr-x)  Uid: ( 1000/anaximandros)   Gid: ( 1000/anaximandros)
Access: 2024-09-12 13:06:39.925834520 +0300
Modify: 2024-09-12 11:29:03.946549945 +0300
Change: 2024-09-12 11:29:03.946549945 +0300
 Birth: 2023-01-23 23:56:16.000000000 +0300

Both UID and GID are 1000 (which means anaximandros).

You would see a different user and group for the files inside the storagenode location, like root, or maybe just numeric IDs (if they are not mapped properly).
You may also check inside the container with the simple command ls -l:

docker exec -it storagenode ls -l config/storage
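
To answer the --user question directly, you could also ask the running container who it is (container name as used in this thread; docker inspect reads back the configured value):

docker exec storagenode1 id                               # effective uid/gid inside the container
docker inspect --format '{{.Config.User}}' storagenode1   # the --user value the container was created with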

I generally would not rely on the user in the container matching the user on the host. I’d say on a modern, properly configured Linux it most likely won’t match, being in a different namespace. It’s best to open a shell in the container and check from there, as is done in the example above.

Further reading: https://www.redhat.com/sysadmin/user-namespaces-selinux-rootless-containers
