Node offline: Docker segmentation fault

Someone has made a serious mess of my node. I'm reconfiguring it! The error that appears is this one:

storagewars@raspberrypi:~/Hds/HD1 $ sudo service docker start
Job for docker.service failed because a fatal signal was delivered causing the control process to dump core.
See "systemctl status docker.service" and "journalctl -xe" for details.
storagewars@raspberrypi:~/Hds/HD1 $
░ The job identifier is 2066.
Feb 10 20:02:36 raspberrypi systemd[1]: docker.service: Main process exited, code=dumped, status=11/SEGV
░░ Subject: Unit process exited
░░ Defined-By: systemd
░░ Support: https://www.debian.org/support
░░
░░ An ExecStart= process belonging to unit docker.service has exited.
Only SSH works on the Pi.
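
For reference, a minimal sketch of the diagnostics the error message itself points to (nothing Storj-specific, just the systemd service named in the output above):

```
# Current state of the Docker service and its most recent log lines
sudo systemctl status docker.service

# Journal entries for the Docker daemon from the current boot
sudo journalctl -u docker.service -b --no-pager | tail -n 50

# Optionally run the daemon in the foreground to see where it crashes
sudo dockerd --debug
```

The status=11/SEGV line means the Docker daemon itself is crashing before any container starts, so the problem is with the Docker installation rather than with the storagenode container.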

I had it working over VNC. After a crash it doesn't boot properly. I purchased a micro-HDMI cable, but it still doesn't show an image, not even over VNC. I rebuilt the writable (Windows-visible) boot partition on the SD card to include the ssh file, removed one drive, and edited the fstab file. It turns on and I can connect over SSH… After all this, Docker doesn't work! :upside_down_face::face_with_raised_eyebrow::face_with_raised_eyebrow: Any guides on how to make it work would be appreciated.
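
For anyone repeating this step: on Raspberry Pi OS, SSH can be enabled headlessly by creating an empty file named ssh on the boot (FAT) partition of the SD card, which is the partition visible from Windows. A minimal sketch, assuming the partition is mounted at /media/$USER/bootfs (the mount point and label vary between OS and image versions):

```
# An empty file named "ssh" in the boot partition tells Raspberry Pi OS
# to enable the SSH server on the next boot
touch /media/$USER/bootfs/ssh
```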

From my experience with the RPi: when you install the image on the SD card, rather use the Raspberry Pi Imager tool. I think it may be deprecated by now, but it's probably still around somewhere. There you can also set the SSH config.

But if you did a fresh install, why is there still so much trouble? Maybe it's hardware related?
You said you removed one drive; how many drives are on the RPi? You know its USB is not very powerful. I wouldn't power any disk from the RPi's USB alone.

Thank you for your reply! The connected drive has its own power supply. How do I make it work? I can connect over SSH, but I can't get Docker to start its service!

How did you install docker? On what OS?

From the command line. It's running a 32-bit version.
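
A few generic checks to confirm what is actually running (a hedged sketch, not Storj-specific):

```
# OS release and userland architecture (armhf = 32-bit, aarch64/arm64 = 64-bit)
cat /etc/os-release
uname -m
dpkg --print-architecture

# Docker version currently installed, if any
docker --version
```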

Something like this?
https://docs.docker.com/engine/install/raspberry-pi-os/#install-using-the-repository
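
That page walks through installing from Docker's apt repository. As a shorter alternative mentioned in the same documentation, the convenience script also works on Raspberry Pi OS; a minimal sketch (check the linked page for the currently recommended commands):

```
# Install Docker using Docker's convenience script
curl -fsSL https://get.docker.com -o get-docker.sh
sudo sh get-docker.sh

# Let the current user run docker without sudo (takes effect after re-login)
sudo usermod -aG docker $USER

# Start the daemon now and enable it on boot
sudo systemctl enable --now docker
```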

Is that your advice? I will try it later today!

I had it working previously. Since the crash it doesn't work. I will try to reinstall it later, maybe tomorrow; I don't have time today.

I've got these errors here:

```
2025-02-11T18:22:57Z    ERROR   piecestore:cache        error getting current used space for trash:     {"Process": "storagenode", "error": "filestore error: failed to walk trash namespace af2c42003efc826ab4361f73f9d890942146fe0ebe806786f8e7190800000000: context canceled", "errorVerbose": "filestore error: failed to walk trash namespace af2c42003efc826ab4361f73f9d890942146fe0ebe806786f8e7190800000000: context canceled\n\tstorj.io/storj/storagenode/blobstore/filestore.(*blobStore).SpaceUsedForTrash:302\n\tstorj.io/storj/storagenode/pieces.(*CacheService).Run.func1:105\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:78"}
2025-02-11T18:22:57Z    INFO    lazyfilewalker.trash-cleanup-filewalker subprocess exited with status   {"Process": "storagenode", "satelliteID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "status": -1, "error": "signal: killed"}
2025-02-11T18:22:57Z    ERROR   pieces:trash    emptying trash failed   {"Process": "storagenode", "error": "pieces error: lazyfilewalker: signal: killed", "errorVerbose": "pieces error: lazyfilewalker: signal: killed\n\tstorj.io/storj/storagenode/pieces/lazyfilewalker.(*process).run:85\n\tstorj.io/storj/storagenode/pieces/lazyfilewalker.(*Supervisor).WalkCleanupTrash:196\n\tstorj.io/storj/storagenode/pieces.(*Store).EmptyTrash:486\n\tstorj.io/storj/storagenode/pieces.(*TrashChore).Run.func1.1:86\n\tstorj.io/common/sync2.(*Workplace).Start.func1:89"}
2025-02-11T18:22:57Z    ERROR   failure during run      {"Process": "storagenode", "error": "piecestore monitor: error verifying location and/or readability of storage directory: open config/storage/storage-dir-verification: no such file or directory", "errorVerbose": "piecestore monitor: error verifying location and/or readability of storage directory: open config/storage/storage-dir-verification: no such file or directory\n\tstorj.io/storj/storagenode/monitor.(*Service).verifyStorageDir:161\n\tstorj.io/common/sync2.(*Cycle).Run:102\n\tstorj.io/storj/storagenode/monitor.(*Service).Run.func1:109\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:78"}
Error: piecestore monitor: error verifying location and/or readability of storage directory: open config/storage/storage-dir-verification: no such file or directory
2025-02-11 18:22:57,512 INFO stopped: storagenode (exit status 1)
2025-02-11 18:22:57,514 INFO stopped: processes-exit-eventlistener (terminated by SIGTERM)
```

After editing how the node starts, I get:

```
2025-02-11T18:39:33Z    INFO    Anonymized tracing enabled      {"Process": "storagenode"}
2025-02-11T18:39:33Z    INFO    Operator email  {"Process": "storagenode", "Address": "tcar#5@gmail.com"}
2025-02-11T18:39:33Z    INFO    Operator wallet {"Process": "storagenode", "Address": "0x20ba0ed29b38f63cfe96193b1e858a"}
2025-02-11T18:39:33Z    ERROR   failure during run      {"Process": "storagenode", "error": "Error opening database on storagenode: group:\n--- stat config/storage/blobs: no such file or directory\n--- stat config/storage/temp: no such file or directory\n--- stat config/storage/trash: no such file or directory", "errorVerbose": "Error opening database on storagenode: group:\n--- stat config/storage/blobs: no such file or directory\n--- stat config/storage/temp: no such file or directory\n--- stat config/storage/trash: no such file or directory\n\tmain.cmdRun:69\n\tmain.newRunCmd.func1:33\n\tstorj.io/common/process.cleanup.func1.4:392\n\tstorj.io/common/process.cleanup.func1:410\n\tgithub.com/spf13/cobra.(*Command).execute:983\n\tgithub.com/spf13/cobra.(*Command).ExecuteC:1115\n\tgithub.com/spf13/cobra.(*Command).Execute:1039\n\tstorj.io/common/process.ExecWithCustomOptions:112\n\tmain.main:34\n\truntime.main:272"}
Error: Error opening database on storagenode: group:
--- stat config/storage/blobs: no such file or directory
--- stat config/storage/temp: no such file or directory
--- stat config/storage/trash: no such file or directory
2025-02-11 18:39:33,706 INFO exited: storagenode (exit status 1; not expected)
2025-02-11 18:39:34,708 INFO gave up: storagenode entered FATAL state, too many start retries too quickly
2025-02-11 18:39:35,710 WARN received SIGQUIT indicating exit request
2025-02-11 18:39:35,711 INFO waiting for processes-exit-eventlistener, storagenode-updater to die
2025-02-11T18:39:35Z    INFO    Got a signal from the OS: "terminated"  {"Process": "storagenode-updater"}
2025-02-11 18:39:35,715 INFO stopped: storagenode-updater (exit status 0)

storagewars@raspberrypi:~/Hds/HD1/storage $ ls
bandwidth.db  garbage  heldamount.db  notifications.db  piece_expiration.db  pieceinfo.db  pricing.db  satellites.db  storage_usage.db  trash  used_space_per_prefix.db
blobs  garbage_collection_filewalker_progress.db  info.db  orders.db  piece_expirations  piece_spaced_used.db  reputation.db  secret.db  temp  used_serial.db
```

???

Guidance here would be welcome!
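
The "open config/storage/storage-dir-verification: no such file or directory" and "stat config/storage/blobs" errors both mean that the directory mounted into the container at /app/config does not contain the expected storage layout: with the usual storagenode docker run command, config/storage/... inside the container corresponds to <mounted source>/storage/... on the host. A minimal check, assuming the container is named storagenode and the data lives under /home/storagewars/Hds/HD1 as shown above (adjust both names to your setup):

```
# The verification file is written during node setup and must exist on the host
ls -lh /home/storagewars/Hds/HD1/storage/storage-dir-verification

# Compare what the container actually mounts against the host paths
docker inspect storagenode --format '{{ range .Mounts }}{{ .Source }} -> {{ .Destination }}{{ "\n" }}{{ end }}'
```

If the file really is missing on the host (and not just being looked for under the wrong mount), the node refuses to start on purpose, to avoid writing into an empty or wrong directory.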

What does „ls -lh“ show in the storage folder?
Why do you post your email and wallet address in a public forum?

My mistake! Here's the output:

```
storagewars@raspberrypi:~/Hds/HD1 $ ls -lh
total 60K
-rwxrwxrwx 1 storagewars storagewars  11K Jun  7  2024 config.yaml
drwxrwxrwx 2 storagewars storagewars 4.0K Nov 30  2023 ID2
drwxrwxrwx 2 storagewars storagewars 4.0K Nov 30  2023 lost+found
drwxrwxrwx 4 storagewars storagewars 4.0K Aug 29  2023 orders
drwxr-xr-x 2 root        root        4.0K May 20  2024 retain
-rwxrwxrwx 1 storagewars storagewars  32K Feb 11 20:09 revocations.db
drwxrwxrwx 7 storagewars storagewars 4.0K Feb 11 20:09 storage
-rw------- 1 root        root        1.9K Feb 11 20:09 trust-cache.json
storagewars@raspberrypi:~/Hds/HD1 $
storagewars@raspberrypi:~/Hds/HD1/storage $ ls -lh
total 476K
-rwxrwxrwx 1 storagewars storagewars  36K Feb 11 20:10 bandwidth.db
drwxrwxrwx 6 root        root        4.0K Aug 29  2023 blobs
drwxrwxrwx 2 storagewars storagewars 4.0K Apr  7  2024 garbage
-rw-r--r-- 1 root        root         24K Feb 11 20:10 garbage_collection_filewalker_progress.db
-rwxrwxrwx 1 storagewars storagewars  32K Feb 11 20:10 heldamount.db
-rwxrwxrwx 1 storagewars storagewars  16K Feb 11 20:10 info.db
-rwxrwxrwx 1 storagewars storagewars  24K Feb 11 20:10 notifications.db
-rwxrwxrwx 1 storagewars storagewars  32K Feb 11 20:10 orders.db
-rwxrwxrwx 1 storagewars storagewars  36K Feb 11 20:10 piece_expiration.db
drwxrwxrwx 2 storagewars storagewars 4.0K Feb 11 18:21 piece_expirations
-rwxrwxrwx 1 storagewars storagewars  24K Feb 11 20:10 pieceinfo.db
-rwxrwxrwx 1 storagewars storagewars  24K Feb 11 20:10 piece_spaced_used.db
-rw-r--r-- 1 root        root         24K Feb 11 20:10 pricing.db
-rwxrwxrwx 1 storagewars storagewars  24K Feb 11 20:10 reputation.db
-rw-r--r-- 1 root        root         32K Feb 11 20:10 satellites.db
-rwxrwxrwx 1 storagewars storagewars  24K Feb 11 20:10 secret.db
-rw-r--r-- 1 root        root         24K Feb 11 20:10 storage_usage.db
drwxrwxrwx 2 storagewars storagewars  36K May  6  2024 temp
drwxrwxrwx 6 root        root        4.0K Apr 19  2024 trash
-rwxrwxrwx 1 storagewars storagewars  20K Feb 11 20:10 used_serial.db
-rwxrwxrwx 1 storagewars storagewars  24K Feb 11 20:10 used_space_per_prefix.db
storagewars@raspberrypi:~/Hds/HD1/storage $
```

Should I change ownership and permissions?

Yes. The user your Docker container runs as has no permission on the root-owned folders.

All to root or to storagewars? All to drwxrwxrwx? Changing the owner now.

storagewars@raspberrypi:~ $ sudo su
root@raspberrypi:/home/storagewars# cd /
root@raspberrypi:/# chown -R storagewars: /home/storagewars/Hds/HD1/storage
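
For reference, chown -R storagewars: (with the trailing colon) already sets the group to that user's login group, so it is effectively the same as the user:group form discussed below. What matters is whether anything was skipped; a hedged sketch to check:

```
# List anything under the storage folder that is still not owned by storagewars
sudo find /home/storagewars/Hds/HD1/storage ! -user storagewars -ls | head -n 20

# If needed, repeat the ownership change with the explicit user:group form
sudo chown -R storagewars:storagewars /home/storagewars/Hds/HD1/storage
```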


I can't change the owner on blobs and on trash.

root@raspberrypi:/# ls -l /home/storagewars/Hds/HD1/storage/
total 476
-rwxrwxrwx 1 storagewars storagewars 36864 Feb 11 20:37 bandwidth.db
drwxrwxrwx 6 root        root         4096 Aug 29  2023 blobs
drwxrwxrwx 2 storagewars storagewars  4096 Apr  7  2024 garbage
-rw-r--r-- 1 storagewars storagewars 24576 Feb 11 20:37 garbage_collection_filewalker_progress.db
-rwxrwxrwx 1 storagewars storagewars 32768 Feb 11 20:37 heldamount.db
-rwxrwxrwx 1 storagewars storagewars 16384 Feb 11 20:37 info.db
-rwxrwxrwx 1 storagewars storagewars 24576 Feb 11 20:37 notifications.db
-rwxrwxrwx 1 storagewars storagewars 32768 Feb 11 20:37 orders.db
-rwxrwxrwx 1 storagewars storagewars 36864 Feb 11 20:37 piece_expiration.db
drwxrwxrwx 2 storagewars storagewars  4096 Feb 11 18:21 piece_expirations
-rwxrwxrwx 1 storagewars storagewars 24576 Feb 11 20:37 pieceinfo.db
-rwxrwxrwx 1 storagewars storagewars 24576 Feb 11 20:37 piece_spaced_used.db
-rw-r--r-- 1 storagewars storagewars 24576 Feb 11 20:37 pricing.db
-rwxrwxrwx 1 storagewars storagewars 24576 Feb 11 20:37 reputation.db
-rw-r--r-- 1 storagewars storagewars 32768 Feb 11 20:37 satellites.db
-rwxrwxrwx 1 storagewars storagewars 24576 Feb 11 20:37 secret.db
-rw-r--r-- 1 storagewars storagewars 24576 Feb 11 20:37 storage_usage.db
drwxrwxrwx 2 storagewars storagewars 36864 May  6  2024 temp
drwxrwxrwx 6 root        root         4096 Apr 19  2024 trash
-rwxrwxrwx 1 storagewars storagewars 20480 Feb 11 20:37 used_serial.db
-rwxrwxrwx 1 storagewars storagewars 24576 Feb 11 20:37 used_space_per_prefix.db
root@raspberrypi:/#

update

2025-02-11T20:55:58Z    INFO    failed to sufficiently increase send buffer size (was: 208 kiB, wanted: 2048 kiB, got: 416 kiB). See https://github.com/quic-go/quic-go/wiki/UDP-Buffer-Sizes for details.      {"Process": "storagenode"}
2025-02-11T20:55:58Z    ERROR   services        unexpected shutdown of a runner {"Process": "storagenode", "name": "piecestore:monitor", "error": "piecestore monitor: error verifying location and/or readability of storage directory: open config/storage/storage-dir-verification: no such file or directory", "errorVerbose": "piecestore monitor: error verifying location and/or readability of storage directory: open config/storage/storage-dir-verification: no such file or directory\n\tstorj.io/storj/storagenode/monitor.(*Service).verifyStorageDir:161\n\tstorj.io/common/sync2.(*Cycle).Run:102\n\tstorj.io/storj/storagenode/monitor.(*Service).Run.func1:109\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:78"}
2025-02-11T20:55:58Z    ERROR   nodestats:cache Get pricing-model/join date failed      {"Process": "storagenode", "error": "context canceled"}
2025-02-11T20:55:58Z    ERROR   version failed to get process version info      {"Process": "storagenode", "error": "version checker client: Get \"https://version.storj.io\": context canceled", "errorVerbose": "version checker client: Get \"https://version.storj.io\": context canceled\n\tstorj.io/storj/private/version/checker.(*Client).All:68\n\tstorj.io/storj/private/version/checker.(*Client).Process:89\n\tstorj.io/storj/private/version/checker.(*Service).checkVersion:104\n\tstorj.io/storj/private/version/checker.(*Service).CheckVersion:78\n\tstorj.io/storj/storagenode/version.(*Chore).checkVersion:115\n\tstorj.io/storj/storagenode/version.(*Chore).RunOnce:71\n\tstorj.io/common/sync2.(*Cycle).Run:102\n\tstorj.io/storj/storagenode/version.(*Chore).Run:64\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2.1:87\n\truntime/pprof.Do:51\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2:86\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:78"}
2025-02-11T20:55:58Z    ERROR   contact:service ping satellite failed   {"Process": "storagenode", "Satellite ID": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6", "attempts": 1, "error": "ping satellite: rpc: tcp connector failed: rpc: dial tcp: lookup ap1.storj.io: operation was canceled", "errorVerbose": "ping satellite: rpc: tcp connector failed: rpc: dial tcp: lookup ap1.storj.io: operation was canceled\n\tstorj.io/common/rpc.HybridConnector.DialContext.func1:190"}
2025-02-11T20:55:58Z    INFO    contact:service context cancelled       {"Process": "storagenode", "Satellite ID": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6"}
2025-02-11T20:55:58Z    ERROR   contact:service ping satellite failed   {"Process": "storagenode", "Satellite ID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE", "attempts": 1, "error": "ping satellite: rpc: tcp connector failed: rpc: dial tcp: lookup saltlake.tardigrade.io: operation was canceled", "errorVerbose": "ping satellite: rpc: tcp connector failed: rpc: dial tcp: lookup saltlake.tardigrade.io: operation was canceled\n\tstorj.io/common/rpc.HybridConnector.DialContext.func1:190"}
2025-02-11T20:55:58Z    INFO    contact:service context cancelled       {"Process": "storagenode", "Satellite ID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE"}
2025-02-11T20:55:58Z    ERROR   contact:service ping satellite failed   {"Process": "storagenode", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "attempts": 1, "error": "ping satellite: rpc: tcp connector failed: rpc: dial tcp: lookup us1.storj.io: operation was canceled", "errorVerbose": "ping satellite: rpc: tcp connector failed: rpc: dial tcp: lookup us1.storj.io: operation was canceled\n\tstorj.io/common/rpc.HybridConnector.DialContext.func1:190"}
2025-02-11T20:55:58Z    INFO    contact:service context cancelled       {"Process": "storagenode", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S"}
2025-02-11T20:55:58Z    ERROR   contact:service ping satellite failed   {"Process": "storagenode", "Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "attempts": 1, "error": "ping satellite: rpc: tcp connector failed: rpc: dial tcp: lookup eu1.storj.io: operation was canceled", "errorVerbose": "ping satellite: rpc: tcp connector failed: rpc: dial tcp: lookup eu1.storj.io: operation was canceled\n\tstorj.io/common/rpc.HybridConnector.DialContext.func1:190"}
2025-02-11T20:55:58Z    INFO    contact:service context cancelled       {"Process": "storagenode", "Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs"}
2025-02-11T20:55:58Z    INFO    piecemigrate:chore      all enqueued for migration; will sleep before next pooling      {"Process": "storagenode", "active": {}, "interval": "10m0s"}
2025-02-11T20:55:58Z    ERROR   piecestore:cache        error during init space usage db:       {"Process": "storagenode", "error": "piece space used: context canceled", "errorVerbose": "piece space used: context canceled\n\tstorj.io/storj/storagenode/storagenodedb.(*pieceSpaceUsedDB).Init:55\n\tstorj.io/storj/storagenode/pieces.(*CacheService).Run:65\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2.1:87\n\truntime/pprof.Do:51\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2:86\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:78"}
2025-02-11T20:55:58Z    INFO    lazyfilewalker.trash-cleanup-filewalker subprocess exited with status   {"Process": "storagenode", "satelliteID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "status": -1, "error": "signal: killed"}
2025-02-11T20:55:58Z    ERROR   pieces:trash    emptying trash failed   {"Process": "storagenode", "error": "pieces error: lazyfilewalker: signal: killed", "errorVerbose": "pieces error: lazyfilewalker: signal: killed\n\tstorj.io/storj/storagenode/pieces/lazyfilewalker.(*process).run:85\n\tstorj.io/storj/storagenode/pieces/lazyfilewalker.(*Supervisor).WalkCleanupTrash:196\n\tstorj.io/storj/storagenode/pieces.(*Store).EmptyTrash:486\n\tstorj.io/storj/storagenode/pieces.(*TrashChore).Run.func1.1:86\n\tstorj.io/common/sync2.(*Workplace).Start.func1:89"}
2025-02-11T20:55:58Z    ERROR   failure during run      {"Process": "storagenode", "error": "piecestore monitor: error verifying location and/or readability of storage directory: open config/storage/storage-dir-verification: no such file or directory", "errorVerbose": "piecestore monitor: error verifying location and/or readability of storage directory: open config/storage/storage-dir-verification: no such file or directory\n\tstorj.io/storj/storagenode/monitor.(*Service).verifyStorageDir:161\n\tstorj.io/common/sync2.(*Cycle).Run:102\n\tstorj.io/storj/storagenode/monitor.(*Service).Run.func1:109\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:78"}
Error: piecestore monitor: error verifying location and/or readability of storage directory: open config/storage/storage-dir-verification: no such file or directory
2025-02-11 20:55:58,576 INFO exited: storagenode (exit status 1; not expected)
2025-02-11 20:55:59,583 INFO spawned: 'storagenode' with pid 47
2025-02-11 20:55:59,585 WARN received SIGQUIT indicating exit request
2025-02-11 20:55:59,586 INFO waiting for storagenode, processes-exit-eventlistener, storagenode-updater to die
2025-02-11T20:55:59Z    INFO    Got a signal from the OS: "terminated"  {"Process": "storagenode-updater"}
2025-02-11 20:55:59,593 INFO stopped: storagenode-updater (exit status 0)
2025-02-11 20:55:59,596 INFO stopped: storagenode (terminated by SIGTERM)
2025-02-11 20:55:59,597 INFO stopped: processes-exit-eventlistener (terminated by SIGTERM)
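
Side note on the "failed to sufficiently increase send buffer size" line above: it is informational only, but it can be silenced by raising the kernel's UDP buffer limits as described on the linked quic-go wiki page. A hedged sketch (the values are the ones commonly suggested there, not a Storj requirement, and the file name is arbitrary):

```
# Raise the UDP receive/send buffer limits used by QUIC (effective immediately)
sudo sysctl -w net.core.rmem_max=7500000
sudo sysctl -w net.core.wmem_max=7500000

# Persist the setting across reboots
printf 'net.core.rmem_max=7500000\nnet.core.wmem_max=7500000\n' | sudo tee /etc/sysctl.d/99-udp-buffers.conf
```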

update3

2025-02-11T21:05:01Z    INFO    pieces  used-space-filewalker started   {"Process": "storagenode", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S"}
2025-02-11T21:05:01Z    INFO    lazyfilewalker.used-space-filewalker    starting subprocess     {"Process": "storagenode", "satelliteID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S"}
2025-02-11T21:05:01Z    ERROR   lazyfilewalker.used-space-filewalker    failed to start subprocess      {"Process": "storagenode", "satelliteID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "error": "context canceled"}
2025-02-11T21:05:01Z    ERROR   pieces  used-space-filewalker failed    {"Process": "storagenode", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Lazy File Walker": true, "error": "lazyfilewalker: context canceled", "errorVerbose": "lazyfilewalker: context canceled\n\tstorj.io/storj/storagenode/pieces/lazyfilewalker.(*process).run:73\n\tstorj.io/storj/storagenode/pieces/lazyfilewalker.(*Supervisor).WalkAndComputeSpaceUsedBySatellite:134\n\tstorj.io/storj/storagenode/pieces.(*Store).WalkAndComputeSpaceUsedBySatellite:774\n\tstorj.io/storj/storagenode/pieces.(*CacheService).Run.func1:86\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:78"}
2025-02-11T21:05:01Z    ERROR   pieces  used-space-filewalker failed    {"Process": "storagenode", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Lazy File Walker": false, "error": "filewalker: used_space_per_prefix_db: context canceled", "errorVerbose": "filewalker: used_space_per_prefix_db: context canceled\n\tstorj.io/storj/storagenode/storagenodedb.(*usedSpacePerPrefixDB).Get:81\n\tstorj.io/storj/storagenode/pieces.(*FileWalker).WalkAndComputeSpaceUsedBySatelliteWithWalkFunc:96\n\tstorj.io/storj/storagenode/pieces.(*FileWalker).WalkAndComputeSpaceUsedBySatellite:83\n\tstorj.io/storj/storagenode/pieces.(*Store).WalkAndComputeSpaceUsedBySatellite:783\n\tstorj.io/storj/storagenode/pieces.(*CacheService).Run.func1:86\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:78"}
2025-02-11T21:05:01Z    ERROR   piecestore:cache        encountered error while computing space used by satellite       {"Process": "storagenode", "error": "filewalker: used_space_per_prefix_db: context canceled", "errorVerbose": "filewalker: used_space_per_prefix_db: context canceled\n\tstorj.io/storj/storagenode/storagenodedb.(*usedSpacePerPrefixDB).Get:81\n\tstorj.io/storj/storagenode/pieces.(*FileWalker).WalkAndComputeSpaceUsedBySatelliteWithWalkFunc:96\n\tstorj.io/storj/storagenode/pieces.(*FileWalker).WalkAndComputeSpaceUsedBySatellite:83\n\tstorj.io/storj/storagenode/pieces.(*Store).WalkAndComputeSpaceUsedBySatellite:783\n\tstorj.io/storj/storagenode/pieces.(*CacheService).Run.func1:86\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:78", "SatelliteID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S"}
2025-02-11T21:05:01Z    ERROR   piecestore:cache        error getting current used space for trash:     {"Process": "storagenode", "error": "filestore error: failed to walk trash namespace af2c42003efc826ab4361f73f9d890942146fe0ebe806786f8e7190800000000: context canceled", "errorVerbose": "filestore error: failed to walk trash namespace af2c42003efc826ab4361f73f9d890942146fe0ebe806786f8e7190800000000: context canceled\n\tstorj.io/storj/storagenode/blobstore/filestore.(*blobStore).SpaceUsedForTrash:302\n\tstorj.io/storj/storagenode/pieces.(*CacheService).Run.func1:105\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:78"}
2025-02-11T21:05:01Z    ERROR   failure during run      {"Process": "storagenode", "error": "piecestore monitor: error verifying location and/or readability of storage directory: open config/storage/storage-dir-verification: no such file or directory", "errorVerbose": "piecestore monitor: error verifying location and/or readability of storage directory: open config/storage/storage-dir-verification: no such file or directory\n\tstorj.io/storj/storagenode/monitor.(*Service).verifyStorageDir:161\n\tstorj.io/common/sync2.(*Cycle).Run:102\n\tstorj.io/storj/storagenode/monitor.(*Service).Run.func1:109\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:78"}
Error: piecestore monitor: error verifying location and/or readability of storage directory: open config/storage/storage-dir-verification: no such file or directory

Why not user:group?

Like this? chown -R storagewars:storagewars /home/storagewars/Hds/HD1/storage

I can't change the user :face_with_raised_eyebrow:

root@raspberrypi:/# cd home/storagewars/Hds/HD1/storage/trash
root@raspberrypi:/home/storagewars/Hds/HD1/storage/trash# ls -lh
total 16K
drwxrwxrwx 5 storagewars storagewars 4.0K May 20  2024 pmw6tvzmf2jv6giyybmmvl4o2ahqlaldsaeha4yx74n5aaaaaaaa
drwxrwxrwx 4 root        root        4.0K May 18  2024 qstuylguhrn2ozjv4h2c6xpxykd622gtgurhql2k7k75wqaaaaaa
drwxrwxrwx 5 root        root        4.0K May 20  2024 ukfu6bhbboxilvt7jrwlqk7y2tapb5d2r2tsmj2sjxvw5qaaaaaa
drwxrwxrwx 4 storagewars storagewars 4.0K May 19  2024 v4weeab67sbgvnbwd5z7tweqsqqun7qox2agpbxy44mqqaaaaaaa
root@raspberrypi:/home/storagewars/Hds/HD1/storage/trash# cd ..
root@raspberrypi:/home/storagewars/Hds/HD1/storage# cd blobs
root@raspberrypi:/home/storagewars/Hds/HD1/storage/blobs# ls -lh
total 80K
drwx------ 1026 root root 20K Feb 18  2024 pmw6tvzmf2jv6giyybmmvl4o2ahqlaldsaeha4yx74n5aaaaaaaa
drwx------ 1026 root root 20K Sep  5  2023 qstuylguhrn2ozjv4h2c6xpxykd622gtgurhql2k7k75wqaaaaaa
drwx------ 1026 root root 20K Aug 30  2023 ukfu6bhbboxilvt7jrwlqk7y2tapb5d2r2tsmj2sjxvw5qaaaaaa
drwxrwxrwx 1026 root root 20K Aug 31  2023 v4weeab67sbgvnbwd5z7tweqsqqun7qox2agpbxy44mqqaaaaaaa
root@raspberrypi:/home/storagewars/Hds/HD1/storage/blobs#
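
Judging by these listings, blobs and trash are still owned by root, and the blobs subfolders are drwx------, so only root can enter them. A hedged sketch to fix just those two trees, assuming the container should run as storagewars (note: blobs can hold millions of small files, so a recursive chown on a Raspberry Pi can take hours; let it finish rather than assuming it failed):

```
# Take ownership of the two remaining root-owned trees
sudo chown -R storagewars:storagewars \
    /home/storagewars/Hds/HD1/storage/blobs \
    /home/storagewars/Hds/HD1/storage/trash

# Ensure directories are traversable and files readable/writable for the owner
sudo chmod -R u+rwX /home/storagewars/Hds/HD1/storage/blobs /home/storagewars/Hds/HD1/storage/trash
```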

Do you think that removing the root-owned files could bring the node back online?
