After reboot: Failed to add bandwidth usage

Hello,

Last night my host machine shut down 3 of my 6 nodes due to a RAM issue.

Now I've got the following situation:

#Node1 works perfectly

#Node2: see Node 3

#Node3 seems to work, but Kuma says it doesn't

[EDIT: Nodes 2 and 3 have the same errors]

First, have a look at #Node3.

I have to mention that before the shutdown it worked like a charm, super fast and without errors.

When I open the local dashboard at https://[IP]:14002, the site takes about 40-60 s to load all the data, but it says everything is OK:

But when I open http://[IP]:28967, I get this:
image

Looking into the logs, I see the following:

2023-09-06T10:54:22Z    ERROR   piecestore      failed to add bandwidth usage   {"process": "storagenode", "error": "bandwidthdb: database is locked", "errorVerbose": "bandwidthdb: database is locked\n\tstorj.io/storj/storagenode/storagenodedb.(*bandwidthDB).Add:60\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).beginSaveOrder.func1:882\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).Upload:534\n\tstorj.io/common/pb.DRPCPiecestoreDescription.Method.func1:243\n\tstorj.io/drpc/drpcmux.(*Mux).HandleRPC:33\n\tstorj.io/common/rpc/rpctracing.(*Handler).HandleRPC:61\n\tstorj.io/common/experiment.(*Handler).HandleRPC:42\n\tstorj.io/drpc/drpcserver.(*Server).handleRPC:124\n\tstorj.io/drpc/drpcserver.(*Server).ServeOne:66\n\tstorj.io/drpc/drpcserver.(*Server).Serve.func2:114\n\tstorj.io/drpc/drpcctx.(*Tracker).track:35"}

I've already restarted the Docker container and the whole VM, but nothing helps.

Can you please help me?

This means the databases are fragmented, so access is slow.

To quickly bring the node online (see the sketch below):
stop the node,
move all DBs to a backup folder,
start the node,
stop the node,
and move all except the locked one back in place. Then start the node.
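
For example, roughly like this on a Docker node. This is a minimal sketch: the container name "storagenode", the path /mnt/storj/storage and the assumption that bandwidth.db is the locked one are placeholders for your own setup.

docker stop -t 300 storagenode
mkdir -p /mnt/storj/storage/db-backup
mv /mnt/storj/storage/*.db /mnt/storj/storage/db-backup/   # move all DBs to the backup folder
docker start storagenode                                   # the node recreates fresh, empty DBs
docker stop -t 300 storagenode
cd /mnt/storj/storage/db-backup
for db in *.db; do                                         # move everything except the locked one back
  [ "$db" = "bandwidth.db" ] || mv -f "$db" ../
done
docker start storagenode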

Consider moving the DBs to an SSD, and defragmenting the drive if NTFS is used.

That's normal, I think.


Please tell us about your setup and hardware :nerd_face:

Okay, I tried it. The dashboard loaded much faster for a short time, but the database-locked error is still present:

2023-09-06T13:13:25Z    ERROR   piecestore      failed to add bandwidth usage   {"process": "storagenode", "error": "bandwidthdb: database is locked", "errorVerbose": "bandwidthdb: database is locked\n\tstorj.io/storj/storagenode/storagenodedb.(*bandwidthDB).Add:60\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).beginSaveOrder.func1:882\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).Upload:516\n\tstorj.io/common/pb.DRPCPiecestoreDescription.Method.func1:243\n\tstorj.io/drpc/drpcmux.(*Mux).HandleRPC:33\n\tstorj.io/common/rpc/rpctracing.(*Handler).HandleRPC:61\n\tstorj.io/common/experiment.(*Handler).HandleRPC:42\n\tstorj.io/drpc/drpcserver.(*Server).handleRPC:124\n\tstorj.io/drpc/drpcserver.(*Server).ServeOne:66\n\tstorj.io/drpc/drpcserver.(*Server).Serve.func2:114\n\tstorj.io/drpc/drpcctx.(*Tracker).track:35"}

Any help please?

Okay, now I found the following:

After removing and redeploying the storagenode Docker container, the following shows up in the logs:

2023-09-06 13:35:17,609 INFO exited: storagenode (exit status 1; not expected)
2023-09-06 13:35:18,611 INFO spawned: 'storagenode' with pid 39
2023-09-06 13:35:18,612 WARN received SIGQUIT indicating exit request
2023-09-06 13:35:18,613 INFO waiting for storagenode, processes-exit-eventlistener, storagenode-updater to die
2023-09-06T13:35:18Z    INFO    Got a signal from the OS: "terminated"  {"Process": "storagenode-updater"}
2023-09-06 13:35:18,614 INFO stopped: storagenode-updater (exit status 0)
Error: Error during preflight check for storagenode databases: preflight: database "bandwidth": expected schema does not match actual:   &dbschema.Schema{
-       Tables: []*dbschema.Table{
-               (
-                       s"""
-                       Name: bandwidth_usage
-                       Columns:
-                               Name: action
-                               Type: INTEGER
-                               Nullable: false
-                               Default: ""
-                               Reference: nil
-                               Name: amount
-                               Type: BIGINT
-                               Nullable: false
-                               Default: ""
-                               Reference: nil
-                               Name: created_at
-                               Type: TIMESTAMP
-                               Nullable: false
-                       ... // 12 elided lines
-                       s"""
-               ),
-               (
-                       s"""
-                       Name: bandwidth_usage_rollups
-                       Columns:
-                               Name: action
-                               Type: INTEGER
-                               Nullable: false
-                               Default: ""
-                               Reference: nil
-                               Name: amount
-                               Type: BIGINT
-                               Nullable: false
-                               Default: ""
-                               Reference: nil
-                               Name: interval_start
-                               Type: TIMESTAMP
-                               Nullable: false
-                       ... // 12 elided lines
-                       s"""
-               ),
-       },
+       Tables: nil,
-       Indexes: []*dbschema.Index{
-               s`Index<Table: bandwidth_usage, Name: idx_bandwidth_usage_created, Columns: created_at, Unique: false, Partial: "">`,
-               s`Index<Table: bandwidth_usage, Name: idx_bandwidth_usage_satellite, Columns: satellite_id, Unique: false, Partial: "">`,
-       },
+       Indexes:   nil,
        Sequences: nil,
  }

        storj.io/storj/storagenode/storagenodedb.(*DB).preflight:429
        storj.io/storj/storagenode/storagenodedb.(*DB).Preflight:376
        main.cmdRun:110
        main.newRunCmd.func1:32
        storj.io/private/process.cleanup.func1.4:399
        storj.io/private/process.cleanup.func1:417
        github.com/spf13/cobra.(*Command).execute:852
        github.com/spf13/cobra.(*Command).ExecuteC:960
        github.com/spf13/cobra.(*Command).Execute:897
        storj.io/private/process.ExecWithCustomOptions:113
        main.main:30
        runtime.main:250
2023-09-06 13:35:20,149 INFO stopped: storagenode (exit status 1)
2023-09-06 13:35:20,150 INFO stopped: processes-exit-eventlistener (terminated by SIGTERM)

I tried to repair the "bandwidth.db" database following exactly this how-to: Click
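
(For reference, that guide essentially boils down to the standard SQLite check/dump/rebuild flow; a rough sketch with the node stopped, where the DB path is my assumption and not taken verbatim from the guide:)

cd /mnt/storj/storage                                   # assumed DB location
sqlite3 bandwidth.db "PRAGMA integrity_check;"          # report what is broken
sqlite3 bandwidth.db ".dump" > bandwidth.sql            # dump everything still readable
mv bandwidth.db bandwidth.db.bak                        # keep the broken copy aside
sqlite3 bandwidth.db < bandwidth.sql                    # rebuild a fresh DB from the dump
# (the dump of a damaged DB may end with ROLLBACK; remove it before reloading)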

Now this is the actual log output:

2023-09-06T13:48:43Z    ERROR   contact:service ping satellite failed   {"process": "storagenode", "Satellite ID": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6", "attempts": 5, "error": "ping satellite: check-in ratelimit: node rate limited by id", "errorVerbose": "ping satellite: check-in ratelimit: node rate limited by id\n\tstorj.io/storj/storagenode/contact.(*Service).pingSatelliteOnce:203\n\tstorj.io/storj/storagenode/contact.(*Service).pingSatellite:157\n\tstorj.io/storj/storagenode/contact.(*Chore).updateCycles.func1:87\n\tstorj.io/common/sync2.(*Cycle).Run:99\n\tstorj.io/common/sync2.(*Cycle).Start.func1:77\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:75"}
2023-09-06T13:48:56Z    ERROR   contact:service ping satellite failed   {"process": "storagenode", "Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "attempts": 6, "error": "ping satellite: check-in ratelimit: node rate limited by id", "errorVerbose": "ping satellite: check-in ratelimit: node rate limited by id\n\tstorj.io/storj/storagenode/contact.(*Service).pingSatelliteOnce:203\n\tstorj.io/storj/storagenode/contact.(*Service).pingSatellite:157\n\tstorj.io/storj/storagenode/contact.(*Chore).updateCycles.func1:87\n\tstorj.io/common/sync2.(*Cycle).Run:99\n\tstorj.io/common/sync2.(*Cycle).Start.func1:77\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:75"}
2023-09-06T13:48:56Z    ERROR   contact:service ping satellite failed   {"process": "storagenode", "Satellite ID": "12rfG3sh9NCWiX3ivPjq2HtdLmbqCrvHVEzJubnzFzosMuawymB", "attempts": 6, "error": "ping satellite: check-in ratelimit: node rate limited by id", "errorVerbose": "ping satellite: check-in ratelimit: node rate limited by id\n\tstorj.io/storj/storagenode/contact.(*Service).pingSatelliteOnce:203\n\tstorj.io/storj/storagenode/contact.(*Service).pingSatellite:157\n\tstorj.io/storj/storagenode/contact.(*Chore).updateCycles.func1:87\n\tstorj.io/common/sync2.(*Cycle).Run:99\n\tstorj.io/common/sync2.(*Cycle).Start.func1:77\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:75"}
2023-09-06T13:48:57Z    ERROR   contact:service ping satellite failed   {"process": "storagenode", "Satellite ID": "12tRQrMTWUWwzwGh18i7Fqs67kmdhH9t6aToeiwbo5mfS2rUmo", "attempts": 6, "error": "ping satellite: check-in ratelimit: node rate limited by id", "errorVerbose": "ping satellite: check-in ratelimit: node rate limited by id\n\tstorj.io/storj/storagenode/contact.(*Service).pingSatelliteOnce:203\n\tstorj.io/storj/storagenode/contact.(*Service).pingSatellite:157\n\tstorj.io/storj/storagenode/contact.(*Chore).updateCycles.func1:87\n\tstorj.io/common/sync2.(*Cycle).Run:99\n\tstorj.io/common/sync2.(*Cycle).Start.func1:77\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:75"}
2023-09-06T13:48:58Z    ERROR   contact:service ping satellite failed   {"process": "storagenode", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "attempts": 6, "error": "ping satellite: check-in ratelimit: node rate limited by id", "errorVerbose": "ping satellite: check-in ratelimit: node rate limited by id\n\tstorj.io/storj/storagenode/contact.(*Service).pingSatelliteOnce:203\n\tstorj.io/storj/storagenode/contact.(*Service).pingSatellite:157\n\tstorj.io/storj/storagenode/contact.(*Chore).updateCycles.func1:87\n\tstorj.io/common/sync2.(*Cycle).Run:99\n\tstorj.io/common/sync2.(*Cycle).Start.func1:77\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:75"}
2023-09-06T13:48:59Z    ERROR   contact:service ping satellite failed   {"process": "storagenode", "Satellite ID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE", "attempts": 6, "error": "ping satellite: check-in ratelimit: node rate limited by id", "errorVerbose": "ping satellite: check-in ratelimit: node rate limited by id\n\tstorj.io/storj/storagenode/contact.(*Service).pingSatelliteOnce:203\n\tstorj.io/storj/storagenode/contact.(*Service).pingSatellite:157\n\tstorj.io/storj/storagenode/contact.(*Chore).updateCycles.func1:87\n\tstorj.io/common/sync2.(*Cycle).Run:99\n\tstorj.io/common/sync2.(*Cycle).Start.func1:77\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:75"}
2023-09-06T13:49:00Z    ERROR   contact:service ping satellite failed   {"process": "storagenode", "Satellite ID": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6", "attempts": 6, "error": "ping satellite: check-in ratelimit: node rate limited by id", "errorVerbose": "ping satellite: check-in ratelimit: node rate limited by id\n\tstorj.io/storj/storagenode/contact.(*Service).pingSatelliteOnce:203\n\tstorj.io/storj/storagenode/contact.(*Service).pingSatellite:157\n\tstorj.io/storj/storagenode/contact.(*Chore).updateCycles.func1:87\n\tstorj.io/common/sync2.(*Cycle).Run:99\n\tstorj.io/common/sync2.(*Cycle).Start.func1:77\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:75"}
2023-09-06T13:49:28Z    ERROR   contact:service ping satellite failed   {"process": "storagenode", "Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "attempts": 7, "error": "ping satellite: check-in ratelimit: node rate limited by id", "errorVerbose": "ping satellite: check-in ratelimit: node rate limited by id\n\tstorj.io/storj/storagenode/contact.(*Service).pingSatelliteOnce:203\n\tstorj.io/storj/storagenode/contact.(*Service).pingSatellite:157\n\tstorj.io/storj/storagenode/contact.(*Chore).updateCycles.func1:87\n\tstorj.io/common/sync2.(*Cycle).Run:99\n\tstorj.io/common/sync2.(*Cycle).Start.func1:77\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:75"}
2023-09-06T13:49:28Z    ERROR   contact:service ping satellite failed   {"process": "storagenode", "Satellite ID": "12rfG3sh9NCWiX3ivPjq2HtdLmbqCrvHVEzJubnzFzosMuawymB", "attempts": 7, "error": "ping satellite: check-in ratelimit: node rate limited by id", "errorVerbose": "ping satellite: check-in ratelimit: node rate limited by id\n\tstorj.io/storj/storagenode/contact.(*Service).pingSatelliteOnce:203\n\tstorj.io/storj/storagenode/contact.(*Service).pingSatellite:157\n\tstorj.io/storj/storagenode/contact.(*Chore).updateCycles.func1:87\n\tstorj.io/common/sync2.(*Cycle).Run:99\n\tstorj.io/common/sync2.(*Cycle).Start.func1:77\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:75"}
2023-09-06T13:49:30Z    ERROR   contact:service ping satellite failed   {"process": "storagenode", "Satellite ID": "12tRQrMTWUWwzwGh18i7Fqs67kmdhH9t6aToeiwbo5mfS2rUmo", "attempts": 7, "error": "ping satellite: check-in ratelimit: node rate limited by id", "errorVerbose": "ping satellite: check-in ratelimit: node rate limited by id\n\tstorj.io/storj/storagenode/contact.(*Service).pingSatelliteOnce:203\n\tstorj.io/storj/storagenode/contact.(*Service).pingSatellite:157\n\tstorj.io/storj/storagenode/contact.(*Chore).updateCycles.func1:87\n\tstorj.io/common/sync2.(*Cycle).Run:99\n\tstorj.io/common/sync2.(*Cycle).Start.func1:77\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:75"}
2023-09-06T13:49:31Z    ERROR   contact:service ping satellite failed   {"process": "storagenode", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "attempts": 7, "error": "ping satellite: check-in ratelimit: node rate limited by id", "errorVerbose": "ping satellite: check-in ratelimit: node rate limited by id\n\tstorj.io/storj/storagenode/contact.(*Service).pingSatelliteOnce:203\n\tstorj.io/storj/storagenode/contact.(*Service).pingSatellite:157\n\tstorj.io/storj/storagenode/contact.(*Chore).updateCycles.func1:87\n\tstorj.io/common/sync2.(*Cycle).Run:99\n\tstorj.io/common/sync2.(*Cycle).Start.func1:77\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:75"}
2023-09-06T13:49:31Z    ERROR   contact:service ping satellite failed   {"process": "storagenode", "Satellite ID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE", "attempts": 7, "error": "ping satellite: check-in ratelimit: node rate limited by id", "errorVerbose": "ping satellite: check-in ratelimit: node rate limited by id\n\tstorj.io/storj/storagenode/contact.(*Service).pingSatelliteOnce:203\n\tstorj.io/storj/storagenode/contact.(*Service).pingSatellite:157\n\tstorj.io/storj/storagenode/contact.(*Chore).updateCycles.func1:87\n\tstorj.io/common/sync2.(*Cycle).Run:99\n\tstorj.io/common/sync2.(*Cycle).Start.func1:77\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:75"}
2023-09-06T13:49:33Z    ERROR   contact:service ping satellite failed   {"process": "storagenode", "Satellite ID": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6", "attempts": 7, "error": "ping satellite: check-in ratelimit: node rate limited by id", "errorVerbose": "ping satellite: check-in ratelimit: node rate limited by id\n\tstorj.io/storj/storagenode/contact.(*Service).pingSatelliteOnce:203\n\tstorj.io/storj/storagenode/contact.(*Service).pingSatellite:157\n\tstorj.io/storj/storagenode/contact.(*Chore).updateCycles.func1:87\n\tstorj.io/common/sync2.(*Cycle).Run:99\n\tstorj.io/common/sync2.(*Cycle).Start.func1:77\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:75"}

And this happened:
image

I also tried this how-to: https://forum.storj.io/t/my-node-is-offline-for-2-hours/736

But my port is closed, even though no IP has changed and no configuration has changed. I know, because I saw the port was open for a short time; then it suddenly became unreachable and has stayed that way…

Other nodes on the same host with the same settings are working.

Now I've noticed this:

I've never done this; where can I change it?

[EDIT] I completely removed the Docker container and redeployed it; then it was back to port 28987, but it's still not working…

You did not copy the locked one back?

Yes, I did not copy that one back.

Try starting without the DBs: back them up and then delete the originals.
Check if it is online. You will lose the dashboard statistics.
Does it work?

You need to re-create this database:

After that you can fix the offline issue by checking your port forwarding rule and making sure you specified the correct address and port in the ADDRESS option.
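
For a Docker node the address is passed as the ADDRESS environment variable in the run command; a sketch along the lines of the standard setup, where every value is a placeholder:

docker run -d --restart unless-stopped --stop-timeout 300 \
    -p 28967:28967/tcp -p 28967:28967/udp \
    -p 127.0.0.1:14002:14002 \
    -e WALLET="0x0000000000000000000000000000000000000000" \
    -e EMAIL="you@example.com" \
    -e ADDRESS="your.public.hostname:28967" \
    -e STORAGE="1.6TB" \
    --mount type=bind,source=/path/to/identity,destination=/app/identity \
    --mount type=bind,source=/path/to/storage,destination=/app/config \
    --name storagenode storj/storagenode:latest

The ADDRESS must match the external port you forward from your router to the node (28967 here).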

That was the output of the integrity check:

./pieceinfo.db ok
./bandwidth.db ok
./heldamount.db ok
./info.db ok
./notifications.db ok
./orders.db ok
./storage_usage.db ok
./used_serial.db ok
./piece_expiration.db ok
./piece_spaced_used.db ok
./pricing.db ok
./reputation.db ok
./satellites.db ok
./secret.db ok

I assume that the "ok" at the end of each file means that all checks passed.
What now?

You need to re-create the database with the incorrect schema, not try to fix it.

I tried, but it didn't help either.

I'm setting up a whole new VM and moving the node there; maybe the crash broke something on my VM…

I’ll report back…

Okay, this is my current status:

It still says

After setting up a new VM, my port is open!

I followed the how-to above exactly:
#1 stop the node
#2 move all DBs to a backup folder and delete the DBs from the original folder
#3 start the node
#4 wait for startup
#5 stop the node
#6 copy all DBs from the backup folder back to the original folder, except "bandwidth.db"

#7 delete config.yaml and redeploy the node

Now I've got the following error:

2023-09-07 07:56:56,395 INFO RPC interface 'supervisor' initialized
2023-09-07 07:56:56,396 INFO supervisord started with pid 1
2023-09-07 07:56:57,398 INFO spawned: 'processes-exit-eventlistener' with pid 51
2023-09-07 07:56:57,400 INFO spawned: 'storagenode' with pid 52
2023-09-07 07:56:57,402 INFO spawned: 'storagenode-updater' with pid 53
2023-09-07T07:56:57Z    INFO    Configuration loaded    {"Process": "storagenode-updater", "Location": "/app/config/config.yaml"}
2023-09-07T07:56:57Z    INFO    Invalid configuration file key  {"Process": "storagenode-updater", "Key": "storage.allocated-disk-space"}
2023-09-07T07:56:57Z    INFO    Invalid configuration file key  {"Process": "storagenode-updater", "Key": "server.private-address"}
2023-09-07T07:56:57Z    INFO    Invalid configuration file key  {"Process": "storagenode-updater", "Key": "contact.external-address"}
2023-09-07T07:56:57Z    INFO    Invalid configuration file key  {"Process": "storagenode-updater", "Key": "healthcheck.enabled"}
2023-09-07T07:56:57Z    INFO    Invalid configuration file key  {"Process": "storagenode-updater", "Key": "console.address"}
2023-09-07T07:56:57Z    INFO    Invalid configuration file key  {"Process": "storagenode-updater", "Key": "healthcheck.details"}
2023-09-07T07:56:57Z    INFO    Invalid configuration file key  {"Process": "storagenode-updater", "Key": "operator.wallet"}
2023-09-07T07:56:57Z    INFO    Invalid configuration file key  {"Process": "storagenode-updater", "Key": "storage.allocated-bandwidth"}
2023-09-07T07:56:57Z    INFO    Invalid configuration file key  {"Process": "storagenode-updater", "Key": "server.address"}
2023-09-07T07:56:57Z    INFO    Invalid configuration file key  {"Process": "storagenode-updater", "Key": "operator.email"}
2023-09-07T07:56:57Z    INFO    Invalid configuration file key  {"Process": "storagenode-updater", "Key": "operator.wallet-features"}
2023-09-07T07:56:57Z    INFO    Anonymized tracing enabled      {"Process": "storagenode-updater"}
2023-09-07T07:56:57Z    INFO    Running on version      {"Process": "storagenode-updater", "Service": "storagenode-updater", "Version": "v1.86.1"}
2023-09-07T07:56:57Z    INFO    Downloading versions.   {"Process": "storagenode-updater", "Server Address": "https://version.storj.io"}
2023-09-07T07:56:57Z    INFO    Configuration loaded    {"process": "storagenode", "Location": "/app/config/config.yaml"}
2023-09-07T07:56:57Z    INFO    Anonymized tracing enabled      {"process": "storagenode"}
2023-09-07T07:56:57Z    INFO    Operator email  {"process": "storagenode", "Address": "c7h12@outlook.de"}
2023-09-07T07:56:57Z    INFO    Operator wallet {"process": "storagenode", "Address": "0x5819997dB0374942770F9aCaaC0BD2fBAaCE2017"}
2023-09-07T07:56:57Z    INFO    server  kernel support for server-side tcp fast open remains disabled.  {"process": "storagenode"}
2023-09-07T07:56:57Z    INFO    server  enable with: sysctl -w net.ipv4.tcp_fastopen=3  {"process": "storagenode"}
2023-09-07T07:56:57Z    INFO    Current binary version  {"Process": "storagenode-updater", "Service": "storagenode", "Version": "v1.86.1"}
2023-09-07T07:56:57Z    INFO    Version is up to date   {"Process": "storagenode-updater", "Service": "storagenode"}
2023-09-07T07:56:57Z    INFO    Current binary version  {"Process": "storagenode-updater", "Service": "storagenode-updater", "Version": "v1.86.1"}
2023-09-07T07:56:57Z    INFO    Version is up to date   {"Process": "storagenode-updater", "Service": "storagenode-updater"}
2023-09-07T07:56:58Z    INFO    Telemetry enabled       {"process": "storagenode", "instance ID": "12R1AAypbbFigFKweM7b77ycVhGExeWVQ65iNK7dgacCLJVTzZy"}
2023-09-07T07:56:58Z    INFO    Event collection enabled        {"process": "storagenode", "instance ID": "12R1AAypbbFigFKweM7b77ycVhGExeWVQ65iNK7dgacCLJVTzZy"}
2023-09-07 07:56:58,599 INFO success: processes-exit-eventlistener entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2023-09-07 07:56:58,600 INFO success: storagenode entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2023-09-07 07:56:58,600 INFO success: storagenode-updater entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2023-09-07T07:56:59Z    INFO    db.migration    Database Version        {"process": "storagenode", "version": 54}
2023-09-07T07:57:00Z    INFO    preflight:localtime     start checking local system clock with trusted satellites' system clock.        {"process": "storagenode"}
2023-09-07T07:57:00Z    INFO    preflight:localtime     local system clock is in sync with trusted satellites' system clock.    {"process": "storagenode"}
2023-09-07T07:57:00Z    INFO    bandwidth       Performing bandwidth usage rollups      {"process": "storagenode"}
2023-09-07T07:57:00Z    INFO    Node 12R1AAypbbFigFKweM7b77ycVhGExeWVQ65iNK7dgacCLJVTzZy started        {"process": "storagenode"}
2023-09-07T07:57:00Z    INFO    Public server started on [::]:28967     {"process": "storagenode"}
2023-09-07T07:57:00Z    INFO    Private server started on 127.0.0.1:7778        {"process": "storagenode"}
2023-09-07T07:57:00Z    INFO    failed to sufficiently increase receive buffer size (was: 208 kiB, wanted: 2048 kiB, got: 416 kiB). See https://github.com/quic-go/quic-go/wiki/UDP-Buffer-Sizes for details.   {"process": "storagenode"}
2023-09-07T07:57:00Z    INFO    trust   Scheduling next refresh {"process": "storagenode", "after": "5h43m6.808070169s"}
2023-09-07T07:57:00Z    INFO    pieces:trash    emptying trash started  {"process": "storagenode", "Satellite ID": "12rfG3sh9NCWiX3ivPjq2HtdLmbqCrvHVEzJubnzFzosMuawymB"}
2023-09-07T07:57:00Z    INFO    pieces:trash    emptying trash started  {"process": "storagenode", "Satellite ID": "12tRQrMTWUWwzwGh18i7Fqs67kmdhH9t6aToeiwbo5mfS2rUmo"}
2023-09-07T07:57:00Z    INFO    lazyfilewalker.used-space-filewalker    starting subprocess     {"process": "storagenode", "satelliteID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE"}
2023-09-07T07:57:00Z    INFO    pieces:trash    emptying trash started  {"process": "storagenode", "Satellite ID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE"}
2023-09-07T07:57:00Z    INFO    piecestore      download started        {"process": "storagenode", "Piece ID": "EIWLKE6W3OFR4K6ES2SR3DPJFCOQW64EYHY5Q6E2LHLQ7SXKJ6JA", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "GET", "Offset": 0, "Size": 181248, "Remote Address": "72.52.83.202:48714"}
2023-09-07T07:57:00Z    INFO    pieces:trash    emptying trash started  {"process": "storagenode", "Satellite ID": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6"}
2023-09-07T07:57:01Z    INFO    lazyfilewalker.used-space-filewalker    subprocess started      {"process": "storagenode", "satelliteID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE"}
2023-09-07T07:57:01Z    INFO    piecestore      upload started  {"process": "storagenode", "Piece ID": "3NGMJHYATYTWR3K625RV66RINFR7IYVSXKB23MEGWPHZZUS6GFLA", "Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Action": "PUT", "Available Space": 1600000000000, "Remote Address": "184.104.224.98:53934"}
2023-09-07T07:57:01Z    INFO    piecestore      upload started  {"process": "storagenode", "Piece ID": "2AEN4323CBF5H7CGLYACEARY6I72NQNEVOONIOAZB5RXRE7ZCWTQ", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "PUT", "Available Space": 1600000000000, "Remote Address": "5.161.143.41:19334"}
2023-09-07T07:57:01Z    INFO    piecestore      upload started  {"process": "storagenode", "Piece ID": "4H44ZQVQ3MIRP6XWPHZWEUPG5PLFGRGLMTBCAA5ECMYYJO6HHVVQ", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "PUT", "Available Space": 1600000000000, "Remote Address": "5.161.128.79:43058"}
2023-09-07T07:57:01Z    INFO    piecestore      upload started  {"process": "storagenode", "Piece ID": "S5DECIJ323FKE3QZ67KWFMQ57SY7TRUZ7ZP3I3MAJ3F4JJL6ZBSA", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "PUT", "Available Space": 1600000000000, "Remote Address": "5.161.128.79:42212"}
2023-09-07T07:57:01Z    INFO    piecestore      upload started  {"process": "storagenode", "Piece ID": "65ATVS5WXAMKLL7VN6ZR4UO6ZAOJ4GP7TD3LVKKYHXOWSU3BLIKA", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "PUT", "Available Space": 1600000000000, "Remote Address": "5.161.143.41:19520"}
2023-09-07T07:57:01Z    INFO    piecestore      upload started  {"process": "storagenode", "Piece ID": "DXIZ3IY2EFJZDMVUNTLQ2YR3INUFP6UQO6RL3XAFYGGK3UWGGYBQ", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "PUT", "Available Space": 1600000000000, "Remote Address": "5.161.128.79:42144"}
2023-09-07T07:57:02Z    INFO    piecestore      download canceled       {"process": "storagenode", "Piece ID": "EIWLKE6W3OFR4K6ES2SR3DPJFCOQW64EYHY5Q6E2LHLQ7SXKJ6JA", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "GET", "Offset": 0, "Size": 49152, "Remote Address": "72.52.83.202:48714"}
2023-09-07T07:57:02Z    INFO    piecestore      uploaded        {"process": "storagenode", "Piece ID": "S5DECIJ323FKE3QZ67KWFMQ57SY7TRUZ7ZP3I3MAJ3F4JJL6ZBSA", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "PUT", "Size": 36864, "Remote Address": "5.161.128.79:42212"}
2023-09-07T07:57:02Z    INFO    piecestore      upload canceled (race lost or node shutdown)    {"process": "storagenode", "Piece ID": "2AEN4323CBF5H7CGLYACEARY6I72NQNEVOONIOAZB5RXRE7ZCWTQ"}
2023-09-07T07:57:02Z    INFO    piecestore      upload started  {"process": "storagenode", "Piece ID": "WAF42MBHAFLH5KLRZ7VJXFW2NLPF3FFGX3RIN3IOG3V5ILPDPFSQ", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "PUT", "Available Space": 1599999523072, "Remote Address": "5.161.143.41:19334"}
2023-09-07T07:57:02Z    INFO    piecestore      upload canceled (race lost or node shutdown)    {"process": "storagenode", "Piece ID": "65ATVS5WXAMKLL7VN6ZR4UO6ZAOJ4GP7TD3LVKKYHXOWSU3BLIKA"}
2023-09-07T07:57:02Z    INFO    piecestore      upload started  {"process": "storagenode", "Piece ID": "G525IHBXMUTYFNRW67J4T2DH6CJY4KLY2ODVEB6PDRVB6DOCWJNA", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "PUT", "Available Space": 1599999523072, "Remote Address": "5.161.143.41:19520"}
2023-09-07T07:57:02Z    INFO    piecestore      upload started  {"process": "storagenode", "Piece ID": "XNG7P5XJ7R7KC3PJ34EWSPWHZAOZPVJRJPSTGBNJTQBRUDTAZPVQ", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "PUT", "Available Space": 1599999518976, "Remote Address": "5.161.109.216:48930"}
2023-09-07T07:57:03Z    INFO    piecestore      upload canceled (race lost or node shutdown)    {"process": "storagenode", "Piece ID": "3NGMJHYATYTWR3K625RV66RINFR7IYVSXKB23MEGWPHZZUS6GFLA"}
2023-09-07T07:57:03Z    INFO    piecestore      upload started  {"process": "storagenode", "Piece ID": "EPA5JKN43ZDKTBWQ3TO3PAOXVLVN5QTDNLDKGVGZWHHZAMCEYBUA", "Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Action": "PUT", "Available Space": 1599999512832, "Remote Address": "184.104.224.98:53934"}
2023-09-07T07:57:03Z    INFO    piecestore      upload canceled (race lost or node shutdown)    {"process": "storagenode", "Piece ID": "G525IHBXMUTYFNRW67J4T2DH6CJY4KLY2ODVEB6PDRVB6DOCWJNA"}
2023-09-07T07:57:03Z    INFO    piecestore      upload canceled (race lost or node shutdown)    {"process": "storagenode", "Piece ID": "XNG7P5XJ7R7KC3PJ34EWSPWHZAOZPVJRJPSTGBNJTQBRUDTAZPVQ"}
2023-09-07T07:57:03Z    INFO    piecestore      upload started  {"process": "storagenode", "Piece ID": "SCLFMR5IYDCMNXHLLPXOJ3WJIHIUXKZ7GVUCJHF6XVMVWCJ3HJPA", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "PUT", "Available Space": 1599999512832, "Remote Address": "184.104.224.99:53662"}
2023-09-07T07:57:03Z    INFO    piecestore      upload started  {"process": "storagenode", "Piece ID": "BWIUOSMFYZ6SGW5YV7SHQD2EGZ5TSV5MUHOSZZQQQRBIIVAUYATQ", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "PUT", "Available Space": 1599999512832, "Remote Address": "5.161.143.41:19520"}
2023-09-07T07:57:03Z    INFO    piecestore      upload canceled (race lost or node shutdown)    {"process": "storagenode", "Piece ID": "WAF42MBHAFLH5KLRZ7VJXFW2NLPF3FFGX3RIN3IOG3V5ILPDPFSQ"}
2023-09-07T07:57:03Z    INFO    piecestore      upload started  {"process": "storagenode", "Piece ID": "5CTSUHTJHYGQGEPWTVMUCAJYI2LSHCQDMITTAL6ZALHL5QPOVTQQ", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "PUT", "Available Space": 1599999508224, "Remote Address": "5.161.143.41:19334"}
2023-09-07T07:57:03Z    INFO    piecestore      upload started  {"process": "storagenode", "Piece ID": "TRWXMEQRSKF6UJDHCITETHA5YLCMGBW4BQJWX6DY7R4PZ2DTBEHQ", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "PUT", "Available Space": 1599999508224, "Remote Address": "5.161.143.41:24500"}
2023-09-07T07:57:03Z    INFO    piecestore      uploaded        {"process": "storagenode", "Piece ID": "DXIZ3IY2EFJZDMVUNTLQ2YR3INUFP6UQO6RL3XAFYGGK3UWGGYBQ", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "PUT", "Size": 36864, "Remote Address": "5.161.128.79:42144"}
2023-09-07T07:57:04Z    INFO    piecestore      upload canceled (race lost or node shutdown)    {"process": "storagenode", "Piece ID": "BWIUOSMFYZ6SGW5YV7SHQD2EGZ5TSV5MUHOSZZQQQRBIIVAUYATQ"}
2023-09-07T07:57:04Z    INFO    piecestore      uploaded        {"process": "storagenode", "Piece ID": "5CTSUHTJHYGQGEPWTVMUCAJYI2LSHCQDMITTAL6ZALHL5QPOVTQQ", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "PUT", "Size": 4096, "Remote Address": "5.161.143.41:19334"}
2023-09-07T07:57:04Z    INFO    piecestore      uploaded        {"process": "storagenode", "Piece ID": "TRWXMEQRSKF6UJDHCITETHA5YLCMGBW4BQJWX6DY7R4PZ2DTBEHQ", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "PUT", "Size": 3840, "Remote Address": "5.161.143.41:24500"}
2023-09-07T07:57:04Z    INFO    piecestore      upload canceled (race lost or node shutdown)    {"process": "storagenode", "Piece ID": "EPA5JKN43ZDKTBWQ3TO3PAOXVLVN5QTDNLDKGVGZWHHZAMCEYBUA"}
2023-09-07T07:57:04Z    INFO    piecestore      uploaded        {"process": "storagenode", "Piece ID": "4H44ZQVQ3MIRP6XWPHZWEUPG5PLFGRGLMTBCAA5ECMYYJO6HHVVQ", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "PUT", "Size": 36096, "Remote Address": "5.161.128.79:43058"}
2023-09-07T07:57:04Z    INFO    lazyfilewalker.used-space-filewalker.subprocess Database started        {"process": "storagenode", "satelliteID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE", "process": "storagenode"}
2023-09-07T07:57:04Z    INFO    lazyfilewalker.used-space-filewalker.subprocess used-space-filewalker started   {"process": "storagenode", "satelliteID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE", "process": "storagenode"}
2023-09-07T07:57:05Z    INFO    piecestore      upload started  {"process": "storagenode", "Piece ID": "ELM3AYKOM4KBLPONTEKZIV2BYDFQHUMGCPW2B7GEYIY4JSLUSH3A", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "PUT", "Available Space": 1599999208448, "Remote Address": "72.52.83.202:51350"}
2023-09-07T07:57:05Z    INFO    piecestore      upload canceled (race lost or node shutdown)    {"process": "storagenode", "Piece ID": "ELM3AYKOM4KBLPONTEKZIV2BYDFQHUMGCPW2B7GEYIY4JSLUSH3A"}
2023-09-07T07:57:05Z    INFO    piecestore      upload started  {"process": "storagenode", "Piece ID": "BUZL6DBX7OE5I7HOCRPM6KHGKMYV7IJKTHZLL7E2R3AFYWD6WA3Q", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "PUT", "Available Space": 1599999206144, "Remote Address": "72.52.83.202:51350"}

Maybe this helps…

This is unlikely. The main point: you need to stop and remove the container, rename this database, move all remaining *.db files to some other folder, run the container with all your parameters, and check the logs to confirm that all databases are created, that the new version of the storagenode is downloaded and started, and that it finishes all DB migrations. After that, stop and remove the container again and move the databases back, replacing the new ones. As a result you would have one newly created database with a correct schema (only this one) plus all your previous databases, so the loss of stats would be minimal. Then run the node with all your parameters and check the logs.
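
For example, roughly like this (the container name and the DB path are placeholders; use your own full docker run command where indicated):

# 1. stop and remove the container
docker stop -t 300 storagenode && docker rm storagenode
# 2. rename the broken DB and move all remaining DBs aside
cd /mnt/storj/storage
mv bandwidth.db bandwidth.db.bad
mkdir -p db-backup && mv ./*.db db-backup/
# 3. run the container with all your usual parameters, then confirm in the logs
#    that all databases were created, the node started and the migrations finished:
docker logs storagenode 2>&1 | grep -iE "database|migration|error"
# 4. stop and remove the container again, then move the old DBs back with replace
docker stop -t 300 storagenode && docker rm storagenode
mv -f db-backup/*.db .
# 5. run the container again with all your usual parameters and check the logs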

And yes, you need to not only stop the node but also remove the container.

I do not see any message about created databases in your logs.
If this is a second run, I would assume it should be online, but perhaps you need to use How to remote access the web dashboard - Storj Docs, or try to open the dashboard on the node's host.
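
One common way from that guide is an SSH tunnel from your workstation to the node's host (user, host and the 14002 port are placeholders), after which the dashboard opens at http://localhost:14002 in your local browser:

ssh -L 14002:localhost:14002 user@node-host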


As you can see in my last post, I can reach the dashboard.
It is also reachable via PUBLICIP:28967:

What I tried now is what you mentioned:

And the node is back up and running:
image

In your how-to there is no mention of removing the container.

One remaining question:

I've got a lot of this in the logs now:

2023-09-07T08:18:46Z    INFO    piecestore      download canceled       {"process": "storagenode", "Piece ID": "CVR6CKSVCUJXNDNHETZSIQAQ4D2EKXUW23JJ2HK27WXLNYTX5UOA", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "GET", "Offset": 0, "Size": 0, "Remote Address": "72.52.83.202:5918"}
2023-09-07T08:18:46Z    INFO    piecestore      upload canceled (race lost or node shutdown)    {"process": "storagenode", "Piece ID": "X5OYXTQH6QW2TR2AXW2KYHFSBKUNMB6OKLVTJCUE3XS5IAGRQ6UA"}
2023-09-07T08:18:47Z    INFO    piecestore      upload canceled (race lost or node shutdown)    {"process": "storagenode", "Piece ID": "QCWHPWTR2PHGWT3AXDMEYUUQ2OOYYJ2HCTNECM43SCCDAYBDHAYA"}
2023-09-07T08:18:47Z    INFO    piecestore      upload canceled (race lost or node shutdown)    {"process": "storagenode", "Piece ID": "A3O2H4BGSKJZC3TVLXHCATGLTL4ZWJGYYZEXCUFIEPWAD6E2CSDA"}
2023-09-07T08:18:47Z    INFO    piecestore      upload canceled (race lost or node shutdown)    {"process": "storagenode", "Piece ID": "35UU76AWEEVUZZDIEDXXF6HYUCODQAPAS4C43ACB7CH2F2HBUBNQ"}

But what's maybe more important: why is AllHealthy still false?

Also, the response is status 503 and Uptime Kuma says the node is not reachable, but everything is working well, I think.

These are the usual race-condition errors; your node was not fast enough for that customer.

In short, your node has some access errors when this endpoint is called; you can check what they are in your logs at the time you refresh this URL.

You may try a different method to request your node's hostname and port; I do not know what Uptime Kuma uses as the default method (GET, etc.), so you may experiment here.
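
To see exactly what Uptime Kuma gets back, you can also query the same URL by hand and compare the status code with your node's logs at that moment (the address and the /api/sno/ path are assumptions; substitute whatever endpoint your monitor actually polls):

NODE="192.168.1.10:14002"            # placeholder: your node's dashboard address
curl -s -i "http://$NODE/"           # status code and headers, as a monitor would see them
curl -s "http://$NODE/api/sno/"      # dashboard API JSON (assumed path), for more detail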