…
An error occurred: Body is limited to 32000 characters; you entered 73611.
…
I’d like to say that I like you. You put so much effort into making it difficult to help you that I have lots of fun seeing the reactions on the forum. I literally just keep ordering popcorn and watching.
Just try to copy the error from the log instead. Only one line.
I do not know which line you need. This is the first line.
C:\Program Files\Storj\Storage Node\storagenode.log:38579:2024-07-16T11:49:20-04:00 ERROR orders cleaning DB archive
And there should also be an explanation of why it happened. So, from the error you may copy the part from the timestamp until the first “\n”.
From where do I get the timestamp? From what command??
From your log. You wanted to post an error from your log. Each log line starts with a timestamp. You need to copy that string from the beginning (where you see the date and time) until the first \n symbols and post it here.
P.S. Please stop trolling, otherwise I will stop trying to help you.
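If it helps, you can also pull just the error lines out of the log with a single PowerShell command instead of scrolling through the whole file (the path below is the default log location from your earlier post; adjust it if yours differs):

```
# show only the last few ERROR/FATAL lines from the storagenode log
Select-String -Path "C:\Program Files\Storj\Storage Node\storagenode.log" -Pattern "ERROR|FATAL" | Select-Object -Last 5
```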
This is from the date to the first “\n”.
I removed some personal information (email address and wallet address); I do not know if that will cause any issues with what was requested to be posted.
-------------------
2024-08-28T19:56:16-04:00 INFO Configuration loaded {"Location": "C:\\Program Files\\Storj\\Storage Node\\config.yaml"}
2024-08-28T19:56:16-04:00 INFO Anonymized tracing enabled
2024-08-28T19:56:16-04:00 INFO Operator email {"Address":
2024-08-28T19:56:16-04:00 INFO Operator wallet {"Address":
2024-08-28T19:56:57-04:00 INFO Stop/Shutdown request received.
2024-08-28T19:57:06-04:00 ERROR failure during run {"error": "Error opening database on storagenode: database: notifications opening file \"D:\\\\notifications.db\" failed: context canceled\n\tstorj.io/storj/storagenode/storagenodedb.(*DB).openDatabaseWithStat:407\n\tstorj.io/storj/storagenode/storagenodedb.(*DB).openDatabase:384\n\tstorj.io/storj/storagenode/storagenodedb.(*DB).openExistingDatabase:379\n\tstorj.io/storj/storagenode/storagenodedb.(*DB).openDatabases:354\n\tstorj.io/storj/storagenode/storagenodedb.OpenExisting:319\n\tmain.cmdRun:67\n\tmain.newRunCmd.func1:33\n\tstorj.io/common/process.cleanup.func1.4:392\n\tstorj.io/common/process.cleanup.func1:410\n\tgithub.com/spf13/cobra.(*Command).execute:983\n\tgithub.com/spf13/cobra.(*Command).ExecuteC:1115\n\tgithub.com/spf13/cobra.(*Command).Execute:1039\n\tstorj.io/common/process.ExecWithCustomOptions:112\n\tstorj.io/common/process.ExecWithCustomConfigAndLogger:77\n\tstorj.io/common/process.ExecWithCustomConfig:72\n\tstorj.io/common/process.Exec:62\n\tmain.(*service).Execute.func1:107\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:78", "errorVerbose": "Error opening database on storagenode: database: notifications opening file \"D:\\\\notifications.db\" failed: context canceled\n\tstorj.io/storj/storagenode/storagenodedb.(*DB).openDatabaseWithStat:407\n\tstorj.io/storj/storagenode/storagenodedb.(*DB).openDatabase:384\n\tstorj.io/storj/storagenode/storagenodedb.(*DB).openExistingDatabase:379\n\tstorj.io/storj/storagenode/storagenodedb.(*DB).openDatabases:354\n\tstorj.io/storj/storagenode/storagenodedb.OpenExisting:319\n\tmain.cmdRun:67\n\tmain.newRunCmd.func1:33\n\tstorj.io/common/process.cleanup.func1.4:392\n\tstorj.io/common/process.cleanup.func1:410\n\tgithub.com/spf13/cobra.(*Command).execute:983\n\tgithub.com/spf13/cobra.(*Command).ExecuteC:1115\n\tgithub.com/spf13/cobra.(*Command).Execute:1039\n\tstorj.io/common/process.ExecWithCustomOptions:112\n\tstorj.io/common/process.ExecWithCustomConfigAndLogger:77\n\tstorj.io/common/process.ExecWithCustomConfig:72\n\tstorj.io/common/process.Exec:62\n\tmain.(*service).Execute.func1:107\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:78\n\tmain.cmdRun:69\n\tmain.newRunCmd.func1:33\n\tstorj.io/common/process.cleanup.func1.4:392\n\tstorj.io/common/process.cleanup.func1:410\n\tgithub.com/spf13/cobra.(*Command).execute:983\n\tgithub.com/spf13/cobra.(*Command).ExecuteC:1115\n\tgithub.com/spf13/cobra.(*Command).Execute:1039\n\tstorj.io/common/process.ExecWithCustomOptions:112\n\tstorj.io/common/process.ExecWithCustomConfigAndLogger:77\n\tstorj.io/common/process.ExecWithCustomConfig:72\n\tstorj.io/common/process.Exec:62\n\tmain.(*service).Execute.func1:107\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:78"}
2024-08-28T19:57:06-04:00 FATAL Unrecoverable error {"error": "Error opening database on storagenode: database: notifications opening file \"D:\\\\notifications.db\" failed: context canceled\n\tstorj.io/storj/storagenode/storagenodedb.(*DB).openDatabaseWithStat:407\n\tstorj.io/storj/storagenode/storagenodedb.(*DB).openDatabase:384\n\tstorj.io/storj/storagenode/storagenodedb.(*DB).openExistingDatabase:379\n\tstorj.io/storj/storagenode/storagenodedb.(*DB).openDatabases:354\n\tstorj.io/storj/storagenode/storagenodedb.OpenExisting:319\n\tmain.cmdRun:67\n\tmain.newRunCmd.func1:33\n\tstorj.io/common/process.cleanup.func1.4:392\n\tstorj.io/common/process.cleanup.func1:410\n\tgithub.com/spf13/cobra.(*Command).execute:983\n\tgithub.com/spf13/cobra.(*Command).ExecuteC:1115\n\tgithub.com/spf13/cobra.(*Command).Execute:1039\n\tstorj.io/common/process.ExecWithCustomOptions:112\n\tstorj.io/common/process.ExecWithCustomConfigAndLogger:77\n\tstorj.io/common/process.ExecWithCustomConfig:72\n\tstorj.io/common/process.Exec:62\n\tmain.(*service).Execute.func1:107\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:78", "errorVerbose": "Error opening database on storagenode: database: notifications opening file \"D:\\\\notifications.db\" failed: context canceled\n\tstorj.io/storj/storagenode/storagenodedb.(*DB).openDatabaseWithStat:407\n\tstorj.io/storj/storagenode/storagenodedb.(*DB).openDatabase:384\n\tstorj.io/storj/storagenode/storagenodedb.(*DB).openExistingDatabase:379\n\tstorj.io/storj/storagenode/storagenodedb.(*DB).openDatabases:354\n\tstorj.io/storj/storagenode/storagenodedb.OpenExisting:319\n\tmain.cmdRun:67\n\tmain.newRunCmd.func1:33\n\tstorj.io/common/process.cleanup.func1.4:392\n\tstorj.io/common/process.cleanup.func1:410\n\tgithub.com/spf13/cobra.(*Command).execute:983\n\tgithub.com/spf13/cobra.(*Command).ExecuteC:1115\n\tgithub.com/spf13/cobra.(*Command).Execute:1039\n\tstorj.io/common/process.ExecWithCustomOptions:112\n\tstorj.io/common/process.ExecWithCustomConfigAndLogger:77\n\tstorj.io/common/process.ExecWithCustomConfig:72\n\tstorj.io/common/process.Exec:62\n\tmain.(*service).Execute.func1:107\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:78\n\tmain.cmdRun:69\n\tmain.newRunCmd.func1:33\n\tstorj.io/common/process.cleanup.func1.4:392\n\tstorj.io/common/process.cleanup.func1:410\n\tgithub.com/spf13/cobra.(*Command).execute:983\n\tgithub.com/spf13/cobra.(*Command).ExecuteC:1115\n\tgithub.com/spf13/cobra.(*Command).Execute:1039\n\tstorj.io/common/process.ExecWithCustomOptions:112\n\tstorj.io/common/process.ExecWithCustomConfigAndLogger:77\n\tstorj.io/common/process.ExecWithCustomConfig:72\n\tstorj.io/common/process.Exec:62\n\tmain.(*service).Execute.func1:107\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:78"}
2024-08-28T19:57:17-04:00 INFO Configuration loaded {"Location": "C:\\Program Files\\Storj\\Storage Node\\config.yaml"}
2024-08-28T19:57:17-04:00 INFO Anonymized tracing enabled
2024-08-28T19:57:17-04:00 INFO Operator email {"Address":
2024-08-28T19:57:17-04:00 INFO Operator wallet {"Address":
2024-08-28T20:00:12-04:00 INFO Telemetry enabled {"instance ID": "12fBg4aEZK3NFKJcwweH779HRcNydc2QvL96h8aSADpe2D7XocC"}
2024-08-28T20:00:12-04:00 INFO Event collection enabled {"instance ID": "12fBg4aEZK3NFKJcwweH779HRcNydc2QvL96h8aSADpe2D7XocC"}
2024-08-28T20:00:12-04:00 INFO db.migration Database Version {"version": 61}
2024-08-28T20:00:28-04:00 INFO preflight:localtime start checking local system clock with trusted satellites' system clock.
2024-08-28T20:00:28-04:00 INFO preflight:localtime local system clock is in sync with trusted satellites' system clock.
2024-08-28T20:00:28-04:00 INFO Node 12fBg4aEZK3NFKJcwweH779HRcNydc2QvL96h8aSADpe2D7XocC started
2024-08-28T20:00:28-04:00 INFO trust Scheduling next refresh {"after": "8h35m35.623356117s"}
2024-08-28T20:00:28-04:00 INFO Public server started on [::]:28967
2024-08-28T20:00:28-04:00 INFO collector expired pieces collection started
2024-08-28T20:00:28-04:00 INFO bandwidth Persisting bandwidth usage cache to db
2024-08-28T20:00:28-04:00 INFO Private server started on 127.0.0.1:7778
2024-08-28T20:00:28-04:00 INFO retain Prepared to run a Retain request. {"cachePath": "C:\\Program Files\\Storj\\Storage Node/retain", "Created Before": "2024-08-19T13:23:28-04:00", "Filter Size": 34764995, "Satellite ID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE"}
2024-08-28T20:00:28-04:00 WARN piecestore:monitor Disk space is less than requested. Allocated space is {"bytes": 9230473047130}
2024-08-28T20:00:28-04:00 INFO pieces:trash emptying trash started {"Satellite ID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE"}
2024-08-28T20:00:28-04:00 INFO pieces used-space-filewalker started {"Satellite ID": "12rfG3sh9NCWiX3ivPjq2HtdLmbqCrvHVEzJubnzFzosMuawymB"}
2024-08-28T20:00:28-04:00 INFO lazyfilewalker.used-space-filewalker starting subprocess {"satelliteID": "12rfG3sh9NCWiX3ivPjq2HtdLmbqCrvHVEzJubnzFzosMuawymB"}
2024-08-28T20:00:28-04:00 INFO lazyfilewalker.trash-cleanup-filewalker starting subprocess {"satelliteID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE"}
2024-08-28T20:00:29-04:00 INFO lazyfilewalker.gc-filewalker starting subprocess {"satelliteID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE"}
2024-08-28T20:00:29-04:00 INFO lazyfilewalker.trash-cleanup-filewalker subprocess started {"satelliteID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE"}
2024-08-28T20:00:29-04:00 INFO lazyfilewalker.used-space-filewalker subprocess started {"satelliteID": "12rfG3sh9NCWiX3ivPjq2HtdLmbqCrvHVEzJubnzFzosMuawymB"}
2024-08-28T20:00:29-04:00 INFO lazyfilewalker.trash-cleanup-filewalker.subprocess trash-filewalker started {"satelliteID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE", "Process": "storagenode", "dateBefore": "2024-08-21T20:00:28-04:00"}
2024-08-28T20:00:29-04:00 INFO lazyfilewalker.gc-filewalker subprocess started {"satelliteID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE"}
2024-08-28T20:00:35-04:00 INFO lazyfilewalker.trash-cleanup-filewalker.subprocess Database started {"satelliteID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE", "Process": "storagenode"}
2024-08-28T20:00:35-04:00 INFO lazyfilewalker.gc-filewalker.subprocess Database started {"satelliteID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE", "Process": "storagenode"}
2024-08-28T20:00:35-04:00 INFO lazyfilewalker.gc-filewalker.subprocess gc-filewalker started {"satelliteID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE", "Process": "storagenode", "createdBefore": "2024-08-19T13:23:28-04:00", "bloomFilterSize": 34764995}
2024-08-28T20:00:35-04:00 INFO lazyfilewalker.used-space-filewalker.subprocess Database started {"satelliteID": "12rfG3sh9NCWiX3ivPjq2HtdLmbqCrvHVEzJubnzFzosMuawymB", "Process": "storagenode"}
2024-08-28T20:00:35-04:00 INFO orders.1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE sending {"count": 23}
2024-08-28T20:00:35-04:00 INFO orders.12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S sending {"count": 15489}
2024-08-28T20:00:35-04:00 INFO orders.121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6 sending {"count": 406}
2024-08-28T20:00:35-04:00 INFO orders.12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs sending {"count": 2741}
2024-08-28T20:00:35-04:00 INFO orders.1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE finished
2024-08-28T20:00:36-04:00 INFO orders.12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S finished
2024-08-28T20:00:36-04:00 INFO orders.121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6 finished
2024-08-28T20:00:37-04:00 INFO lazyfilewalker.trash-cleanup-filewalker.subprocess trash-filewalker completed {"satelliteID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE", "Process": "storagenode", "bytesDeleted": 0, "numKeysDeleted": 0}
2024-08-28T20:00:40-04:00 INFO orders.12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs finished
2024-08-28T20:00:45-04:00 INFO lazyfilewalker.trash-cleanup-filewalker subprocess finished successfully {"satelliteID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE"}
2024-08-28T20:00:45-04:00 INFO pieces:trash emptying trash finished {"Satellite ID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE", "elapsed": "16.1698044s"}
2024-08-28T20:00:45-04:00 INFO pieces:trash emptying trash started {"Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs"}
2024-08-28T20:00:45-04:00 INFO lazyfilewalker.trash-cleanup-filewalker starting subprocess {"satelliteID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs"}
2024-08-28T20:00:45-04:00 INFO lazyfilewalker.trash-cleanup-filewalker subprocess started {"satelliteID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs"}
2024-08-28T20:00:45-04:00 INFO piecestore download started {"Piece ID": "E45J3YF7JB2OVKAWJUKLMQZKQG5VYL5FL3Z2AKEQ5DREJ6ZQF26Q", "Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Action": "GET_REPAIR", "Offset": 0, "Size": 9216, "Remote Address": "162.55.54.2:14351"}
2024-08-28T20:00:45-04:00 INFO orders.121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6 sending {"count": 222}
2024-08-28T20:00:45-04:00 INFO orders.12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs sending {"count": 1519}
2024-08-28T20:00:45-04:00 INFO orders.1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE sending {"count": 8}
2024-08-28T20:00:45-04:00 INFO orders.12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S sending {"count": 7347}
2024-08-28T20:00:45-04:00 INFO lazyfilewalker.trash-cleanup-filewalker.subprocess trash-filewalker started {"satelliteID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Process": "storagenode", "dateBefore": "2024-08-21T20:00:45-04:00"}
2024-08-28T20:00:45-04:00 INFO lazyfilewalker.trash-cleanup-filewalker.subprocess Database started {"satelliteID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Process": "storagenode"}
2024-08-28T20:00:45-04:00 INFO orders.1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE finished
2024-08-28T20:00:46-04:00 INFO lazyfilewalker.trash-cleanup-filewalker.subprocess trash-filewalker completed {"satelliteID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Process": "storagenode", "bytesDeleted": 0, "numKeysDeleted": 0}
2024-08-28T20:00:46-04:00 INFO orders.121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6 finished
2024-08-28T20:00:46-04:00 INFO piecestore download started {"Piece ID": "4OXLQHLOQQDGTJYXIY6K6VG7AEHXJF5OCEAHDNS64RAMKQTEITAQ", "Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Action": "GET", "Offset": 0, "Size": 2304, "Remote Address": "79.127.226.98:34010"}
2024-08-28T20:00:47-04:00 INFO orders.12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs finished
2024-08-28T20:00:47-04:00 INFO orders.12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S finished
2024-08-28T20:00:49-04:00 INFO piecestore download started {"Piece ID": "4OXLQHLOQQDGTJYXIY6K6VG7AEHXJF5OCEAHDNS64RAMKQTEITAQ", "Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Action": "GET", "Offset": 0, "Size": 2304, "Remote Address": "79.127.226.97:52106"}
2024-08-28T20:00:49-04:00 INFO piecestore download started {"Piece ID": "4OXLQHLOQQDGTJYXIY6K6VG7AEHXJF5OCEAHDNS64RAMKQTEITAQ", "Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Action": "GET", "Offset": 0, "Size": 2304, "Remote Address": "79.127.219.36:47024"}
2024-08-28T20:00:54-04:00 INFO lazyfilewalker.trash-cleanup-filewalker subprocess finished successfully {"satelliteID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs"}
2024-08-28T20:00:54-04:00 INFO pieces:trash emptying trash finished {"Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "elapsed": "9.332193s"}
2024-08-28T20:00:54-04:00 INFO pieces:trash emptying trash started {"Satellite ID": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6"}
2024-08-28T20:00:54-04:00 INFO lazyfilewalker.trash-cleanup-filewalker starting subprocess {"satelliteID": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6"}
2024-08-28T20:00:54-04:00 INFO lazyfilewalker.trash-cleanup-filewalker subprocess started {"satelliteID": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6"}
2024-08-28T20:00:54-04:00 INFO piecestore download started {"Piece ID": "4OXLQHLOQQDGTJYXIY6K6VG7AEHXJF5OCEAHDNS64RAMKQTEITAQ", "Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Action": "GET", "Offset": 0, "Size": 2304, "Remote Address": "79.127.226.98:44676"}
2024-08-28T20:00:54-04:00 INFO piecestore download started {"Piece ID": "4OXLQHLOQQDGTJYXIY6K6VG7AEHXJF5OCEAHDNS64RAMKQTEITAQ", "Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Action": "GET", "Offset": 0, "Size": 2304, "Remote Address": "79.127.219.34:47458"}
2024-08-28T20:00:54-04:00 INFO piecestore download started {"Piece ID": "4OXLQHLOQQDGTJYXIY6K6VG7AEHXJF5OCEAHDNS64RAMKQTEITAQ", "Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Action": "GET", "Offset": 0, "Size": 2304, "Remote Address": "109.61.92.66:37224"}
2024-08-28T20:00:54-04:00 INFO piecestore download started {"Piece ID": "TJOMHQ7UCJMJFK7KMFL76MWERL2ZUTAHHKAMEPHBWNPSRHQ2H43Q", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "GET_REPAIR", "Offset": 0, "Size": 362752, "Remote Address": "5.161.109.216:40604"}
2024-08-28T20:00:54-04:00 INFO lazyfilewalker.gc-filewalker.subprocess Got a signal from the OS: "terminated" {"satelliteID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE", "Process": "storagenode"}
2024-08-28T20:00:54-04:00 INFO lazyfilewalker.used-space-filewalker.subprocess Got a signal from the OS: "terminated" {"satelliteID": "12rfG3sh9NCWiX3ivPjq2HtdLmbqCrvHVEzJubnzFzosMuawymB", "Process": "storagenode"}
2024-08-28T20:00:54-04:00 INFO Got a signal from the OS: "terminated"
2024-08-28T20:00:54-04:00 ERROR nodestats:cache Get pricing-model/join date failed {"error": "context canceled"}
2024-08-28T20:00:54-04:00 INFO lazyfilewalker.trash-cleanup-filewalker subprocess exited with status {"satelliteID": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6", "status": 1, "error": "exit status 1"}
2024-08-28T20:00:54-04:00 ERROR pieces:trash emptying trash failed {"error": "pieces error: lazyfilewalker: exit status 1", "errorVerbose": "pieces error: lazyfilewalker: exit status 1\n\tstorj.io/storj/storagenode/pieces/lazyfilewalker.(*process).run:85\n\tstorj.io/storj/storagenode/pieces/lazyfilewalker.(*Supervisor).WalkCleanupTrash:187\n\tstorj.io/storj/storagenode/pieces.(*Store).EmptyTrash:433\n\tstorj.io/storj/storagenode/pieces.(*TrashChore).Run.func1.1:84\n\tstorj.io/common/sync2.(*Workplace).Start.func1:89"}
2024-08-28T20:00:59-04:00 ERROR collector error during expired pieces collection {"count": 0, "error": "pieces error: context canceled", "errorVerbose": "pieces error: context canceled\n\tstorj.io/storj/storagenode/pieces.(*Store).GetExpiredBatchSkipV0:611\n\tstorj.io/storj/storagenode/collector.(*Service).Collect:99\n\tstorj.io/storj/storagenode/collector.(*Service).Run.func1:68\n\tstorj.io/common/sync2.(*Cycle).Run:99\n\tstorj.io/storj/storagenode/collector.(*Service).Run:64\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2.1:87\n\truntime/pprof.Do:44\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2:86\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:78"}
2024-08-28T20:00:59-04:00 ERROR collector error during collecting pieces: {"error": "pieces error: context canceled", "errorVerbose": "pieces error: context canceled\n\tstorj.io/storj/storagenode/pieces.(*Store).GetExpiredBatchSkipV0:611\n\tstorj.io/storj/storagenode/collector.(*Service).Collect:99\n\tstorj.io/storj/storagenode/collector.(*Service).Run.func1:68\n\tstorj.io/common/sync2.(*Cycle).Run:99\n\tstorj.io/storj/storagenode/collector.(*Service).Run:64\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2.1:87\n\truntime/pprof.Do:44\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2:86\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:78"}
2024-08-28T20:00:59-04:00 INFO lazyfilewalker.gc-filewalker subprocess exited with status {"satelliteID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE", "status": 1, "error": "exit status 1"}
2024-08-28T20:00:59-04:00 ERROR pieces lazyfilewalker failed {"error": "lazyfilewalker: exit status 1", "errorVerbose": "lazyfilewalker: exit status 1\n\tstorj.io/storj/storagenode/pieces/lazyfilewalker.(*process).run:85\n\tstorj.io/storj/storagenode/pieces/lazyfilewalker.(*Supervisor).WalkSatellitePiecesToTrash:160\n\tstorj.io/storj/storagenode/pieces.(*Store).WalkSatellitePiecesToTrash:572\n\tstorj.io/storj/storagenode/retain.(*Service).retainPieces:379\n\tstorj.io/storj/storagenode/retain.(*Service).Run.func2:265\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:78"}
2024-08-28T20:00:59-04:00 ERROR filewalker failed to get progress from database
2024-08-28T20:01:01-04:00 INFO lazyfilewalker.used-space-filewalker subprocess exited with status {"satelliteID": "12rfG3sh9NCWiX3ivPjq2HtdLmbqCrvHVEzJubnzFzosMuawymB", "status": 1, "error": "exit status 1"}
2024-08-28T20:01:04-04:00 ERROR pieces used-space-filewalker failed {"Satellite ID": "12rfG3sh9NCWiX3ivPjq2HtdLmbqCrvHVEzJubnzFzosMuawymB", "Lazy File Walker": true, "error": "lazyfilewalker: exit status 1", "errorVerbose": "lazyfilewalker: exit status 1\n\tstorj.io/storj/storagenode/pieces/lazyfilewalker.(*process).run:85\n\tstorj.io/storj/storagenode/pieces/lazyfilewalker.(*Supervisor).WalkAndComputeSpaceUsedBySatellite:130\n\tstorj.io/storj/storagenode/pieces.(*Store).WalkAndComputeSpaceUsedBySatellite:719\n\tstorj.io/storj/storagenode/pieces.(*CacheService).Run.func1:81\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:78"}
2024-08-28T20:01:04-04:00 INFO Stop/Shutdown request received.
2024-08-28T20:01:09-04:00 WARN servers service takes long to shutdown {"name": "server"}
2024-08-28T20:01:09-04:00 WARN services service takes long to shutdown {"name": "retain"}
2024-08-28T20:01:09-04:00 WARN services service takes long to shutdown {"name": "piecestore:cache"}
2024-08-28T20:01:09-04:00 INFO services slow shutdown {"stack": "goroutine 1103\n\tstorj.io/storj/private/lifecycle.(*Group).logStackTrace.func1:107\n\tsync.(*Once).doSlow:74\n\tsync.(*Once).Do:65\n\tstorj.io/storj/private/lifecycle.(*Group).logStackTrace:104\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func1:77\n\truntime/pprof.Do:44\n\ngoroutine 1\n\tsyscall.SyscallN:557\n\tsyscall.Syscall:495\n\tgolang.org/x/sys/windows.StartServiceCtrlDispatcher:1363\n\tgolang.org/x/sys/windows/svc.Run:294\n\tmain.startAsService:62\n\tmain.main:26\n\ngoroutine 17\n\tgolang.org/x/sys/windows/svc.serviceMain:246\n\ngoroutine 5\n\tgo.opencensus.io/stats/view.(*worker).start:292\n\ngoroutine 6\n\tgithub.com/golang/glog.(*fileSink).flushDaemon:351\n\ngoroutine 50\n\truntime.gopark:381\n\truntime.block:103\n\truntime.ctrlHandler:1205\n\truntime.call16:728\n\truntime.callbackWrap:396\n\truntime.cgocallbackg1:315\n\truntime.cgocallbackg:234\n\truntime.cgocallbackg:1\n\truntime.cgocallback:998\n\truntime.goexit:1598\n\ngoroutine 51\n\tsync.runtime_Semacquire:62\n\tsync.(*WaitGroup).Wait:116\n\tgolang.org/x/sync/errgroup.(*Group).Wait:56\n\tmain.(*service).Execute.func2:110\n\tmain.(*service).Execute:142\n\tgolang.org/x/sys/windows/svc.serviceMain.func2:234\n\ngoroutine 52\n\tsync.runtime_Semacquire:62\n\tsync.(*WaitGroup).Wait:116\n\tgolang.org/x/sync/errgroup.(*Group).Wait:56\n\tstorj.io/storj/storagenode.(*Peer).Run:981\n\tmain.cmdRun:125\n\tmain.newRunCmd.func1:33\n\tstorj.io/common/process.cleanup.func1.4:392\n\tstorj.io/common/process.cleanup.func1:410\n\tgithub.com/spf13/cobra.(*Command).execute:983\n\tgithub.com/spf13/cobra.(*Command).ExecuteC:1115\n\tgithub.com/spf13/cobra.(*Command).Execute:1039\n\tstorj.io/common/process.ExecWithCustomOptions:112\n\tstorj.io/common/process.ExecWithCustomConfigAndLogger:77\n\tstorj.io/common/process.ExecWithCustomConfig:72\n\tstorj.io/common/process.Exec:62\n\tmain.(*service).Execute.func1:107\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:78\n\ngoroutine 53\n\tstorj.io/monkit-jaeger.(*ThriftCollector).Run:174\n\tstorj.io/common/process.cleanup.func1.2:351\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:78\n\ngoroutine 289\n\tdatabase/sql.(*DB).connectionCleaner:1061\n\ngoroutine 54\n\tsync.runtime_Semacquire:62\n\tsync.(*WaitGroup).Wait:116\n\tgolang.org/x/sync/errgroup.(*Group).Wait:56\n\tstorj.io/common/debug.(*Server).Run:205\n\tstorj.io/common/process.initDebug.func1:40\n\ngoroutine 55\n\tos/signal.signal_recv:152\n\tos/signal.loop:23\n\ngoroutine 67\n\tstorj.io/common/debug.(*Server).Run.func3:184\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:78\n\ngoroutine 68\n\tstorj.io/drpc/drpcmigrate.(*ListenMux).Run:90\n\tstorj.io/common/debug.(*Server).Run.func4:188\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:78\n\ngoroutine 69\n\tstorj.io/drpc/drpcmigrate.(*listener).Accept:37\n\tnet/http.(*Server).Serve:3059\n\tstorj.io/common/debug.(*Server).Run.func5:197\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:78\n\ngoroutine 28\n\tstorj.io/drpc/drpcmigrate.(*ListenMux).monitorContext:106\n\ngoroutine 29\n\tinternal/poll.runtime_pollWait:306\n\tinternal/poll.(*pollDesc).wait:84\n\tinternal/poll.execIO:175\n\tinternal/poll.(*FD).acceptOne:936\n\tinternal/poll.(*FD).Accept:970\n\tnet.(*netFD).accept:139\n\tnet.(*TCPListener).accept:148\n\tnet.(*TCPListener).Accept:297\n\tstorj.io/drpc/drpcmigrate.(*ListenMux).monitorBase:115\n\ngoroutine 
1104\n\tsync.runtime_Semacquire:62\n\tsync.(*WaitGroup).Wait:116\n\tgolang.org/x/sync/errgroup.(*Group).Wait:56\n\tstorj.io/storj/storagenode/retain.(*Service).Run:282\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2.1:87\n\truntime/pprof.Do:44\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2:86\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:78\n\ngoroutine 1100\n\tsync.runtime_Semacquire:62\n\tsync.(*WaitGroup).Wait:116\n\tgolang.org/x/sync/errgroup.(*Group).Wait:56\n\tstorj.io/storj/storagenode/pieces.(*CacheService).Run:130\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2.1:87\n\truntime/pprof.Do:44\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2:86\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:78\n\ngoroutine 1144\n\tsyscall.SyscallN:557\n\tsyscall.Syscall9:507\n\tgolang.org/x/sys/windows.CreateFile:1739\n\tstorj.io/storj/storagenode/blobstore/filestore.openFileReadOnly:104\n\tstorj.io/storj/storagenode/blobstore/filestore.(*Dir).Open:352\n\tstorj.io/storj/storagenode/blobstore/filestore.(*blobStore).Open:94\n\tstorj.io/storj/storagenode/pieces.(*Store).Reader:305\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).Download:719\n\tstorj.io/common/pb.DRPCPiecestoreDescription.Method.func2:302\n\tstorj.io/drpc/drpcmux.(*Mux).HandleRPC:33\n\tstorj.io/common/rpc/rpctracing.(*Handler).HandleRPC:62\n\tstorj.io/common/experiment.(*Handler).HandleRPC:43\n\tstorj.io/drpc/drpcserver.(*Server).handleRPC:166\n\tstorj.io/drpc/drpcserver.(*Server).ServeOne:108\n\tstorj.io/drpc/drpcserver.(*Server).Serve.func2:156\n\tstorj.io/drpc/drpcctx.(*Tracker).track:35\n\ngoroutine 1025\n\tsyscall.SyscallN:557\n\tsyscall.Syscall:495\n\tsyscall.findNextFile1:625\n\tsyscall.FindNextFile:1144\n\tos.(*File).readdir:31\n\tos.(*File).Readdirnames:70\n\tstorj.io/storj/storagenode/blobstore/filestore.readAllDirNames:919\n\tstorj.io/storj/storagenode/blobstore/filestore.(*Dir).walkNamespaceUnderPath:868\n\tstorj.io/storj/storagenode/blobstore/filestore.(*Dir).walkNamespaceInPath:864\n\tstorj.io/storj/storagenode/blobstore/filestore.(*Dir).WalkNamespace:857\n\tstorj.io/storj/storagenode/blobstore/filestore.(*blobStore).WalkNamespace:328\n\tstorj.io/storj/storagenode/pieces.(*FileWalker).WalkSatellitePieces:54\n\tstorj.io/storj/storagenode/pieces.(*FileWalker).WalkAndComputeSpaceUsedBySatellite:79\n\tstorj.io/storj/storagenode/pieces.(*Store).WalkAndComputeSpaceUsedBySatellite:728\n\tstorj.io/storj/storagenode/pieces.(*CacheService).Run.func1:81\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:78\n\ngoroutine 1099\n\tsync.runtime_SemacquireMutex:77\n\tsync.(*Mutex).lockSlow:171\n\tsync.(*Mutex).Lock:90\n\tsync.(*Once).doSlow:70\n\tsync.(*Once).Do:65\n\tstorj.io/storj/private/lifecycle.(*Group).logStackTrace:104\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func1:77\n\truntime/pprof.Do:44\n\ngoroutine 412\n\tdatabase/sql.(*DB).connectionOpener:1218\n\ngoroutine 
106\n\tinternal/poll.runtime_pollWait:306\n\tinternal/poll.(*pollDesc).wait:84\n\tinternal/poll.execIO:175\n\tinternal/poll.(*FD).Read:436\n\tnet.(*netFD).Read:55\n\tnet.(*conn).Read:183\n\tcrypto/tls.(*atLeastReader).Read:788\n\tbytes.(*Buffer).ReadFrom:202\n\tcrypto/tls.(*Conn).readFromUntil:810\n\tcrypto/tls.(*Conn).readRecordOrCCS:617\n\tcrypto/tls.(*Conn).readRecord:583\n\tcrypto/tls.(*Conn).Read:1316\n\tbufio.(*Reader).Read:237\n\tio.ReadAtLeast:332\n\tio.ReadFull:351\n\tnet/http.http2readFrameHeader:1567\n\tnet/http.(*http2Framer).ReadFrame:1831\n\tnet/http.(*http2clientConnReadLoop).run:9250\n\tnet/http.(*http2ClientConn).readLoop:9145\n\ngoroutine 45\n\tdatabase/sql.(*DB).connectionOpener:1218\n\ngoroutine 46\n\tdatabase/sql.(*DB).connectionCleaner:1061\n\ngoroutine 419\n\tdatabase/sql.(*DB).connectionOpener:1218\n\ngoroutine 420\n\tdatabase/sql.(*DB).connectionCleaner:1061\n\ngoroutine 240\n\tdatabase/sql.(*DB).connectionCleaner:1061\n\ngoroutine 239\n\tdatabase/sql.(*DB).connectionOpener:1218\n\ngoroutine 238\n\tdatabase/sql.(*DB).connectionCleaner:1061\n\ngoroutine 237\n\tdatabase/sql.(*DB).connectionOpener:1218\n\ngoroutine 236\n\tdatabase/sql.(*DB).connectionCleaner:1061\n\ngoroutine 235\n\tdatabase/sql.(*DB).connectionOpener:1218\n\ngoroutine 234\n\tdatabase/sql.(*DB).connectionCleaner:1061\n\ngoroutine 233\n\tdatabase/sql.(*DB).connectionOpener:1218\n\ngoroutine 231\n\tdatabase/sql.(*DB).connectionOpener:1218\n\ngoroutine 232\n\tdatabase/sql.(*DB).connectionCleaner:1061\n\ngoroutine 230\n\tdatabase/sql.(*DB).connectionCleaner:1061\n\ngoroutine 317\n\tdatabase/sql.(*DB).connectionCleaner:1061\n\ngoroutine 229\n\tdatabase/sql.(*DB).connectionOpener:1218\n\ngoroutine 228\n\tdatabase/sql.(*DB).connectionCleaner:1061\n\ngoroutine 1747\n\tinternal/poll.runtime_pollWait:306\n\tinternal/poll.(*pollDesc).wait:84\n\tinternal/poll.execIO:175\n\tinternal/poll.(*FD).Read:436\n\tnet.(*netFD).Read:55\n\tnet.(*conn).Read:183\n\tio.ReadAtLeast:332\n\tio.ReadFull:351\n\tgithub.com/jtolio/n
@support27 please follow these exact steps and all of your issues will be resolved:
Steps to follow:
1. Press the ` key on your keyboard. It is located to the left of the number 1 on top of your QWERTY keys, and just above the Tab key. If that makes it difficult to locate, you can find it just below the Esc key (top left) of your keyboard.
2. Press the ` key once more.
3. Press the ` key once more. You should now have three (3) ` in your reply, like so: ```
4. Press the Enter key. It is the large key in the third row of the keyboard (counting from the row nearest to you), to the right of the ' key.
5. Select the part of the log you want to copy. You can do that by clicking (using the left button on your mouse, or trackpad) just to the left of the first letter in the first line and, without releasing that button, moving the mouse to the last letter of the last line. You should see the text turning to a blue background as you select more and more letters.
6. Press the Ctrl key (it is in the bottom left of your keyboard, unless you are using a Lenovo ThinkPad laptop, in which case it is the button to the right of the Fn key at the bottom left of your keyboard).
7. Without releasing the Ctrl key you pressed in step 6, press the C key on your keyboard. You can find it in the second row of the keyboard, counting from the row nearest to you, just above the large unlabelled key.
8. Click just under the three (3) ``` that you have in your reply.
9. Press the Ctrl key again (before pressing it, find it using the directions in step 6).
10. Without releasing the Ctrl key, press the V key. You can find it in the second row of the keyboard, counting from the row nearest to you, just above the large unlabelled key.
11. You should now see the three (3) ``` and your log just below them.
12. Using your mouse, click to the right of the last letter of the last line in your log.
13. Press the Enter key. You can use the directions in step 4 to locate it.
14. Press the ` key again. You can use the directions in step 1 to locate it.
15. Press the ` key again.
16. Press the ` key again. You should now have three (3) ``` in the last line of your reply.
17. This is the most critical step. If you fail to follow it, the universe will implode, so be EXTREMELY careful: using your mouse, left click (by pressing the left button on it) the blue Reply button just below your text.
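For reference, when all the steps above are done, the reply you submit should look roughly like this (the middle line being whatever you copied from your log):

```
2024-08-28T19:57:06-04:00 FATAL Unrecoverable error {"error": "..."}
```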
Further: the dashboard returned, but it is now displaying this, and all the suspension & audit scores are at 0%.
It seems you didn’t move the databases to the system drive or an SSD, and your disk is too slow to respond to allow the node to start.
Please move the databases to an SSD:
Then monitor your logs for other Unrecoverable or FATAL errors.
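For what it’s worth, moving the databases usually just means pointing the node at a folder on the SSD or system drive in config.yaml, roughly like the sketch below (the folder path is only an example): stop the storagenode service, copy the existing *.db files into the new folder, add the line, then start the service again.

```
# config.yaml - example path only; use a folder on your SSD or system drive
# that already contains the copied *.db files
storage2.database-dir: C:\storagenode-dbs
```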
That means your node is offline, as you may notice from the indicator in the upper-left corner of your dashboard.
There are several reasons:

1. The service is restarting or stopped.
2. Your external address and/or port are wrong, or the address does not match your WAN IP on your router and/or the IP shown on the Open Port Check Tool - Test Port Forwarding on Your Router site.
   2.1. Make sure that the WAN IP on your router matches the IP on the site above, otherwise the port forwarding rule on your router will not work.
   2.2. You need to update the contact.external-address: option with the correct external address and port in the following format: address:port (see the example snippet after this list).
   2.3. Save the config and restart the node.
3. There is a difference between the WAN IP and the IP on the yougetsignal site. In that case you need to contact your ISP to enable a public IP; it could be dynamic, but it must be public (so that the WAN IP matches the IP from yougetsignal).
   3.1. In the case of a static public IP you may use it directly in the contact.external-address: parameter.
   3.2. In the case of a dynamic public IP you can register your own DDNS hostname and use it instead of the IP address in the contact.external-address: parameter.
4. The local IP of your PC may have changed while the router still has the old IP in the port forwarding rule; in that case you need to update the port forwarding rule on your router to use the current local IP of your PC.
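As a sketch of what 2.2, 3.1 and 3.2 look like in practice, the relevant line in config.yaml would be something like the following (the hostname and port are placeholders; use your own DDNS hostname or static public IP and the port you actually forwarded, then restart the node):

```
# config.yaml - placeholder values; replace with your own DDNS hostname
# (or static public IP) and the port you forwarded on your router
contact.external-address: mynode.example-ddns.org:28967
```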