Release preparation v1.108

I’m monitoring logs closely - no fatal logs at all.

These errors happen about 5 to 20 times a day. The nodes are getting around 100 Mbit/s of ingress, so almost all other uploads complete fine. To me this looks like a race condition or a client-side error.

It’s possible; however, this particular database shouldn’t get locked, I think. Do you only see the “context canceled” errors? (These mean either that the client canceled the transfer, or that the storage medium can’t keep up and times out.)
By the way, do you see any “database is locked” errors?

No “database is locked” errors - I’ve never had problems with that. The databases have always been on a fast NVMe drive.

Since updating to v1.107.3 I’ve also had another type of error: EOF errors. But these happen only 2 to 6 times a day.

These errors are related to PUT/GET, so they’re the usual long-tail cancellation errors (your node lost the race).
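
A toy illustration of what that looks like from the node’s side (this is not Storj/uplink code, just a generic Go pattern): the client uploads to more nodes than it needs and cancels the slowest ones, which show up in their logs as “context canceled”.

```go
package main

import (
	"context"
	"fmt"
	"math/rand"
	"sync"
	"time"
)

func uploadToNode(ctx context.Context, node int, done chan<- int) {
	// Simulate a transfer whose duration varies per node.
	transfer := time.After(time.Duration(rand.Intn(300)) * time.Millisecond)
	select {
	case <-transfer:
		done <- node // this node won its spot
	case <-ctx.Done():
		// What a storagenode logs as "context canceled": the client no
		// longer needs this piece because enough faster nodes finished.
		fmt.Printf("node %d: canceled (lost the race)\n", node)
	}
}

func main() {
	const nodes = 5
	const needed = 3 // finish as soon as this many nodes succeed

	ctx, cancel := context.WithCancel(context.Background())
	defer cancel()

	done := make(chan int, nodes)
	var wg sync.WaitGroup
	for i := 0; i < nodes; i++ {
		wg.Add(1)
		go func(n int) { defer wg.Done(); uploadToNode(ctx, n, done) }(i)
	}

	for i := 0; i < needed; i++ {
		fmt.Printf("node %d finished\n", <-done)
	}
	cancel() // cut the long tail: the remaining nodes see "context canceled"
	wg.Wait()
}
```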

Yes, it’s not stable yet. It was added only to start testing and measurements; that’s why there was no announcement. But it’s safe to turn it off and/or delete the cache directory at any time.

And thanks for the report. I will try to reproduce it…

I forgot to update: the filewalker is done, and the filestatcache directory size is around 3.3 GB for ~11 TB, or ~50M pieces.

Now we just need a way to quickly move pieces and empty the trash…

It would be best not to move the piece at all.

Why not delay bloom filter processing by 7 days and then just delete the files right away? Cancelling the trashing in case of a bug or similar would then simply mean removing the bloom filter instead.

I realize the bugs etc. that this is designed to protect against might not be found in time, though, because the pieces would still be available and nobody could tell the difference between a piece to keep and a piece that will eventually be deleted. I suppose the node could check the bloom filter for each request, but that adds even more complexity.
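
A rough sketch of what that per-request check might look like, purely for illustration (using a generic bloom filter library, not the storagenode’s actual retain code; in Storj’s GC the filter lists the pieces to keep, so anything missing from it is a deletion candidate):

```go
package main

import (
	"fmt"

	"github.com/bits-and-blooms/bloom/v3"
)

// pendingDeletion reports whether a requested piece is absent from the
// delayed retain filter, i.e. it would be removed once the filter is
// finally processed. (retain == nil means no filter is pending.)
func pendingDeletion(retain *bloom.BloomFilter, pieceID []byte) bool {
	return retain != nil && !retain.Test(pieceID)
}

func main() {
	// Retain filter sized for ~1M pieces with a 1% false-positive rate.
	retain := bloom.NewWithEstimates(1_000_000, 0.01)
	retain.Add([]byte("piece-to-keep"))

	fmt.Println(pendingDeletion(retain, []byte("piece-to-keep"))) // false
	fmt.Println(pendingDeletion(retain, []byte("garbage-piece"))) // true (very likely)
}
```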

It would be possible to achieve, but perhaps not desirable.

That would bring almost no performance gain. Garbage collection would still take the same time; we would only cut out the time the trash cleanup job consumes, and that isn’t much compared to the garbage collection runtime.

Yeah, not to mention the added complexity and risk introduced in a subsystem designed to reduce risk…

It was just an idea, I never said it was a good one :upside_down_face:

With that said, one of my first nodes on 1.108 is almost finished with its first used-space filewalker. The cache dir is 227M (the node is quite small, around 900 GB). So far so good with this one; I’ll report if I see any trouble.

This might have been brought up before, but maybe it’s an idea for @elek. As I understand it, badger is just a key-value store, so during garbage collection we could create a new store entry for each piece that should be trashed (maybe mapping prefixes to a list of pieces, or something similar). Then during trash cleanup the node would just walk over the store and delete the pieces. We’d save all the IOPS of moving pieces.

GC would probably also need a way to check all of these GC key-value stores, or else its walk might take some extra time, since the pieces were never actually moved out of the way.
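
A rough sketch of the idea (illustrative only; the directory layout, key format, and paths are made up, and this is not how the storagenode actually stores pieces):

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"

	badger "github.com/dgraph-io/badger/v4"
)

// markForDeletion records a piece in the per-GC-run store instead of
// renaming its file into the trash directory.
func markForDeletion(db *badger.DB, satelliteID, pieceID string) error {
	return db.Update(func(txn *badger.Txn) error {
		// The key alone carries the information; the value stays empty.
		return txn.Set([]byte(satelliteID+"/"+pieceID), []byte{})
	})
}

// cleanup walks the store after the grace period and unlinks the recorded
// piece files directly, so no per-piece move ever happens.
func cleanup(db *badger.DB, blobsDir string) error {
	return db.View(func(txn *badger.Txn) error {
		opts := badger.DefaultIteratorOptions
		opts.PrefetchValues = false // keys-only walk
		it := txn.NewIterator(opts)
		defer it.Close()
		for it.Rewind(); it.Valid(); it.Next() {
			path := filepath.Join(blobsDir, string(it.Item().Key()))
			if err := os.Remove(path); err != nil && !os.IsNotExist(err) {
				return err
			}
		}
		return nil
	})
}

func main() {
	// One store per GC run; cancelling the run would just mean deleting this directory.
	db, err := badger.Open(badger.DefaultOptions("/tmp/gc-run-example"))
	if err != nil {
		panic(err)
	}
	defer db.Close()

	if err := markForDeletion(db, "satellite-a", "piece-123"); err != nil {
		panic(err)
	}
	if err := cleanup(db, "/srv/storagenode/blobs"); err != nil {
		fmt.Println("cleanup failed:", err)
	}
}
```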

5 posts were split to a new topic: I am running my node v1.108.3 in docker and want to configure the disk space by config.yaml

Please note, this version has a database migration, and it could take a few hours (even on an SSD). Please do not kill your node during the process, otherwise you may end up with a malformed database.

And the result of killing the “hanging” node could look like this:

I can confirm that the new version did crash my Windows node, but it went flawlessly on my Linux node.

2024-07-25T04:35:07Z	ERROR	failure during run	{"Process": "storagenode", "error": "Error migrating tables for database on storagenode: migrate: database: piece_expiration opening file \"dbs/piece_expiration.db\" failed: unable to open database file: no such file or directory\n\tstorj.io/storj/storagenode/storagenodedb.(*DB).openDatabase:402\n\tstorj.io/storj/storagenode/storagenodedb.(*DB).Migration.func5:1151\n\tstorj.io/storj/private/migrate.(*Migration).Run:167\n\tstorj.io/storj/storagenode/storagenodedb.(*DB).MigrateToLatest:425\n\tmain.cmdRun:100\n\tmain.newRunCmd.func1:33\n\tstorj.io/common/process.cleanup.func1.4:393\n\tstorj.io/common/process.cleanup.func1:411\n\tgithub.com/spf13/cobra.(*Command).execute:983\n\tgithub.com/spf13/cobra.(*Command).ExecuteC:1115\n\tgithub.com/spf13/cobra.(*Command).Execute:1039\n\tstorj.io/common/process.ExecWithCustomOptions:112\n\tmain.main:34\n\truntime.main:271", "errorVerbose": "Error migrating tables for database on storagenode: migrate: database: piece_expiration opening file \"dbs/piece_expiration.db\" failed: unable to open database file: no such file or directory\n\tstorj.io/storj/storagenode/storagenodedb.(*DB).openDatabase:402\n\tstorj.io/storj/storagenode/storagenodedb.(*DB).Migration.func5:1151\n\tstorj.io/storj/private/migrate.(*Migration).Run:167\n\tstorj.io/storj/storagenode/storagenodedb.(*DB).MigrateToLatest:425\n\tmain.cmdRun:100\n\tmain.newRunCmd.func1:33\n\tstorj.io/common/process.cleanup.func1.4:393\n\tstorj.io/common/process.cleanup.func1:411\n\tgithub.com/spf13/cobra.(*Command).execute:983\n\tgithub.com/spf13/cobra.(*Command).ExecuteC:1115\n\tgithub.com/spf13/cobra.(*Command).Execute:1039\n\tstorj.io/common/process.ExecWithCustomOptions:112\n\tmain.main:34\n\truntime.main:271\n\tmain.cmdRun:102\n\tmain.newRunCmd.func1:33\n\tstorj.io/common/process.cleanup.func1.4:393\n\tstorj.io/common/process.cleanup.func1:411\n\tgithub.com/spf13/cobra.(*Command).execute:983\n\tgithub.com/spf13/cobra.(*Command).ExecuteC:1115\n\tgithub.com/spf13/cobra.(*Command).Execute:1039\n\tstorj.io/common/process.ExecWithCustomOptions:112\n\tmain.main:34\n\truntime.main:271"}
2024-07-25T04:35:07Z	FATAL	Unrecoverable error	{"Process": "storagenode", "error": "Error migrating tables for database on storagenode: migrate: database: piece_expiration opening file \"dbs/piece_expiration.db\" failed: unable to open database file: no such file or directory\n\tstorj.io/storj/storagenode/storagenodedb.(*DB).openDatabase:402\n\tstorj.io/storj/storagenode/storagenodedb.(*DB).Migration.func5:1151\n\tstorj.io/storj/private/migrate.(*Migration).Run:167\n\tstorj.io/storj/storagenode/storagenodedb.(*DB).MigrateToLatest:425\n\tmain.cmdRun:100\n\tmain.newRunCmd.func1:33\n\tstorj.io/common/process.cleanup.func1.4:393\n\tstorj.io/common/process.cleanup.func1:411\n\tgithub.com/spf13/cobra.(*Command).execute:983\n\tgithub.com/spf13/cobra.(*Command).ExecuteC:1115\n\tgithub.com/spf13/cobra.(*Command).Execute:1039\n\tstorj.io/common/process.ExecWithCustomOptions:112\n\tmain.main:34\n\truntime.main:271", "errorVerbose": "Error migrating tables for database on storagenode: migrate: database: piece_expiration opening file \"dbs/piece_expiration.db\" failed: unable to open database file: no such file or directory\n\tstorj.io/storj/storagenode/storagenodedb.(*DB).openDatabase:402\n\tstorj.io/storj/storagenode/storagenodedb.(*DB).Migration.func5:1151\n\tstorj.io/storj/private/migrate.(*Migration).Run:167\n\tstorj.io/storj/storagenode/storagenodedb.(*DB).MigrateToLatest:425\n\tmain.cmdRun:100\n\tmain.newRunCmd.func1:33\n\tstorj.io/common/process.cleanup.func1.4:393\n\tstorj.io/common/process.cleanup.func1:411\n\tgithub.com/spf13/cobra.(*Command).execute:983\n\tgithub.com/spf13/cobra.(*Command).ExecuteC:1115\n\tgithub.com/spf13/cobra.(*Command).Execute:1039\n\tstorj.io/common/process.ExecWithCustomOptions:112\n\tmain.main:34\n\truntime.main:271\n\tmain.cmdRun:102\n\tmain.newRunCmd.func1:33\n\tstorj.io/common/process.cleanup.func1.4:393\n\tstorj.io/common/process.cleanup.func1:411\n\tgithub.com/spf13/cobra.(*Command).execute:983\n\tgithub.com/spf13/cobra.(*Command).ExecuteC:1115\n\tgithub.com/spf13/cobra.(*Command).Execute:1039\n\tstorj.io/common/process.ExecWithCustomOptions:112\n\tmain.main:34\n\truntime.main:271"}

It turned out the piece_expiration.db was malformed. It was fixed and the node was started again, leading to :point_down:

2024-07-25T07:27:14Z ERROR failure during run   { "error": "Error migrating tables for database on storagenode: migrate: v59: no such index: idx_piece_expirations_deletion_failed_at\n\tstorj.io/storj/private/migrate.(*Migration).Run:212\n\tstorj.io/storj/storagenode/storagenodedb.(*DB).MigrateToLatest:425\n\tmain.cmdRun:100\n\tmain.newRunCmd.func1:33\n\tstorj.io/common/process.cleanup.func1.4:393\n\tstorj.io/common/process.cleanup.func1:411\n\tgithub.com/spf13/cobra.(*Command).execute:983\n\tgithub.com/spf13/cobra.(*Command).ExecuteC:1115\n\tgithub.com/spf13/cobra.(*Command).Execute:1039\n\tstorj.io/common/process.ExecWithCustomOptions:112\n\tmain.main:34\n\truntime.main:271", "errorVerbose": "Error migrating tables for database on storagenode: migrate: v59: no such index: idx_piece_expirations_deletion_failed_at\n\tstorj.io/storj/private/migrate.(*Migration).Run:212\n\tstorj.io/storj/storagenode/storagenodedb.(*DB).MigrateToLatest:425\n\tmain.cmdRun:100\n\tmain.newRunCmd.func1:33\n\tstorj.io/common/process.cleanup.func1.4:393\n\tstorj.io/common/process.cleanup.func1:411\n\tgithub.com/spf13/cobra.(*Command).execute:983\n\tgithub.com/spf13/cobra.(*Command).ExecuteC:1115\n\tgithub.com/spf13/cobra.(*Command).Execute:1039\n\tstorj.io/common/process.ExecWithCustomOptions:112\n\tmain.main:34\n\truntime.main:271\n\tmain.cmdRun:102\n\tmain.newRunCmd.func1:33\n\tstorj.io/common/process.cleanup.func1.4:393\n\tstorj.io/common/process.cleanup.func1:411\n\tgithub.com/spf13/cobra.(*Command).execute:983\n\tgithub.com/spf13/cobra.(*Command).ExecuteC:1115\n\tgithub.com/spf13/cobra.(*Command).Execute:1039\n\tstorj.io/common/process.ExecWithCustomOptions:112\n\tmain.main:34\n\truntime.main:271"}

Why isn’t it done atomically? The node does not need to crash to stop the process; it can be restarted by the updater. What will happen then?

It shouldn’t be restarted by the updater. But I guess the normal termination with a 300-second timeout should allow it to finish gracefully, or at least to roll back after the restart. The migration is usually done in a transaction. However, I believe it’s a fragile process anyway.
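
For illustration, a minimal sketch of why a migration step wrapped in a single transaction can be interrupted without leaving the schema half-migrated. This is not the actual storagenode migration code, and the table and column names are made up:

```go
package main

import (
	"database/sql"
	"log"

	_ "github.com/mattn/go-sqlite3"
)

// migrateStep runs one schema change inside a single transaction. If the
// process is killed before Commit, SQLite discards the whole step, so the
// database stays at the previous schema version instead of ending up
// half-migrated.
func migrateStep(db *sql.DB) error {
	tx, err := db.Begin()
	if err != nil {
		return err
	}
	defer tx.Rollback() // no-op if Commit already succeeded

	stmts := []string{
		`CREATE TABLE expirations_new (piece_id BLOB NOT NULL, expires_at TIMESTAMP NOT NULL)`,
		`INSERT INTO expirations_new SELECT piece_id, expires_at FROM expirations`,
		`DROP TABLE expirations`,
		`ALTER TABLE expirations_new RENAME TO expirations`,
	}
	for _, stmt := range stmts {
		if _, err := tx.Exec(stmt); err != nil {
			return err
		}
	}
	return tx.Commit()
}

func main() {
	db, err := sql.Open("sqlite3", ":memory:")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	// Set up the "old" schema so the migration step has something to migrate.
	if _, err := db.Exec(`CREATE TABLE expirations (piece_id BLOB NOT NULL, expires_at TIMESTAMP NOT NULL)`); err != nil {
		log.Fatal(err)
	}
	if err := migrateStep(db); err != nil {
		log.Fatal(err)
	}
	log.Println("migration step committed")
}
```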

Looks like 1.108 hit 100% last night… so everyone should have deletions accounted for properly now?

Can someone please summarize the current situation regarding used-space?
So:

  • if I’m on 1.108.3 and the used-space filewalker has run fully with this version, can I disable the start-up used-space scan?
  • Will the TTL deletions update the used-space?
  • Will the trash/GC deletions update the used-space?

I set storage2.max-concurrent-requests: 5 in order to finish the filewalkers quicker, but as these are done now, I would like to raise it to get more ingress (but of course without a full re-scan after every change).
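
For reference, the two settings being discussed, as a config.yaml fragment (storage2.piece-scan-on-startup is the option that controls the start-up used-space scan; double-check the option names against your own config.yaml before relying on this):

```yaml
# limit concurrent uploads; raise (or remove) this once the filewalkers are done
storage2.max-concurrent-requests: 5

# set to false to skip the used-space filewalker on start-up
storage2.piece-scan-on-startup: true
```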

From all I’ve read, 1.108 is supposed to update the node DB after any deletion. But I think the only time a node checks what its real usage is… is the startup used-space filewalker. If that never runs, then it’s using the usage from the DB, which is constantly being tweaked by ingress and deletions.

If those ingress and deletion updates are perfect, including failed uploads and maybe the node getting shut down during trash/GC processing, then you shouldn’t need the startup scan. But I think a prudent SNO would let it run every few weeks (or after an upgrade) to ensure real used space and DB-tracked used space never drift too far apart.
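
A toy sketch of the relationship described above (nothing like the real storagenode accounting code): a running counter adjusted on every ingress/delete, plus an occasional full scan that resets it to the true on-disk value.

```go
package main

import "fmt"

type usedSpace struct {
	bytes int64 // value persisted in the DB between restarts
}

func (u *usedSpace) pieceStored(size int64)  { u.bytes += size } // successful upload
func (u *usedSpace) pieceDeleted(size int64) { u.bytes -= size } // TTL/GC/trash delete

// reconcile replaces the tracked value with what a full used-space
// filewalker actually measured, wiping out any accumulated drift.
func (u *usedSpace) reconcile(measured int64) (drift int64) {
	drift = u.bytes - measured
	u.bytes = measured
	return drift
}

func main() {
	u := &usedSpace{bytes: 11_000_000_000}
	u.pieceStored(2_319_872)
	u.pieceDeleted(181_504)
	fmt.Println("tracked:", u.bytes)
	fmt.Println("drift fixed by scan:", u.reconcile(10_950_000_000))
}
```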

Yes.

Yes.

Yes and No. The next fix you will need is storagenode/pieces: run cache persistence loop concurrently with the … · storj/storj@74e3afe · GitHub

I believe that without that fix the numbers on the dashboard will keep getting more incorrect the longer the used-space filewalker runs. I am not running the used-space filewalker on my nodes, so I can’t verify that. Furthermore, I believe that the moment the used-space filewalker finishes, the offset should stay the same and not get more incorrect over time. So you could still disable the used-space filewalker and accept the offset until v1.109 is out.

Edit: The follow-up question would be why run the used-space filewalker now at all. Sure, it will reduce the offset that the TTL deletes caused, but if you have enough free space you could just as well accept even that larger offset and wait for v1.109, with no used-space filewalker run now.

Thank you, everything is clear now.

I had to run the used-space filewalker together with the 1.108 update, as my nodes were full according to the node, but in reality half of the HDD was empty (due to TTL deletions).
