Release preparation v1.102

A new release candidate is already deployed on the QA Satellite

Changelog

General

  • 13ce43a jenkins: Upgrade Jenkins CRDB version from v23.1.12 to v23.2.2
  • 51f7ec7 shared/lrucache: import lrucache from storj.io/common
  • 33e4603 Jenkinsfile.verify: drop Build ‘go’ steps
  • 2e7fc7a private/mud: support registering multiple implementations
  • af965b7 go.mod: bump dependencies (uplink/common)
  • 4bf56a4 release v1.102.1-rc

Satellite

  • a0a7ed0 satellite/{web, analytics}: add ‘partnership with Storj’ selection to onboarding
  • c9c7162 web/satellite: use Enter key to Continue in account onboarding steps
  • 78991b2 web/satellite: confirm versioning toggling
  • dab2464 satellite/console: use correct link for zkSync
  • 491175c web/satellite: update upload text and UX
  • 2802708 web/satellite: Show video thumbnail in gallery view
  • b8ec98e satellite/console: add plausible to csp config
  • 8f2a8d0 satellite/metabase: improve listing queries
  • 7b098cf satellite/metabase: test prefix without / suffix
  • 511e840 web/satellite: improved date consistency
  • ada255f web/satellite: Remove free tier notification text
  • 928c902 satellite/console: Update STORJ token upgrade logic
  • 858a576 web/satellite: ui updates
  • 4a5964a satellite/metabase: remove rarely used test util methods
  • 6943fec satellite/{console,db}: add versioning prompt column to projects
  • e4191b6 satellite/metainfo: return permission denied when limits are 0
  • 2f600f9 satellite/repair: cleanup tests helper methods
  • c460d8e satellite/repair: fix segment repairer override configuration
  • 96317db web/satellite: Update member invite hint to be more accurate
  • 816a386 satellite/metabase: introduce a new adapter interface for datasource specific SQL queries
  • 8415273 satellite/metabase: reorganize adapter structure
  • 8beca4d satellite/{console, emails}: updated free trial emails
  • 8ed604a satellite/console: add versioning opt-in endpoint
  • d601509 satellite/metainfo: account for versioning opt in
  • 6594781 satellite/metainfo: remove deprecated ‘Version’ fields
  • 09b171f web/satellite: add versioning opt in dialog
  • 8cc6a9a satellite/{db,accountfreeze}: optimize free trial freeze
  • 89d92f4 satellite/metabase: add more tests for ListObjects
  • d01b20e web/satellite: Hide link for STORJ bonus
  • 0757966 web/satellite: remove misleading 25GB free text
  • daf8f74 web/satellite: update STORJ logos
  • 6073ca7 satellite/metabase: add TestingBatchInsertSegments
  • 95d8a25 satellite/db: add index to trial_expiration on users table
  • 358d06f satellite/{console,web}: allow var users to setup stripe account

Storagenode

  • 8a7b305 storagenode/retain: add more logging to GC filewalker
  • 0f90f06 storagenode/{pieces,blobstore}: save-state-resume feature for GC filewalker
  • 780df77 storagenode/pieces/lazyfilewalker: more logging to GC filewalker
  • 047554d storagenode/pieces/lazyfilewalker: fix gc-filewalker

Test

  • 139e061 testsuite: drop full table scan detection
  • 5b85140 private/mud/mudtest: helper to test any objects with dependencies

What happens if our node was having issues completing a GC run and then a new bloom filter came in - will it start from scratch on the new bloom filter, or continue with the old one?
What if it was working on an old one, got a new one, and then crashed? Will it resume from the new one or the old one?

Just curious about these edge cases, since it seems like a good number of people are having trouble with GC.


save-state-resume feature for GC filewalker
save-state = I managed to complete up to this point, so I'd better save this point.
resume = I just restarted; where did I leave off? Oh, luckily I saved this point earlier.
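
In other words, something like this minimal Go sketch. Everything here (the Progress type, the state-file path, the function names) is an illustrative guess, not the actual storagenode code:

```go
package main

import (
	"encoding/json"
	"errors"
	"fmt"
	"os"
)

// Progress is a hypothetical checkpoint: which bloom filter was being
// processed and the last piece prefix that was fully handled.
type Progress struct {
	BloomFilterID string `json:"bloom_filter_id"`
	LastPrefix    string `json:"last_prefix"`
}

const stateFile = "gc-filewalker.state" // illustrative path

// saveProgress persists the checkpoint after each completed step
// ("I managed to complete up to this point, so I'd better save it").
func saveProgress(p Progress) error {
	data, err := json.Marshal(p)
	if err != nil {
		return err
	}
	return os.WriteFile(stateFile, data, 0o644)
}

// loadProgress runs on startup ("where did I leave off?").
// A missing state file simply means there is nothing to resume.
func loadProgress() (Progress, bool, error) {
	data, err := os.ReadFile(stateFile)
	if errors.Is(err, os.ErrNotExist) {
		return Progress{}, false, nil
	}
	if err != nil {
		return Progress{}, false, err
	}
	var p Progress
	if err := json.Unmarshal(data, &p); err != nil {
		return Progress{}, false, err
	}
	return p, true, nil
}

func main() {
	if p, ok, err := loadProgress(); err == nil && ok {
		fmt.Printf("resuming filter %s after prefix %s\n", p.BloomFilterID, p.LastPrefix)
	} else {
		fmt.Println("no saved state, starting from scratch")
	}
	// ... walk the pieces, calling saveProgress after each prefix ...
}
```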

I think the next one will overwrite the incomplete one, and the node will start from scratch.
This also means that your setup is unable to finish it within a week.

I understand that, but my question is a bit more nuanced than that. See my next comment.

Just to confirm:

  1. If a node is currently processing a bloom filter and receives a new one, will it stop the current one and start the new one?
  2. If not, then if a node receives a new bloom filter while working on an old filter and crashes, will it continue (technically, start over) from the new bloom filter?

Since all the blooms are random (i.e. they don't always match the exact same files), and they are saved on disk for exactly the situation where they don't complete, I would assume the node would work through them one at a time until they are all done.

Edit: I don't know if the retain concurrency comes into play (i.e. running multiple blooms at the same time).

I was wrong. It will complete the current bloom filter and start the new one once it's done. But if the node restarts before it could complete the current one, the new one will be started from scratch.
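
A toy Go model of that behavior, assuming the summary above is accurate (the retainQueue type and its methods are hypothetical, just to make the scheduling rule concrete, not the real retain service):

```go
package main

import "fmt"

// retainQueue models the behavior described above: a new bloom filter
// received mid-run waits until the current one finishes, but a restart
// discards partial progress in favor of the newest filter.
type retainQueue struct {
	current string   // filter being processed, "" if idle
	pending []string // filters waiting their turn
}

// receive queues a new bloom filter; it never interrupts the current run.
func (q *retainQueue) receive(filter string) {
	if q.current == "" {
		q.current = filter
		return
	}
	q.pending = append(q.pending, filter)
}

// finish marks the current run complete and starts the next pending one.
func (q *retainQueue) finish() {
	q.current = ""
	if len(q.pending) > 0 {
		q.current = q.pending[0]
		q.pending = q.pending[1:]
	}
}

// restart models a crash before the current run completed: if a newer
// filter arrived in the meantime, it wins and starts from scratch;
// otherwise the save-state feature would let the current run resume.
func (q *retainQueue) restart() {
	if len(q.pending) > 0 {
		q.current = q.pending[len(q.pending)-1]
		q.pending = nil
	}
}

func main() {
	q := &retainQueue{}
	q.receive("bf-week1")                  // starts immediately
	q.receive("bf-week2")                  // queued behind bf-week1
	q.restart()                            // crash mid-run: bf-week2 wins
	fmt.Println("processing:", q.current)  // processing: bf-week2
}
```

So a filter received mid-run never interrupts the current walk; only a restart lets the newest filter jump the queue, and it then begins from the start.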
