Release preparation v1.110

The new release candidate v1.110 is already deployed on the QA Satellite.

Changelog

General

  • 58145be cmd/tools/piecestore-benchmark: implement cancel method on stream
  • 043ff69 all: use pb funcs instead of proto
  • 99a9962 ci: add spanner to main branch tests
  • 1fef010 Jenkinsfile: disable signup activation for tests
  • 8c8c320 Jenkinsfile.premerge: optimize execution time
  • 2839660 Jenkinsfile: one more fix for Rolling Upgrade Test
  • 2bf9ead build: ignore spanner for benchmark tests
  • ba790fc cmd/tools/filewalker-benchmark: add steps to benchmark
  • 51f8319 Jenkinsfile.public: fix backwards compatibility tests (#7079)
  • 742fd73 release v1.110.0-rc

Satellite

  • 341d30c satellite/audit: Delete unused configuration field
  • 1a4fb8a satellite/satellitedb/dbx: regenerate
  • 5082a48 web/satellite: replace mdi icons with lucide icons
  • 82fd5c8 satellite/satellitedb: Enable Spanner testing for getting a batch of peer identities
  • 8e9172a web/satellite: fix hidden text field labels
  • 89945ec web/satellite: change date range component
  • 2932249 web/satellite: conditionally hide encryption card
  • fc6917f satellite/metabase: on Spanner put application name as a session label
  • 085e85e satellite/console: added a feature flag for domains page
  • aa21e4b satellite/{web,console}: bump paid tier limits
  • 4e6ddee web/satellite: Fix forgot password typo
  • 3b877e2 web/satellite: added HTML templates for Domains page
  • 3bb01b3 satellite: fix tags of ‘checkin’ eventkit event
  • a63a50a web/satellite: fix disabled date range picker
  • d4e20ff satellite/satellitedb: fix Spanner migration using batch ddl migration
  • 3bd94df satellite/metabase: prevent CommitObject from deleting locked objects
  • d34d000 web/satellite: restore previous object version
  • 59d8795 satellite/metabase: prevent committing objects with retention and TTL
  • 404eede satellite/metainfo: prevent CommitObject from deleting locked objects
  • d3687ae satellite/console: Update “signup activation code” default
  • a39dcc6 satellite/metabase: allow setting retention via CommitInlineObject
  • 47ef5a2 satellite/metabase: prevent CommitInlineObject from deleting locked objects
  • 9566352 satellite/metainfo: allow setting retention via CommitInlineObject
  • 697d42d satellite/metabase: add methods for retrieving an object’s retention
  • 8b607f4 satellite/metainfo: add GetObjectRetention endpoint
  • ac9e4a6 web/satellite: update usage graphs
  • eae718d satellite/{web,console,payments}: handle stripe issues
  • a0a8582 web/satellite: add logic for new domain flow
  • 2c8a970 satellite/repair/checker: report on number of segments needing repair due to forcing (such as out of placement)
  • 41f62d4 satellite/overlay: remove unused SelectStorageNodes from overlay.DB
  • b290980 satellite/metabase: add methods for setting an object’s retention
  • f9d9cd8 satellite/metainfo: add SetObjectRetention endpoint
  • 4722af4 satellite/metabase: make GetObject methods return retention
  • 6a2df9e satellite/metainfo: allow ValidateAuthAny to receive optional perms
  • 687f6c0 satellite/metabase: make GetObject, DownloadObject return retention
  • d93565f satellite/metabase: remove DeleteObjectsAllVersions method
  • 2559e8d satellite/metabase: make DeleteObjectLastCommittedSuspended respect retention
  • b5de5a0 satellite/metabase: make DeleteObjectLastCommittedPlain respect retention
  • 173bf5d satellite/metabase: adjust copying objects for Object Lock
  • 88f7463 satellite/metabase: make DeleteObjectExactVersion respect retention
  • 4e64892 satellite/metabase/metabasetest: add CreateObjectWithRetention method
  • c9cb711 satellite/metainfo: adjust CopyObject endpoints for Object Lock
  • ffef3bc satellite/console: add internal linksharing url to csp
  • 1129c83 web/satellite: add estimation to upload duration label
  • 3f8808a web/satellite: classify aif files as audio
  • c6cbc7d web/satellite: add input to specify other use case
  • 09df2da satellite/metainfo: make DeleteObject respect retention
  • 012f83c web/satellite: Fix onboarding use case options for personal
  • 6e98aa0 web/satellite: add final step to create bucket dialog
  • 77d97b1 satellite/metainfo: allow object lock if versioning is enabled
  • aa0a342 satellite/console: add project config for object lock
  • 6b8c95d satellite/admin: Change HTTP method set project geofence
  • 6a88d12 satellite/admin: Document & Add to UI project geofence
  • 9a32d51 satellite/satellitedb: support Spanner with revocation query
  • faf377b satellite/metabase: create metabase schema for testplanet based tests
  • 2899f0a satellite/{console, web}: add endpoint to check DNS records
  • 2d6c5b1 satellite/console: create object lock version api keys
  • 36e722d satellite/metainfo: ensure nonzero version in object retention tests
  • ad269b0 satellite/{console, web}: feature flag for active sessions view
  • 19e0e7c satellite/{web, console}: alternative object browser pagination
  • 58418e4 satellite/payments/storjscan/chore: add debug log for new payments
  • bd6c421 web/satellite: design updates
  • 38f5bcb satellite/durability: histogram based durability report

Storagenode

  • 7e6d3ed storagenode/blobstore/filestore: optimize refToTrashPath and refToDirPath
  • 32b90fb storagenode/pieces: update used-space cache after individual satellite scan
  • 48320ae storagenode/pieces: try trash only one storage format
  • ab94a72 storagenode/blobstore/filestore: create trash dir only if doesn’t exist
  • ce49e55 storagenode/{pieces,storagenodedb}: ensure used space total excludes removed satellites
  • b8c9925 storagenode/{collector,pieces}: batch up collection of expired pieces
  • 127a19f storagenode/pieces/lazyfilewalker: batch trash piece requests from lazywalker
  • 83ccc6c storagenode/storagenodedb: optimize monkit on DeleteExpirationsBatch
  • ef7e03e storagenode/pieces: remove monkit which affects performance

Test

  • 985c06d testsuite/playwright-ui: Fix flakiness in upload/download

Welcome back @Andrii :slight_smile: We have missed you.

Excited about :point_up:


Hello @Andrii, long time no see. Excited about this one :slight_smile:

Does this update solve the space usage discrepancy?
There is a large mismatch in reported capacity (14 TB -> 6 TB). A version update happened while I was still investigating the mismatch, and the inconsistent state persists (1.108 → 1.109 was updated in less than a week).

Thank you to the engineers who work hard for STORJ :slight_smile:


Partially, yes. To my knowledge, v1.109 should ensure that a full run of the used-space filewalker brings the numbers on the dashboard in line with the space used on disk. I see some additional improvements in v1.110, but for most nodes v1.109 should already do the trick.

There still seems to be an unknown issue that makes pieces stay on disk much longer than they should. That one is still unexplained and is not going to get fixed in v1.110, and most likely not in v1.111 either, because sprint cycles are short and we are talking about a bit over one week until the next release gets prepared.
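For context, a used-space scan of this kind conceptually boils down to walking the node's blobs directory and summing file sizes per satellite folder, which is the total the dashboard should end up agreeing with. Below is a minimal Go sketch of that idea, assuming a blobs/<satellite>/... layout and a made-up path; it is not the storagenode's actual filewalker code.

```go
package main

import (
	"fmt"
	"io/fs"
	"os"
	"path/filepath"
	"strings"
)

func main() {
	root := "/path/to/storage/blobs" // hypothetical storage location
	perSatellite := map[string]int64{}

	// Walk every file below blobs/ and attribute its size to the satellite
	// folder it belongs to.
	err := filepath.WalkDir(root, func(path string, d fs.DirEntry, walkErr error) error {
		if walkErr != nil || d.IsDir() {
			return walkErr
		}
		info, err := d.Info()
		if err != nil {
			return err
		}
		rel, err := filepath.Rel(root, path)
		if err != nil {
			return err
		}
		// The first path element under blobs/ identifies the satellite folder
		// (an assumption made for this illustration).
		sat := strings.Split(filepath.ToSlash(rel), "/")[0]
		perSatellite[sat] += info.Size()
		return nil
	})
	if err != nil {
		fmt.Fprintln(os.Stderr, "scan failed:", err)
		os.Exit(1)
	}

	var total int64
	for sat, size := range perSatellite {
		total += size
		fmt.Printf("%s: %d bytes\n", sat, size)
	}
	fmt.Printf("total used: %d bytes\n", total)
}
```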


Are you aware of this issue: 1.109.2 apparently not updating trash stats · Issue #7077 · storj/storj (github.com), where a user reports that it is still a problem in 1.110.0-rc?


Just a general question: how long does it take from when these release preparation posts go up on the forum until the first non-QA nodes start getting updates? :slight_smile:

No, I was on vacation and missed that bug. I have forwarded it to the team, and it looks like we can fix it in v1.111.

The release cut is next week, Wednesday evening or the next morning. Later on Thursday it usually gets deployed on the QA satellite and storage nodes. On Thursday and Friday we run tests on the QA satellite and the nodes. If everything works out, the deployment starts on the Monday of the following week. Sometimes we have to postpone it by 1 or 2 days if there are code changes we don't feel comfortable with and want to run additional tests on, or if we find issues and have to wait for the cherry picks.

This is just the satellite deployment. Once that is done, the storage node rollout might or might not start. Some releases contain great improvements, and we try to start the storage node rollout almost side by side with the satellite deployment. Other releases have almost no changes and the rollout gets a lower priority. At best, the rollout still takes about a week to go from 0 to 100%. And again, if something goes wrong, we halt the rollout, wait for the fix, and continue the rollout a few days later.

Edit: This cycle repeats every 2 weeks. It might get shifted by a week, or we skip a release around Christmas or in case of other sprint-interrupting events like an all-company week.


Thank you for the explanation, and I’m glad you went on a well-deserved holiday.

Kind regards.

I think this will soon affect performance overall. I already see it hitting my nodes, as a lot of them show as full.

@littleskunk
I have a 0.9 TB node. It has 0.93 TB on disk and it shows as full.


In reality there are 118 GB of trash files.
I restarted and did a full run of the used-space filewalker with the badger cache.
All 4 filewalkers completed successfully.
Result:

This means that even the filewalker does not count trash.


Did you fix the trash accounting in 1.110.2?
I see that my total used space has started to slowly go down and trash has started to slowly rise. That is the total across all my nodes.




These 2 pictures show me a different story: before the update to 1.110.2 the amount of trash did not rise at all, it only went down; now it is rising, so something has changed.

Do you have the startup filewalker enabled?
Because if yes, and it is running on the nodes after the restart/update, then it will count the pieces that were moved to the trash before - the pieces that were not counted as trash during the GC because of that bug.
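Conceptually, such a recount on restart amounts to listing the per-satellite, per-date folders under trash/ and summing the file sizes inside each, so trash that was moved earlier but never added to the cached total gets counted again. Here is a compact Go sketch of that idea, with the trash/<satellite>/<date>/ layout and the path assumed purely for illustration; it is not the actual storagenode code.

```go
package main

import (
	"fmt"
	"io/fs"
	"os"
	"path/filepath"
)

// dirSize returns the total size of the regular files below dir.
func dirSize(dir string) (int64, error) {
	var total int64
	err := filepath.WalkDir(dir, func(_ string, d fs.DirEntry, err error) error {
		if err != nil || d.IsDir() {
			return err
		}
		info, err := d.Info()
		if err != nil {
			return err
		}
		total += info.Size()
		return nil
	})
	return total, err
}

func main() {
	trashRoot := "/path/to/storage/trash" // hypothetical path
	satellites, err := os.ReadDir(trashRoot)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	var grand int64
	for _, sat := range satellites {
		if !sat.IsDir() {
			continue
		}
		days, _ := os.ReadDir(filepath.Join(trashRoot, sat.Name()))
		for _, day := range days { // assumed: one folder per trash date
			size, err := dirSize(filepath.Join(trashRoot, sat.Name(), day.Name()))
			if err != nil {
				continue
			}
			fmt.Printf("%s/%s: %d bytes\n", sat.Name(), day.Name(), size)
			grand += size
		}
	}
	fmt.Printf("trash total to report: %d bytes\n", grand)
}
```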

v1.110.3 is now the suggested rollout, with v1.109.2 as the minimum.

v1.110.3 should fix the garbage collection issue of not updating trash.

  • c7d1fb1 storagenode/pieces: update trash size on retain

I hope so.
Big thanks to the Storj team.
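To illustrate what a fix along the lines of "update trash size on retain" means for the accounting, here is a toy Go sketch (not Storj code): the node keeps cached used/trash totals, retain moves a piece to trash, and the gist of the bug was that the cached trash total was not increased at that moment, so the dashboard only caught up after a full recount. The struct and field names here are invented for the example.

```go
package main

import "fmt"

// spaceCache mimics the node's cached used/trash totals that feed the dashboard.
type spaceCache struct {
	usedBytes  int64
	trashBytes int64
}

// moveToTrash simulates the retain (garbage collection) step trashing one piece.
// With updateTrash set to false (the buggy behaviour), the cached trash total is
// never increased, so the dashboard undercounts trash until a full recount.
func (c *spaceCache) moveToTrash(pieceSize int64, updateTrash bool) {
	c.usedBytes -= pieceSize
	if updateTrash {
		c.trashBytes += pieceSize // roughly what "update trash size on retain" adds
	}
}

func main() {
	buggy := spaceCache{usedBytes: 1_000_000}
	buggy.moveToTrash(100_000, false)
	fmt.Printf("buggy: used=%d trash=%d\n", buggy.usedBytes, buggy.trashBytes)

	fixed := spaceCache{usedBytes: 1_000_000}
	fixed.moveToTrash(100_000, true)
	fmt.Printf("fixed: used=%d trash=%d\n", fixed.usedBytes, fixed.trashBytes)
}
```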

Not that the full used-space filewalker doesn't fix and sync things once it's done, but can it finally be run without worry? That is the question.

Does the current version finally remember its progress, or does it forget everything on interruption? That is the question.

(Because many nodes surely won't be able to finish a full run in one take, so is it worth running it at all, or do we need to wait for future developments?)


Doesn't look like it. https://review.dev.storj.io/c/storj/storj/+/12806?tab=comments