Release preparation v1.107

A new release candidate is already deployed on the QA Satellite and ready for testing.



General:

  • 5e6a849 private/mud: handle case where an interface type is required
  • bad7139 .editorconfig: add basic indentation style
  • b59ef96 private/lifecycle: condense stack traces for slow shutdown
  • 7fb20bb shared/tagsql: remove sql related dependencies from tagsql.DB
  • d0017e8 release v1.107.0-rc


Multinode:

  • 36dcbdb multinode/multinodedb/dbx: make indentation consistent


Satellite:

  • 0e6814f satellite/metabase: enable several more tests for spanner
  • b74db21 satellite/metabase: DeleteObjectExactVersion for spanner
  • dd17ca2 satellite/metabase: enable tests using DeleteObjectExactVersion
  • 84a900a satellite/metabase: DeletePendingObject for spanner
  • 8639483 satellite/metabase: DeleteObjectsAllVersions for spanner
  • 8a566fc satellite/metabase: DeleteObjectLastCommittedPlain for spanner
  • 345df9d satellite/metabase: DeleteObjectLastCommittedSuspended for spanner
  • 6159fab satellite/metabase: precommitDeleteUnversionedWithNonPending for spanner
  • 628fdb0 satellite/metabase: DeleteObjectLastCommittedVersioned for spanner
  • e3df75d satellite/metabase: enable remaining tests in delete_test.go
  • 25a66b7 satellite/metabase: un-skip all spanner tests needing deletion
  • 76a394f satellite/metabase: FindExpiredObjects for spanner
  • 4d4eefe satellite/metabase: DeleteObjectsAndSegments for spanner
  • ae83e72 satellite/metabase: FindZombieObjects for spanner
  • ab64a0c satellite/metabase: DeleteInactiveObjectsAndSegments for spanner
  • db07906 satellite/metabase: TestingBatchInsertObjects for spanner
  • 9c2bad8 satellite/nodeselection: filterbest selector should be usable multiple times
  • e77aad3 satellite/metabase: enable TestDeleteZombieObjects for spanner
  • b15b0ec satellite/metabase: ListObjects for spanner
  • e71c2f9 satellite/metabase: enable object-listing tests for spanner
  • d785739 satellite/metabase: ListSegments for spanner
  • e8c6e92 satellite/metabase: ListStreamPositions for spanner
  • 471943d satellite/metabase: UpdateSegmentPieces for spanner
  • 8a4b052 satellite/metabase: tagsql.Rows-compatible wrapper for spanner.RowIterator
  • dbde43b satellite/metabase: add new precommit benchmark and commit test
  • ada4e83 satellite/metabase: doNextQueryAllVersionsWithStatus for spanner
  • 6aa103e satellite/metabase: doNextQueryAllVersionsWithStatusAscending for spanner
  • 8548543 satellite/metabase: doNextQueryPendingObjectsByKey for spanner
  • 01100ef satellite/metabase: enable iterator tests on spanner
  • 40c3ea7 satellite/metabase: finalizeInlineObjectCommit for spanner
  • 4cd8b95 satellite/metabase: enable TestCommitInlineObject for spanner
  • d04ea6f satellite/metabase: TestUpdateObjectLastCommittedMetadata for spanner
  • 6c325a0 satellite/metabase: switch back to official spanner client
  • 33e5af3 satellite/satellitedb/dbx: make indentation consistent
  • e95d4e2 satellite/metabase: enable spanner tests by default
  • 602a486 satellite/statellitedb/dbx: update satellitedb fields to not use spanner keywords
  • 568c902 satellite/metabase: add new option for precommit delete
  • 0b049fe satellite/metabase: add another option for precommit delete
  • ae5dc14 satellite/nodeselection: Add test for UnvettedSelector
  • ebf19f0 satellite/metabase: commit, replace iterator queries
  • 6536767 satellite/metabase: copy/move object, replace iterator queries
  • faf3c60 satellite/metabase: remove DeletedSegmentInfo
  • 8890777 satellite/nodeselection: Increase likelihood to select unvetted
  • aed6b85 satellite/metabase: split out CollectBucketTallies for db adapters
  • 0c69718 satellite/metabase: split out DeleteBucketObjects for db adapters
  • e5c2936 satellite/metabase: split out GetStreamPieceCountByNodeID for db adapters
  • d00513f satellite/metabase: split out ListVerifySegments for db adapters
  • 8cb5100 satellite/metabase: split out ListBucketsStreamIDs for db adapters
  • ffbe2df satellite/metabase: GetStreamPieceCountByAlias for spanner
  • b4de116 satellite/metabase: CollectBucketTallies for spanner
  • 450d290 satellite/metabase: DeleteBucketObjects for spanner
  • ec48e81 satellitedb: add db functionality for new rate limits
  • cf7002b satellite/analytics: add hubspot constants to config
  • 7c40ea2 satellite/metabase: ListBucketsStreamIDs for spanner
  • dc2dcfc satellite/metabase: use count from update in deleteSegmentsNotInCommit
  • 3bec9b1 satellite/metabase: simplify deletion queries
  • 6aea815 satellite/metabase: further simplify delete queries
  • 4a84e0b satellite/metabase: simplify delete queries
  • ce6eb42 satellite/metabase: simplify get queries
  • 7a8f34f satellite/metabase: simplify list segments
  • 24ed099 satellite/metabase: simplify precommit queries
  • b8c6f68 satellite/metabase: simplify raw queries
  • ef16cc4 satellite/metabase: simplify update segment pieces query
  • d239d01 satellite/metabase: make EnsureNodeAlias create sequential aliases
  • 0f96c40 satellite/metabase: split out Now for db adapters
  • 7bc4a89 satellite/metabase: Now for spanner
  • d2a8ce0 satellite/metabase: split out Ping for db adapters
  • 9bee7ef satellite/metabase: Ping for spanner
  • 246759f satellite/metabase: add benchmark for NodeAliasMap
  • caeba38 satellite/nodeselection: don’t use go:generate for primes
  • f3f239a satellite/metabase: optimize node alias lookup
  • 7e70239 satellite/console: add feature flag for change email flow
  • 218ba38 satellite/console: verify password and 2fa code steps for change email flow
  • 877a886 satellite/satellitedb: Add generic update limits functionality
  • 44d0b6c satellite/metainfo: Implement operation-specific rate limits
  • c7f5e6b satellite/metabase: enable TestGetStreamPieceCountByNodeID for spanner
  • acdd93a satellite/console: verify old email step for email change flow
  • 50f9458 satellite/metabase: ListVerifySegments for spanner
  • 5a1149c satellite/metabase: fix TestNodeAliases to be the same on both implementations
  • b90ef7d satellite/console: add new email step for email change flow
  • 2d00c3a satellite/console: add verify new email step for email change flow
  • 14d234c web/satellite: update ui/ux for passphrase managed projects
  • f127184 satellite/metabase: reenable TestPrecommitConstraint_Empty for spanner
  • eb72eec satellite/metabase: enable TestNodeAliasCache_DB
  • 7f4c415 satellite/{console, payments}: update Stripe email when completing email change flow
  • 0970ae6 web/satellite: initial templates for change email flow
  • c45cf48 web/satellite: add logic for change email flow
  • 903f783 satellite/console: update contact email in Segment and Hubspot on email change success
  • aa122c0 satellite/console: send email on successful email change
  • 3faa149 satellite/console: feature flag for self-serve account delete
  • 6684973 web/satellite: add limit updates warning message
  • c993197 satellite/metabase: update TODO on GetTableStats for spanner
  • da4d4ce satellite/nodeselection: add if and eq attribute selector helpers
  • 5949789 satellite/metabase: fix UpdateSegmentPieces for spanner
  • 399c13e satellite/{console,web}: bypass captcha for mfa
  • 87fae1e satellite/metabase: add UpdateTableStats() method to DB
  • e3c698a satellite/repair: log more details during repair
  • 075771c satellite/metainfo: remove propagating reduncancy schema in object responses
  • f9293ef satellite/metainfo: placement level RS parameters
  • 48af251 satellite/satellitedb: Add monkit meters verify/reverify
  • 69e2589 satellite/admin: Add functionality for changing new limits
  • 474d05f satellite,private/healthcheck: add healthcheck package
  • 1076a11 satellite/stripe: implement health check
  • fcdf07f satellite/metabase: drop NOT NULL DEFAULT 0 from objects.retention_mode
  • 04d6de9 satellite/accountfreeze: attempt payment before warning
  • 869f350 satellite/metainfo: add config flags for Object Lock features
  • 2ccf6f7 satellite/metainfo: support enabling Object Lock via CreateBucket
  • c843ae1 satellite/metainfo: add endpoint for retrieving bucket’s lock config
  • bec6ca3 satellite/metainfo: prohibit force deleting Object Lock buckets
  • 932d87d satellite/satellitedb/database.go: add spanner connection support to satellitedb database
  • cc5451e satellite/metabase: reenable some spanner tests
  • b0b4437 satellite/{console,db,web}: add versioning opt-in to project settings
  • 15e8dca satellite/console: new endpoint for self-serve account deletion
  • 3473a3e web/satellite: delete account flow implemented
  • b92bbb1 satellite/console: add new endpoint to set custom limits


Storagenode:

  • c93eca8 storagenode/storagenodedb: remove unused piece_expirations.trash
  • ea949c9 storagenode/storagenodedb: remove unused piece_expirations.deletion_failed_at
  • 6ddb46d storagenode/storagenodedb: change piece_expirations’s PK to rowid
  • 79e6f4c storagenode/orders/ordersfile: seek to the end instead of O_APPEND
  • 34e493c storagenode/collector: make collector expiration grace period configurable
  • 554b80e storagenode/pieces: fix logging when lazy file walker is disabled
  • 498d99d storagenode/piecestore: fix logged error
  • 7f0b85a storagenode/collector: delete expired pieces if expired count > 0
  • 11c5c47 storagenode/blobstore: switch to use our own FileInfo interface

The updates take too long to reach all nodes. I have many nodes still on v1.104.

One of mine seems to have updated to 105.4, interestingly enough…

I have been on 105.4 for about a month already.

Mine updated 2 days ago… I also feel it takes quite a long time. On the other hand, better safe than sorry :slight_smile:

I’m wondering what pace is set for update installs. Is there a set rate, like 10 nodes/hour to be updated, or something?
Or does the randomness of node ID generation, coupled with the cursor system, make it impossible to set a pace that precise?

I believe it’s currently manual and done in tranches, with no set rate. That makes some sense, as some updates are more dangerous than others. But it’s been really inconsistent lately.


I need to activate the startup piece scan to update my used space, because it is displayed wrong on the dashboard. I need this version to see when the filewalker starts and finishes. Just push the button and update our nodes… I don’t see the point in these delays.

Not too fast, please, because I activated my startup scans and they need days to finish! Don’t interrupt :slight_smile:


It seems that this last update brings back the correct bandwidth.

Node with 1.107

Node with 1.105


Please note that this release will not work (at all) on Windows 7 / Windows Server 2008. I just tested it myself: it crashes fatally right at startup, even before any errors can be written to the logs.
It most likely won’t work on Windows 8 / Windows Server 2012 either, which is still officially supported by Storj.
It will require Windows 10 / Server 2016 as the bare minimum to run.

This is because Storj is now built from source using a new version of the Go programming language (1.22, it seems: ci: bump Go builds to 1.22.2 · storj/storj@14cfbf3 · GitHub; the breaking point is Go >= 1.21.5), in which compatibility with many old operating systems was completely (and, it seems, quite intentionally) broken. Even the smallest and simplest programs, like a classic “Hello world!” that prints a single line of text, produce the same fatal error right after start if built with Go >= 1.21.5: runtime: Applications will not launch in Windows 7 · Issue #64622 · golang/go · GitHub

The same applies to old macOS versions; it is not limited to Windows:

As announced in the Go 1.20 release notes, Go 1.21 requires macOS 10.15 Catalina or later; support for previous versions has been discontinued.
Go 1.20 is the last release that will run on macOS 10.13 High Sierra or 10.14 Mojave. Go 1.21 will require macOS 10.15 Catalina or later.

Perhaps this will cause additional large delays in the deployment of storagenode v1.107+, because the devs will see an increased percentage of nodes crashing or going offline after updating to it. In fact, it looks like the rollout of v1.107 has already been suspended: the position of the “cursor” has not changed for several days.

Perhaps the developers have not yet figured out what the problem is and are still looking for the reasons. Well, a hint at one of the reasons is above.


Thank you so much for bringing this to our attention. The corresponding bugfix should be this one:

As soon as that gets merged, we can cherry-pick it and restart the storage node rollout. It might take a few days.

Without your report, we would have continued the rollout. It takes a larger number of crashing nodes to become visible on our dashboards.


Linux nodes seem to work. Could you select/identify them and continue the rollout?

Greetings Michael

No that is not possible.


Rollout should continue shortly. We will skip 107 and continue with 108.