Release preparation v1.120

The new release candidate is already deployed on the QA Satellite.

Changelog

General

  • edddab4 ci: bump to Go 1.23.4
  • 026325c mud/implementation: remove default optional for mud.Implementation
  • c08e9ea modular/monkit: helper to print out all monkit metrics on exit
  • b9d4b0e modular/selector: selector should fail with wrong components
  • 6810282 all: bump monkit
  • 30b94fd Jenkinsfile.public: align Tests stage with the same stage from verify
  • 3c9c763 ci: increase lint timeout and show per linter stats
  • bdd8c4f cmd/tools/convert-node-id: include filestore.PathEncoding
  • 0b0324b go.mod: bump storj.io/common dependency
  • 66850f0 go.mod: bump storj.io/common
  • 89a9578 Jenkinsfile: move build/push images stage before windows installer
  • 4ea172a Jenkinsfile.verify: add more emulator instances
  • 750b936 private/mud: move mud to shared/
  • 5523f58 release v1.120.3

Satellite

  • 8b7ff29 satellite/metabase: simplify Spanner scan item for ranged loop
  • dadcc87 satellite/metabase/rangedloop: reduce TestRangedLoop_SpannerStaleReads flakiness
  • 0688c4f satellite/satellitedb: remove spanner emulator sequence workaround
  • 6b925fe web/satellite: truncate version ID for larger screen sizes
  • ee2a863 satellite/console: prevent duplicate CunoFS beta form submissions
  • 3b58962 satellite/admin: check active projects to delete user
  • 13f7eb3 web/satellite: slight UI updates for CunoFS Beta form
  • cd71d75 satellite/satellitedb: drop nodes tables secondary indexes
  • ab43218 satellite/migration: move testdata migration helpers out from cmd/ package
  • e9457a7 satellite/orders: try fix TestUploadDownloadBandwidth
  • 7203bc7 satellite: mud definition for (selected) satellite components
  • bb0c51a satellite/rangedloop: standalone executor for ranged loop
  • 2286d17 web/satellite: UX fixes for create bucket flow
  • 75ab3d7 web/satellite: bucket object lock improvements
  • 5697692 web/satellite: show token balance on token card view
  • f4cfbd4 satellite/run: modular executor for satellite
  • 74b9ac4 satellite/audit: standalone executor for auditing specified nodes
  • 533e8f5 satellite/{overlay,repair}: rename GetNodes to GetActiveNodes
  • 70e2226 satellite/overlay: add stale read to GetActiveNodes
  • afccb34 satellite/metabase: add method for bulk deleting objects from bucket
  • 56468cd satellite: add more logging around GC
  • 5df2041 satellite/nodeselection: weighted selector should panic, even without nodes
  • 4b753b5 web/satellite: fix delete versions dialog title text
  • 5c363a6 web/satellite: fix invite link behavior when sso is enabled
  • 0c3e475 satellite/orders: remove unused flag
  • 0bc4066 satellite/orders: flag to reject SN orders
  • 47d3067 satellite/audit: pass segment info to restored_from_trash event
  • 11ef3b8 satellite/repair: fix TestObserver_PlacementCheck intermittency
  • 9c56cf8 satellite/piecelist: ranged loop observer to generate piece report for a node
  • 254960a satellite: more mud files to define dependencies
  • 208736f satellite/audit: support full piece files with the dedicated audit executor
  • d3e4f5c satellite/metainfo: ignore bucket placement if self serve is disabled
  • 7a3fe8b web/satellite: truncate version ID on all screen sizes
  • d609d46 cmd/satellite: restore-trash shouldn’t bail on a non-existent node id
  • 9f8d7f7 satellite/metainfo: add migration mode flag
  • 1efa87f web/satellite: do not fetch lock status for delete markers
  • ad053c2 satellite/gc/bloomfilter: use minimal overlay DB implementation
  • f8a856f web/satellite: fix team member names
  • c762407 satellite/metabase: remove ListObjectsWithIterator
  • 7a381f6 satellite/accounting/nodetally: save tallies in batches
  • 0d2f88c satellite/payments: add functionality to update credit card
  • 3beb47b satellite/metabase: implement IsLatest
  • 79bf6e5 satellite/metainfo: propagate IsLatest to responses
  • 7cffc5f satellite/metabase/rangedloop: speed up slow test
  • 02c2979 satellite/metainfo: switch to ListObjects
  • 048fe50 satellite/satellitedb: optimize TestAddNodes
  • f8e5a1e satellite/metainfo: add leap year tests for default retention
  • 7aa32b5 satellite/mud: add missing metainfo.NewMigrationModeFlagExtension component
  • f123338 satellite/metabase/rangedloop: move logging to RunOnce
  • e6e52ec satellite/satellitedb/satellitedbtest: use DB lru caches for tests
  • 7d08b87 satellite/metainfo: fix ListObjects endpoint
  • 541c650 satellite/metabase/delete_bucket: avoid full table scan on bucket delete
  • 11ee8e6 satellite/nodeselection: don’t use variable array length for NewAttributeFilter
  • a3a5cb0 satellite/metabase: remove “less than new redundancy repair shares” check
  • ee6b6c2 satellite/payment: add endpoint to get pricing per partner and placement
  • 559942c satellite/console: enforce form field length limits
  • eb2e2fe satellite/metabase: apply same optimization to postgres delete objects
  • a1f1f2c web/satellite: update lock status icons on buckets table
  • 8bd892e satellite/nodeselection: support adding ballast or powers to weighted selector values
  • 1343a7a satellite: Add admin endpoint set status
  • 30dfe58 satellite/satellitedb: make DDL changes during spanner DB creation
  • 9b235f5 satellite/{console,web}: improve cunoFS form submission
  • cc31888 satellite/metainfo: fix default retention leap year tests
  • c80890e satellite/metabase: adjust maxSkipPrefixUntilRequery
  • 8b49afb satellite/metainfo: fix errors in object upload default retention tests
  • c147f04 satellite/metabase: add tuning params for ListObjects
  • 4ba50ce satellite/{metainfo,metabase}: don’t set AllVersions when querying unversioned
  • 2cc9f42 web/satellite: add functionality to update credit card
  • 59f0e12 web/satellite: change remove member text for invites
  • 1db83ea satellite/console: add placement to project info struct
  • efa8187 satellite/nodeselection: support custom arithmetic operations
  • f068df7 satellite/metabase: only query extra for !recursive
  • 5007534 satellite/metabase: optimize unversioned listing
  • f7892df satellite/metabase: run segments loop with medium priority
  • 05e104c satellite/sso: patch SSO functionality for entra
  • b6eee9a satellite/metabase: fix Spanner ListObjects boundary condition
  • 359adc5 satellite/metainfo: improve error handling
  • a97073e web/satellite: fix bugs with default retention setup
  • 2931feb satellite/{analytics,web}: extend CunoFS beta form with first and last names
  • fd5c733 web/satellite: Fix cunoFS capitalization
  • 611b9bc web/satellite: Fix access generated on domains page

Storagenode

  • d9e2a6f storagenode/hashstore: some Compact optimizations
  • 74b0628 storagenode/hashstore: hashtbl page cache abstraction
  • 32bd2bf storagenode/hashstore: background compact active first
  • 33ba5ba storagenode/hashstore: remove monkit from hashtbl
  • 24fca8d storagenode/mud: remove log wrapper
  • 1587997 storagenode/run: use configurable log for storagenode/storagenode cli
  • a2f97a1 storagenode/hashstore: avoid pessimistic compactions
  • 360e543 storagenode/hashstore: add compaction progress stats
  • 4842a25 storagenode/hashstore: tag stats of the stores with name
  • 558ac17 storagenode: handle Bloom Filter Manager and retain.Service in the same way
  • 78f1637 storagenode/gracefulexit: try fix flaky TestChore
  • b367039 storagenode/piecemigrate: actually implement it
  • 5e6b9ac storagenode: wire piecemigrate up
  • 6d8f1d1 storagenode: enable per-satellite control over migration
  • b519934 storagenode: add more stats to piecemigrate
  • e596d78 storagenode/hashstore: export collision error
  • 2214f05 storagenode/piecemigrate: detect and remove duplicates
  • 5a74427 storagenode/piecemigrate: trim space while parsing config
  • cf3717e storagenode/piecemigrate: run the migration continuously
  • 27d5660 storagenode/piecemigrate: add an optional delay to the migration
  • 4a667a2 storagenode/monitor: use config.storage2.Monitor.Interval
  • a2d239c storagenode/piecestore: deprecate a bunch of unused config flags
  • 272668d storagenode: experimental reverse collector
  • 5de375e storagenode/cleanup: cleanup jobs use the runners
  • 57ec179 storagenode/hashstore: explicitly test some primitives
  • da32b93 storagenode/hashstore: Range should update stats
  • fb3c73b storagenode/hashstore: check fallocate first
  • c2ea987 storagenode/mud: use meta folder for other meta files
  • d106b81 storagenode/hashstore: better background compact timing
  • ec7f02b storagenode/piecemigrate: don’t mark satellites as finished
  • 0e593f6 storagenode/mud: fix initialization of self-signed tags
  • 32fbbfc storagenode/cleanup: disable intermittent test safe loop
  • 8951372 storagenode/cleanup: add flags to turn off cleanup chores
  • d955728 storagenode/piecemigrate: make enqueue respect context cancelation
  • 115e7a3 storagenode/piecestore: eagerly open hashstores
  • 3f74ac6 storagenode/hashstore: speed up tests
  • 592683f storagenode: add usedserials monitoring
  • f56a1c6 storagenode/peer: only subtract hashstore accounting if not dedicated disk
  • f81b6bf storagenode: add hashstore as lifecycle item
  • 4113651 storagenode: register piecemigrate.Chore as a mud component
  • d00177b storagenode/piecemigrate: don’t log when there’s no work done
  • 1068913 storagenode/collector: disable the migration backend in tests
  • 95f8cee storagenode/piecemigrate: make expired pieces migration optional

Test

  • 4d10671 testsuite/playwright-ui: make tests work with enabled object versioning and lock
  • 40e7763 private/testplanet: alternate between migration states more often

I’m happy to see all the hashstore work, so more people get to kick the tires.

Not me. I mean other people :wink:

I wonder if the envisioned future is hashstore-only. Is there a plan to stop development of the classic storage backend and pursue only hashstore improvements?
I’m thinking that some of us would prefer to stay on the classic version + badger cache, because its performance is almost on par with hashstore, and we’d rather not trade away 25% of our storage space.
Would this remain an option in the future, or will the migration stop being optional at some point?


What do you mean by trading 25% of the storage space?
I also saw on another thread that, as of v1.120, everything will be migrated to hashstore.

I think it’s that hashstore uses 1 GB data files… and it keeps track of all the pending deletions in a file until the file is about 25% deleted… then it compacts the file and rewrites it. I can’t imagine it happening in real life… but I guess it’s possible all your data files could sit just under 25% pending deletion… so up to 25% of your used space could be trash waiting for compaction. Sounds like a one-in-a-billion chance.

Or someone else can explain it if I understood it wrong.

My understanding is that compaction starts when the active one of the two hashstores per satellite hits the 25% mark, so it is not based on a single file. :thinking:
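For anyone curious, the trigger described above can be sketched in a few lines of Go. This is only an illustration of the "compact once roughly 25% of the data is dead" idea from this thread; the names (`shouldCompact`, `compactionThreshold`) are made up here, and the real `storagenode/hashstore` code is far more involved (two stores per satellite, background scheduling, TTL handling).

```go
package main

import "fmt"

// compactionThreshold is the assumed fraction of dead (deleted/expired)
// bytes in a log file that triggers a compaction rewrite. The 25% figure
// comes from the discussion above, not from reading the actual source.
const compactionThreshold = 0.25

// shouldCompact reports whether a log file with the given live and dead
// byte counts has crossed the dead-data threshold.
func shouldCompact(liveBytes, deadBytes uint64) bool {
	total := liveBytes + deadBytes
	if total == 0 {
		return false // nothing stored, nothing to compact
	}
	return float64(deadBytes)/float64(total) >= compactionThreshold
}

func main() {
	// A ~1 GiB log file with 200 MiB dead: ~19.5% dead, below threshold.
	fmt.Println(shouldCompact(824*1024*1024, 200*1024*1024))
	// The same file once 300 MiB is dead: ~29.3% dead, compaction runs.
	fmt.Println(shouldCompact(724*1024*1024, 300*1024*1024))
}
```

The worst case mentioned above follows directly: if every file sat just under the threshold, up to a quarter of used space could be dead data awaiting compaction, which is why the 25% figure keeps coming up.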

Here on the other thread:
https://forum.storj.io/t/tech-preview-hashstore-backend-for-storage-nodes/28724/8?u=snorkel
