Release preparation v1.138

Release candidate v1.138 is already deployed on QA Satellite

Changelog

General

  • 2a46f23 Makefile: enable check-tx linter
  • 2896324 docker: use generic tags annotation for publishing modular images
  • 9505c80 changestream: skeleton service for processing Spanner changefeed
  • b51eec3 all: fixing test stability
  • 9dcf0c5 all: skip flaky tests
  • 831f029 all: skip flaky tests
  • 671d6e6 all: use testcontext correctly
  • 2a8d9eb go.mod: Update gorilla/schema dependency
  • d6d7bf2 go.mod: Update The Go Programming Language dependency
  • 696443c go.mod: Indicate to use Go 1.24.7
  • 9bc6a1e all: fix some noctx linter issues
  • e8d131c ci: make builds parallel for benchmarks
  • d8f0b2f release v1.138.2

Satellite

  • e208b6e satellite/metainfo: new nodes should not break success tracker monitor
  • 6f86a1d satellite/console: abbreviate project deletion API flow
  • e1b3e87 satellite/{entitlements,satellitedb}: implement entitlements DB methods
  • b426665 satellite/metabase: fix DeletePendingObject for pg and crdb
  • 80a29fc satellite/entitlements: add Entitlements service
  • 6f448aa satellite/metabase: fix commit object when object version is negative
  • ebe7ab3 web/satellite: Remove unneeded prefix from billing breakdown
  • e9853e7 satellite/{console,db}: add method to list projects pending deletion
  • c1a70c5 satellite/audit/reporter: Add Monkit counter apply audit
  • 524aaac satellite/console: add pending delete project deletion chore
  • 0ce28e0 satellite/{console,db}: add method for getting event-user to delete
  • ff26ed3 satellite/{console,entitlements}: integrated entitlements service with project creation
  • 85f636c satellite/nodeselection: multi and fixed helpers
  • 2147402 satellite/repair: jobq re-push should keep stat fields
  • 909326b satellite/metainfo: integrate entitlements service with bucket creation
  • e744106 cmd/satellite: added command to set entitlements.NewBucketPlacements
  • 67a0440 cmd/satellite: extended command which sets entitlements.NewBucketPlacements
  • 780ab0e web/satellite: added disclaimer to ‘Add Funds’ flow
  • 2528e20 web/satellite: add amount to second step of ‘Add Funds’ flow
  • ae6e905 satellite/console: increased max add funds amount value
  • 26e0459 satellite/payments: do not escalate trial freeze for non-active users
  • 7672d63 cmd/satellite: add “delete-non-existing-bucket-objects” command
  • 78f5d70 satellite/repair/checker: fix flaky TestIdentifyIrreparableSegmentsObserver
  • d6853e0 satellite/metainfo: make errors for upload to buckets without OL consistent
  • 748f97d satellite/changestream: utility to convert change stream event to AWS compatible event
  • ec3ecd2 satellite/accounting/nodetally: fix flaky TestExpiredObjectsNotCountedInNodeTally
  • 0fb4381 satellite: remove VerifyQueue from core peer
  • a5d06e8 cmd/satellite: fix data race for TestSetNewBucketPlacements
  • d6a34da satellite/payment/billing: fix race in TestUpdateTransactions
  • a7c4f3b satellite/repair/checker: fix flaky TestRepairObserver
  • 2ba2fcf satellite/metabase: fix Cockroach DeletePendingObjects
  • 7b51e4b satellite/repair/checker: more fixes for flaky tests for observer
  • 40d3ede satellite/console: expand data deletion chore config
  • 9bb90d9 satellite/payments: try fix TestInvoiceByProduct flakiness
  • 1b9cc99 web/satellite: hide placement pricing on change
  • e23aa33 satellite/payments: do not apply minimum fee to 0 invoices
  • a168f20 satellite/console: reduce activation token expiration time from 30 to 10 minutes (#7595)
  • a36d3bf {satellite,storagenode}/Dockerfile: use newer Go for building the image
  • dff1e8e satellite: use entitlements product mappings in billing
  • fae70ed satellite/console: fix detailed report issues
  • dc23b4d satellite/admin: insert entitlements.NewBucketPlacements on project creation
  • 9b3671c satellite/changestream: publish events to pubsub topic
  • 65ebc5b satellite/metainfo: store API key tails on basic validation
  • dd9c0bd satellite/payments: always fallback to default pricing
  • e84a089 satellite/metainfo: fix data race for tails handler

Storagenode

  • 4ed6339 storagenode/cmd: hide advanced storagenode flags
  • 54ecb30 storagenode: refactor hashstore migration state reporting to use function

Test

  • 1d21130 all,private/testplanet: make tests run parallel
  • 01e0613 testsuite/playwright-ui: Enable managed encryption config
  • 9457cb5 private/testplanet: use OpenRegistrationEnabled to check if token is needed

Uplink

  • fa7bc70 cmd/uplink: remove LongTailMargin flag

It was mentioned in another thread (see below) that 5% of nodes have been switched to uploading new data to Hashstore, controlled directly by Storj Labs. Does this release version have anything to do with that gradual rollout of Hashstore, and does Storj Labs have a timetable projecting the planned rollout of the change?

I believe it was already in the previous release. And you can’t prevent upgrading to a newer release at some point anyway. The moment your node falls too far behind on the version number, it will stop receiving uploads.

There is an opt-out flag. If you want to stay on piecestore, you can opt out and still upgrade to the latest version.

There is no timetable as far as I know. We increase the percentage and watch whether everything continues to work just fine. Then we decide on the next step.

I don’t know why people avoid it; my egress increased after memtbl activation.
But Hashstore eats RAM and virtual memory hungrily. With 17 nodes, 100 GB of RAM + 100 GB of swap works OK for me.

My question is: why do 17 nodes consume so much RAM, and is it safe or recommended to use swap?

Only about 10% of the allocated virtual memory is actually used, so it doesn’t really need that much RAM.
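One way to check this yourself on Linux is to compare a process’s virtual size (VSZ, everything it has mapped) against its resident set size (RSS, what actually sits in physical RAM). This is just a generic `ps` sketch; `storagenode` is an assumed process name here, so substitute whatever your node binary is called:

```shell
# Compare virtual memory (VSZ) with actually-resident memory (RSS).
# Both columns are reported in kilobytes by ps.
# "storagenode" is an assumed process name; adjust it for your setup.
ps -o pid,comm,vsz,rss -C storagenode
```

If VSZ is large but RSS is much smaller, most of that virtual memory is only mapped or reserved, not occupying physical RAM.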

As far as I know, this problem only exists on Windows; Linux, as I’ve heard, doesn’t have it.
As I understand it, when a process reserves RAM on Windows it can’t overcommit the way Linux does: the memory stays committed even if it is never touched, so you can run out of memory even while half of it is free.
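For reference, on Linux the overcommit behavior described above is governed by a kernel sysctl. A quick way to inspect it via the standard `/proc` interface (this only shows the policy; it doesn’t prove anything about the Windows side of the comparison):

```shell
# Linux memory overcommit policy:
#   0 = heuristic overcommit (default)
#   1 = always overcommit
#   2 = strict commit accounting (closest to the Windows-style
#       "reserved even if unused" behavior described above)
cat /proc/sys/vm/overcommit_memory

# CommitLimit vs. Committed_AS: how much memory the kernel has
# promised to processes versus the strict-accounting ceiling.
grep -E 'CommitLimit|Committed_AS' /proc/meminfo
```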

I’m running 4 nodes on Windows 10 and have already reached 28 GB of 32 GB of memory. Do you have any suggestions?

It is OK; I don’t see any problem here. The OS also uses free RAM for cache. If your nodes start shutting down without any errors in the logs, then you have a problem.

So, not all of the 28 GB will necessarily be used?

Windows will prioritize what to put there.