Release preparation v1.141

Maybe a little bit late, but release version v1.141 has been deployed on the QA Satellite and all production satellites.
So, the changelog this time:

General

  • 8ee2947 shared/modular: remove unused, old code
  • 3d90358 cmd/tools/segment-verify: attempt to fix flaky TestVerifier
  • 913ef37 Jenkinsfile.{public,verify}: run all tests in parallel
  • fc14766 ci: improve spanner emulator change stream performance
  • 8e0334a Makefile: increase playwright-ui test timeout
  • 26a39ec Jenkinsfile.{verify,public}: run tests one by one for now
  • 230bc2d docs: enhance CLAUDE.md with comprehensive architecture documentation
  • de0ceb0 Makefile: fix go version for darwin builds
  • f5ad7f9 go.mod: bump storj.io/common
  • 0b2d91d script: don’t use draft releases by default
  • e129eea build: Fix version caching for storagenode-modular
  • 32c13b5 release v1.141.0-rc

Satellite

  • 62934e5 satellite/{eventing,changestream}: monitor long running functions with monkit
  • 69f31a1 satellite/metabase: remove encryptionParameters wrapper
  • 8ddbb30 web/satellite: hide team passphrase banner for the projects with managed passphrase
  • ff8c073 satellite/console: updated invoice and receipt description related to the upgrade
  • f8d17cd satellite/eventing: use same config prefix for modular and non-modular
  • 1994faa web/satellite: hide segment info for new pricing
  • 70d4cbc satellite: Untangle Overlay Service from Repair Checker
  • 37525cd satellite: Untangle Overlay Service from Repair Repairer
  • aac1985 satellite/tally: add per-product storage remainder billing
  • 951d50e satellite/metabase: two roundtrip commit object
  • 6bd7ace satellite/console: add price summary field to products
  • 0339709 satellite/{console,payments}: update self-serve placement conditions
  • e9575f4 satellite: enable reputation cache for modular auditor
  • 4225c60 satellite/metabase: set ExcludeTxnFromChangeStreams to more places
  • 7c6fa8e web/satellite: update account setup for new pricing
  • aeda634 web/satellite: update upgrade dialog for new pricing
  • 44c483b satellite/console: update active projects filter
  • a382a94 satellite/metabase: lift delete unversioned out of finalize commit
  • 5739b17 satellite/metabase: make insert/update mutation code cleaner
  • 5199dae satellite/reputation: force update vetted_at when required
  • 76449a4 satellite/metainfo: updated placement validation on bucket create
  • eb9391b satellite/payments: support unit conversion at the time of invoice generation
  • 633cb36 satellite/payments: round up estimated product charges if needed
  • e495fc3 satellite/payments: omit segment invoice item if its price is zero
  • 969eb45 satellite: fix incorrect oauth host passed to backoffice
  • 1c7b788 satellite/metabase: two roundtrip commit inline object
  • c0855d4 satellite/metabase: drop CommitObjectWithSegments
  • 2f1d63e satellite/metabase: remove old CommitInlineObject
  • 1d442cc satellite/metabase: clarify an if statement
  • 5ac3037 satellite/metabase: fix ListObjects for negative pending object version
  • 30256cf satellite/eventing: fix version id in s3 events
  • bba9185 satellite/admin-ui: expand update project limits
  • 345c132 web/satellite: correct past month option on usage report
  • dc77bf0 web/satellite: show warning when using dots in a bucket name during creation
  • 8abafbc web/satellite: remove VWindow in VDialog mixin
  • 6287a49 cmd/satellite/users: fix typo in log statement
  • d4eecbd satellite/eventing: add observability metrics
  • c5e3df7 web/satellite: replaced explicit mdi icons usage with lucide icons
  • 3674854 web/satellite: upgrade vite version to resolve vulnerabilities
  • 90539b0 satellite/{stripe,web}: add report download to billing history
  • 38ecf8d satellite/{stripe,web}: represent empty usage correctly
  • 145185f satellite/metabase: use precommit query in copy object
  • c7dcf98 satellite/metabase: use precommit query inside object move
  • 65170f6 satellite/metabase: remove PrecommitConstraint
  • f1b0720 satellite/reputation: Update comment & remove unused param
  • 0973abb cmd/satellite: Add help message clarification
  • 9ae8ba8 satellite/eventing: remove s3: prefix from event names
  • 5dc0f48 satellite/{web,console}: fix self-serve placement UX
  • eb49477 satellite/admin: add global projects and users search
  • 7764871 satellite/admin-ui: add global search dialog
  • a02bf0b web/satellite: improve usage report dialog
  • 88924ce satellite/payments/paymentsconfig: improve products config
  • 8bff360 satellite/admin: handle nullable fields properly
  • 52e9f53 web/satellite: wire up compute instances to the backend
  • f8b2f99 web/satellite: improve compute UI validation
  • c13bdfc satellite/admin-ui: handle nullable fields properly
  • b5d6be7 satellite/{admin,ui}: require reason for mutation requests
  • 12cb883 satellite/admin-ui: require reason for mutation actions
  • 50a417d satellite/admin: add update project endpoint
  • 7288320 satellite/admin-ui: add update project UI
  • f9b4c83 satellite/admin: add reason to update user audit log
  • 4aa4c20 web/satellite: added instance details modal
  • 3a5813f satellite/admin: add audit log for disable user call
  • e333df6 web/satellite: wire up available instance configuration with the backend
  • dafa8b1 satellite/admin: add audit log to freeze/unfreeze actions
  • 125913e satellite/admin: add audit log to the toggle MFA call
  • 9194623 satellite/admin: add audit log to the create REST key call
  • 1da68aa satellite/console: add new Member user kind
  • 84fd4da satellite/console: feature flag for the Member accounts
  • e3f3c69 satellite/console: update registration to create member accounts when needed
  • 5720699 web/satellite: updated onboarding for member accounts
  • 0b4d91c web/satellite: added update compute instance type functionality
  • f132219 satellite/eventing: log the entire change record
  • e9a1665 satellite/console: allow Member account upgrades
  • 46c1742 satellite/console: added start free trial endpoint
  • 32cef87 satellite/metainfo: Add clarification comment
  • 07f3ea7 satellite/metabase: move object lock info to separate file
  • 74513a1 satellite/durability: enable placement check and declumping for durability check
  • e2d0894 web/satellite: handle project creation by Member account
  • 4102447 satellite/admin: add audit log to the update project limits call
  • 6044d89 satellite/admin: add project buckets endpoint
  • dd80408 satellite/admin-ui: list project buckets
  • af68ecf web/satellite: rework share dialog to have an extra step
  • 3ea570a satellite/{console,web}: updated condition for when to show compute UI
  • f5cae46 satellite/admin: add update bucket endpoint
  • 6e5d035 satellite/admin: extended update user kind to handle Member accounts
  • 48e8cad web/satellite: fix account setup with pricing plan

Storagenode

  • 081cf1e storagenode/hashstore: calculate dead bytes compared to the real log size
  • 53e7711 storagenode/hashstore: cleanups+better reclaimed accounting
  • 06f2a6e storagenode/hashstore: exact stats for hashtbl

Test

  • 912881b testsuite/playwright-ui: increase test timeout
  • 3c723ef shared/mudplanet/uplinktest: add Copy, Move, and DeleteMany functions
  • ce06998 testsuite/playwright: Enable fullyParallel

When will it be updated to this version?
Because it's marked on GitHub as the latest version.

I'm curious too. I tried to re-pull the image in Docker/Portainer and it's still on the old version.

The docker image has not been updated in a long time. I think over a year.

The docker software runs its own two processes: the storage node process and the storage node updater.

The latter will only update the node once a pointer tells it to.

Then maybe @Andrii or @Alexey should poke someone so it can be updated.

Why?

The docker image is perfectly fine as it is :slight_smile:

There have been a few threads before where users actively asked whether it's really necessary to use watchtower these days.

Do docker compose setups still need watchtower? - Node Operators / FAQ - Storj Community Forum (official)


It does not auto-update like they say it should; some are fine with this, some are not.

But it should do what it says on the tin.

You are misunderstanding how the storagenode docker container works.

A “regular” container ships with its binary files in a relatively atomic way. A new version of the docker container is a new version of the resulting software.

Storagenode is different.

The storagenode updater process can update the executable, which it gets directly from GitHub. It has been done this way for a very long time. It's robust, it's fully automated, and it's a method that makes sure that standard installations are updated in a continuous wave. This is great, because not all nodes update at the same time, which would result in downtime. There is no need to update the storagenode docker container unless Storj is targeting exploits between Docker and the guest OS. This is seldom the case, which is a great thing.
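
To illustrate the idea, here is a rough Go sketch of what one updater cycle does conceptually. This is not the actual storagenode-updater code; the function names, the interval, and the returned values are placeholders I made up for the example.

```go
// A minimal, hypothetical sketch of one storagenode-updater cycle, based on
// the behaviour described above. Everything here is illustrative, not the
// real storj.io implementation.
package main

import (
	"fmt"
	"time"
)

// checkSuggestedVersion would query the configured version server
// (https://version.storj.io by default) and report whether this node is
// already included in the current rollout.
func checkSuggestedVersion(nodeID string) (version string, eligible bool) {
	// placeholder: pretend the rollout now includes this node
	return "v1.141.2", true
}

// downloadAndSwap would fetch the release binary (published on GitHub) for
// the suggested version and replace the current executable with it.
func downloadAndSwap(version string) error {
	fmt.Println("downloading and replacing binary for", version)
	return nil
}

func main() {
	const current = "v1.139.6"
	for {
		suggested, eligible := checkSuggestedVersion("12AbCd...") // node ID placeholder
		if eligible && suggested != current {
			if err := downloadAndSwap(suggested); err == nil {
				// the updater would then restart the storagenode process
				break
			}
		}
		time.Sleep(15 * time.Minute) // the real check interval is configurable
	}
}
```

The point is that the binary swap happens inside the running container, independent of which docker image version you pulled.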

If you really want to update manually, I'd suggest downloading the files that go in the bin directory from GitHub and manually swapping the files in your docker's mounted bin directory, or foregoing docker entirely and running the application directly on the guest OS.

No, I fully understand how it works; I simply do not agree that it should work that way (i.e. I want it to auto-update, and I understand that at least you do not agree with that).

I also understand that storjlabs/storagenode:latest does not mean I get the latest version… again, I think that is a fault; latest should be the latest released version, or else what is the point of calling it latest?

But I also agree that everyone updating at the same time is a very bad idea; the bigger the Storj network gets, the worse that idea becomes.

So maybe implement something in the auto-updater part that can be controlled from Storj's side, i.e. update nodes in a certain country first, maybe the country with the fewest nodes, then go for the next smallest one, and so on. If Storj then makes a mistake, the damage is minimal. But then who should take the blame economically: the node or Storj?

That is exactly what the current code does. No matter which docker version you pick, it will download the binary from GitHub following the rollout process controlled by Storj.


So what is the estimate before I get updated? I'm on the latest docker image, yet I'm still on v1.139.6, even if I re-pull the image.

The time from release to update varies, as not all nodes are updated at once. My node (run in docker) always updates on a fairly regular but variable basis after a new node release without my having to do anything. The docker container only gets updated when it's necessary, as mentioned in an earlier post. My advice is to not worry about it and let it run automatically as designed.


The update process won't start until Storj raises the suggested version here: https://version.storj.io

I guess they stopped at v1.139.6 in order to slow down the migration to hashstore a bit.


It's more or less random. Versioncontrol gives out a random seed plus a rollout cursor that basically controls the percentage of nodes that should update to the new version. So in one rollout the random seed might tell your node to update early on, and in the following rollout your node might be one of the last ones to update.

From the Storj perspective this rollout works great. It gives us full control. We start with just a few nodes and every few hours we increase the percentage. If something looks wrong we can pause the rollout. If we need a hotfix we can start another rollout, but with the same seed, and it will update the nodes in the same order, meaning the nodes that require the hotfix also get it first.

This rollout procedure protects the network from critical bugs that would otherwise take down the entire network. It gives us time to notice and fix issues without impacting customers.
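
For the curious, the eligibility check can be sketched roughly like this in Go. The exact hash construction below (HMAC-SHA256 of the node ID keyed by the seed) is my assumption of how the seed and cursor are combined, so treat it as an illustration of the principle rather than the real implementation.

```go
// Simplified sketch of the deterministic rollout check described above: the
// version server publishes a random seed and a cursor, each node hashes its
// own ID with that seed, and the node is allowed to update once its hash is
// <= the cursor. Raising the cursor towards ff..ff gradually includes more
// nodes. The hash construction is an assumption, not the verbatim storj code.
package main

import (
	"bytes"
	"crypto/hmac"
	"crypto/sha256"
	"encoding/hex"
	"fmt"
)

func eligible(nodeID, seed, cursor []byte) bool {
	mac := hmac.New(sha256.New, seed) // seed is fixed for a given rollout
	mac.Write(nodeID)
	return bytes.Compare(mac.Sum(nil), cursor) <= 0
}

func main() {
	seed, _ := hex.DecodeString("a28b4f2e90c1d3e5a28b4f2e90c1d3e5") // example per-rollout seed
	lowCursor, _ := hex.DecodeString(
		"1999999999999999999999999999999999999999999999999999999999999999") // ~10% of nodes
	fullCursor := bytes.Repeat([]byte{0xff}, 32) // all f's: every node eligible

	nodeID := []byte("example-node-id") // real node IDs are 32-byte values

	fmt.Println("eligible at 10% cursor: ", eligible(nodeID, seed, lowCursor))
	fmt.Println("eligible at full cursor:", eligible(nodeID, seed, fullCursor))
}
```

Because the seed stays the same within a rollout, each node's hash is fixed too, so raising the cursor only ever adds nodes to the eligible set in the same order.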


See, that makes sense to me :slight_smile:

Specifically, look at the processes > storagenode(-updater) > rollout > cursor string. When that gets to all f's, all nodes should be eligible for the update listed as the suggested version.
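
If you want to check it programmatically instead of reading the raw JSON, here is a small sketch. The field names are guessed from the processes > storagenode > rollout > cursor path above, so adjust them if the real payload differs.

```go
// Sketch: fetch the version server's JSON and print the suggested
// storagenode version and the rollout cursor. The struct field names are
// assumptions based on the path described in the post.
package main

import (
	"encoding/json"
	"fmt"
	"net/http"
)

type rollout struct {
	Seed   string `json:"seed"`
	Cursor string `json:"cursor"`
}

type process struct {
	Suggested struct {
		Version string `json:"version"`
	} `json:"suggested"`
	Rollout rollout `json:"rollout"`
}

type versionInfo struct {
	Processes map[string]process `json:"processes"`
}

func main() {
	resp, err := http.Get("https://version.storj.io")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	var info versionInfo
	if err := json.NewDecoder(resp.Body).Decode(&info); err != nil {
		panic(err)
	}

	sn := info.Processes["storagenode"]
	fmt.Println("suggested version:", sn.Suggested.Version)
	fmt.Println("rollout cursor:   ", sn.Rollout.Cursor) // all f's => rollout complete
}
```

Pointing the URL at version.qa.storj.io instead shows the newest GitHub release, as mentioned below.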

BTW:
The address of the version server is configurable. You can set it to version.qa.storj.io in order to get the latest GitHub release, or even run your own version server.

Looks like 1.141.2 doesn't resolve the problems with overused and free space.