Release preparation v1.104

The new release candidate is already deployed on the QA Satellite

Changelog

General

Satellite

  • 505c110 satellite/overlay: update benchmark to have node tags
  • 49663eb satellite/gc/bloomfilter: update tests to use both observers
  • 8f5ac63 satellite/db: added passphrase_enc and path_encryption columns to projects table
  • 4a45ffb satellite/db: add role column for project_members table
  • aa573cb satellite/{console, web}: set length restrictions on onboarding inputs
  • a5263ac satellite/metabase: move GetTableStats implementation to the adapter level
  • e3b1ca8 satellite/metabase: accept spanner as a first class db implementation
  • 1e5eff4 satellite/{payments,console}: add endpoint to manage user billing address
  • dcd3ccc satellite/console: remove analytics rate limit
  • a2db5e7 satellite/console: allow only 6-digit code for account activation
  • 532942c satellite/metabase: abstract away transaction formation
  • c23ba19 satellite/contact: accept self signed tags from storagenodes
  • 60a01eb web/satellite: add UI to manage billing address
  • 655a7bd satellite/{payments,console,stripe}: add endpoint to manage user tax information
  • 9802535 web/satellite: add UI to manage tax IDs
  • 0aad164 satellite/metabase: spanner encoding/decoding for encryptionParameters
  • 31a9ca9 satellite/metabase: (Encode|Decode)Spanner for ObjectStatus
  • f04a3c5 satellite/metabase: use spannerutil.Int in TestingGetAllSegments
  • d230637 satellite/metabase: split out PendingObjectExists() for db adapters
  • d7f7d1c satellite/metabase: split out CommitSegment for db adapters
  • 094d27b satellite/metabase: split out CommitInlineSegment for db adapters
  • 6d475df satellite/metabase: split out TestingGetAllObjects for db adapters
  • c5591ed satellite/metabase: TestingGetAllObjects for spanner
  • 3ef1143 satellite/{console, db}: add functionality to update project member role
  • bb35101 satellite/console: added endpoint to update project member’s role
  • 034770a web/satellite: added functionality to update project member’s role
  • c689688 satellite/metabase: (Encode|Decode)Spanner for ObjectKey
  • 860bbdd satellite/console: restrict deletion of non-owned API keys to members without Admin role
  • a224d4a satellite/overlay: increase free space buffer to 5GB (#6948)
  • 37e3715 web/satellite: ui/ux updates (#6947)
  • a4296c1 satellite/console: Add rate limiting to api key handlers
  • 712add7 satellite/{console,web}: add config for billing information tab
  • 44c8bd7 web/satellite: remove dynamic tax id length restriction

Storagenode

  • 448d663 storagenode: fix disk monitoring for FreeBSD
  • aa84cb6 storagenodedb: buffer up piece expirations
  • b9fe1e7 storagenode/orders: Remove unnecessary V0 order version checks
  • f00bf1e storagenode/orders: Open “writable unsent orders file” less
  • 016090d storagenode/piecestore: fix monitoring cardinality issue
  • ed819e9 storagenode/blobstore: remove preallocate size
  • 491c019 storagenode/blobstore/filestore: by default turn off fsync
  • 962ff04 storagenode: add bandwidth DB write cache
  • 4d93a3c storagenode/piecestore: add bandwidth only when settling orders
  • 8c42f43 cmd/storagenode: fix flaky test
  • 5bf9a60 storagenode/blobstore: remove some dead code
  • d68abcf storagenode/pieces: update used space on trash-lazyfilewalker completion

8 Likes

After installing the new storage node version, it might be a good time to run the used space calculation once to correct the incorrect trash size this bug caused. After that you should be able to turn the used space calculation off again.

5 Likes

Are there any plans to skip 1.103 and go straight to this? I run it on most of my nodes and don’t see anything that would suggest it can’t be brought into production, and it contains important fixes.

We are skipping v1.103 for nodes.

5 Likes

Is there a one-time start command that could just run this function once?

1 Like

What I did was stop the node, flip storage2.piece-scan-on-startup to true, start the node and flip storage2.piece-scan-on-startup to false again.
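
For reference, the relevant line in the node’s config.yaml looks like this; the exact file location depends on your setup, so treat the path as an assumption:

  # config.yaml (typically in the node's mounted storage/config directory)
  storage2.piece-scan-on-startup: true    # set back to false after the scan has run once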

3 Likes

That would be great. I have asked many times for something integrated. In the meantime, I asked ChatGPT to write me a script that restarts the containers, changing false to true and back again after the restart.
I’m lazy

Does the lazy filewalker also do this job, just taking much longer?

How well did it write it? How much editing did it require afterward?

It’s easy and needed no editing. It’s just a sed command (sed -i 's/storage2.piece-scan-on-startup: false/storage2.piece-scan-on-startup: true/') after the stop, and another one for the start. Let GPT do all the dirty work.
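
For anyone who wants to skip the ChatGPT step, here is a minimal sketch of that approach for a docker node; the container name, config path, and stop timeout are assumptions, so adjust them to your setup:

  #!/bin/sh
  # Run the used space scan once on the next start, then revert the setting on disk.
  CONTAINER=storagenode            # assumed container name
  CONFIG=/mnt/storj/config.yaml    # assumed config path

  docker stop -t 300 "$CONTAINER"
  sed -i 's/storage2.piece-scan-on-startup: false/storage2.piece-scan-on-startup: true/' "$CONFIG"
  docker start "$CONTAINER"

  # The node that just started will perform the scan; flipping the file back now
  # means the next restart skips it again.
  sed -i 's/storage2.piece-scan-on-startup: true/storage2.piece-scan-on-startup: false/' "$CONFIG"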

1 Like


I don’t know if it’s just me, but sending and receiving are the same as usual; only the graph display has changed, as if it’s all coming in at once (after updating to 1.104.1). There doesn’t seem to be any particular problem, so I’m leaving it alone.

If you’re using the API: v1.104 also introduces caching of bandwidth writes before they reach the DB, and the API does not currently read from that cache.

3 Likes

I don’t know how to solve it, but it doesn’t seem to be a problem with the storage, so I’ll leave it alone and refer to the ‘storj_exporter’ forum post. Thank you.

Is there any plan to change this so 1.104.x and onward pulls from the cached DB? (I know I’m free to submit a PR myself.)

I didn’t see a ticket like that → not planned.

2 Likes

Is this fixed by using the metrics API on the nodes themselves? I’ve attempted to get this working but can’t seem to, so I’m stuck on the third-party exporter for now.

Yeah, the metrics endpoint works fine. If you already have a Grafana instance running, it shouldn’t be hard to use the metrics endpoint.

1 Like

Does it still go through Prometheus? I’ve added debug.addr: ":5999" and pointed Prometheus at the node, but don’t get any output in Grafana.

My prometheus scraper:
  - job_name: storj0
    metrics_path: /metrics
    static_configs:
      - targets: ["172.20.0.80:5999"]
        labels:
          instance: "storj0"
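
For context, that job block has to sit under scrape_configs in prometheus.yml; a minimal sketch of the surrounding file, with an assumed scrape interval:

  global:
    scrape_interval: 60s   # assumed; use whatever interval you prefer

  scrape_configs:
    - job_name: storj0
      metrics_path: /metrics
      static_configs:
        - targets: ["172.20.0.80:5999"]
          labels:
            instance: "storj0"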

I think by default the storage node will open a random port. Make sure you set debug.addr: ":5999"
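
A quick way to double-check on the node side, assuming the IP and port from the scrape config above and that the debug listener serves the Prometheus text format at the /metrics path your scraper already uses:

  # from the Prometheus host or container
  curl -s http://172.20.0.80:5999/metrics | head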

I have, and I can curl from the port I set. There’s a chance the issue is my Docker networking.
Update: I can successfully reach the metrics endpoint of a Storj node from within my Prometheus container, so now it’s probably a configuration issue.
Could it be I have the wrong log level or something?

1 Like