Release preparation v1.113

A new release candidate has already been deployed on the QA Satellite.
The object lock feature is enabled for beta testing.

Changelog

General

  • c30f5c2 cmd/tools/piecestore-benchmark: don’t send EOF before resp was delivered (downloads)
  • b641101 go.mod: bump storj.io/common
  • bd8e7b9 cmd/tools: add storagenode-benchmark
  • b050725 cmd/tools/node-cleanup: remove old unused tool
  • f24edc0 cmd/tools/piecestore-benchmark: flags to skip benchmark steps
  • cb83a21 cmd/tools/piecestore-benchmark: flag for working directory
  • c8113bd Jenkinsfile: move PG rolling upgrade test to separate build
  • 0e9beb7 Change Wallet list from ZkSync Lite to Era
  • 16435ed shared/dbutil: add missing impl switch cases
  • 2bc6b27 go.mod: bump storj.io/common
  • 513588c release v1.113.0-rc

Satellite

  • 9341a38 satellite/satellitedb: enabled containment.go spanner test cases
  • 80bdef6 satellite/satellitedb: Fix RollupArchives Chore for Spanner
  • 1fbd63d satellite/metainfo: remove unused field from CommitObject struct
  • cd43b78 satellite/satellitedb: enabled customers.go spanner test cases
  • 6ad7128 satellite/metainfo: use granular Object Lock permissions
  • edb1ae6 satellite/satellitedb/consoledb: implement Spanner for projects
  • 454575a satellite/{metainfo,console}: combine object versioning and lock betas
  • 561cfa4 satellite/{console,web}: add object lock enabled config
  • f98ee7b web/satellite: associate versioning beta with object lock beta
  • 47999ea satellite/metainfo: rename ObjectLockEnabledForProject fields
  • 28045ee web/satellite: rework file versions UI
  • 5428ee4 web/satellite: improve delete in versioned buckets
  • 3f0d26f web/satellite: improve error handling for bulk file delete
  • 2b13aa2 satellite/metainfo: add UUIDsFlag for uuids list
  • 6ba9bf3 satellite/satellitedb: Fix spanner.NullJSON scanning in satellitedb
  • a9114f6 satellitedb: fix UpdatePieceCounts for spanner
  • 21e6632 satellite/metabase: fix ListSegments argument
  • af568c6 satellite/accounting/tally: fix TestTallySaveTalliesBatchSize for Spanner
  • 63ffce3 satellite/satellitedb/dbx: fix multifield unique indexes for spanner
  • 993eb8b satellite/console: enabled spanner account freezing tests
  • 1c33eab satellite/payments: enable spanner tests
  • 08e07b6 satellite/satellitedb/dbx: fix constraint error check in spanner
  • 40d9db8 satellite/metabase: add monkit for zombie/expired object deletion
  • e1d12d4 satellite/metabase: allow setting legal hold via BeginObjectExactVersion
  • 1050dc4 satellite/satellitedb: Wrap Spanner operation canceled errors to map to context.Canceled
  • a7b3ab8 satellite/satellitedb: add missing default cases for switches
  • 7defe7e satellite/satellitedb: fix node disqualification query for spanner
  • 7ade07d web/satellite: add UI for creating bucket with object lock
  • a070b57 satellite/{db,console,accounting}: add object lock status to bucket response
  • 064adf5 web/satellite: add object lock status to bucket details dialog
  • 549234b web/satellite: lock object functionality
  • 5b7a536 web/satellite: fix pricing plan step success on account setup
  • afe9c4d satellite/{accountfreeze,console,db}: escalate trial expiration freeze
  • 41173bd satellite/{accountfreeze,console,emails}: send escalate trial expiration freeze email
  • b1813a3 web/satellite: notify user about trial freeze escalation
  • 9c76135 web/satellite: prevent deleting locked object version
  • e1e080e satellite/metainfo: return descriptive error for locked object deletion
  • 26442af satellite/metabase: make DeleteObject respect governance and legal hold
  • 54b4e42 satellite/satellitedb: expose the error from billing Insert retries
  • b5d3e61 satellite/satellitedb: implement Spanner version of repair queue methods
  • 61032fb satellite/metabase: make GetObject methods return legal hold
  • 6414346 satellite/metainfo: make GetObject, DownloadObject return legal hold status
  • 975f78a satellite/metabase: adjust copying objects for legal hold
  • 24a0f4d satellite/metainfo: adjust FinishCopyObject endpoint for legal hold
  • fd2e8dd satellite/metabase: add SetObjectExactVersionLegalHold method
  • 8a82c79 satellite/metabase: add SetObjectLastCommittedLegalHold method
  • a2f29ef web/satellite: improve delete loading indicator
  • e3913fe satellite/metabase: add GetObjectExactVersionLegalHold method
  • 0330fae satellite/console: fix flaky account freeze test
  • 7be87a8 satellite/metabase: allow DeleteObject to bypass governance
  • 365f976 satellite/metainfo: enable governance bypass for BeginDeleteObject
  • 9ff8423 satellite/satellitedb/dbx: better Spanner isConstraint
  • 3a3dda5 satellite/orders: fix GetProjectDailyBandwidth for Spanner
  • 2028181 satellite/satellitedb/consoledb: fix GetEncryptedPassphrase and RawDB
  • f867dcd satellite/metabase: add GetObjectLastCommittedLegalHold method
  • 5d4b106 satellite/metainfo: implemented SetObjectLegalHold endpoint
  • a550665 satellite/metainfo: implement GetObjectLegalHold endpoint
  • d82ab0b satellite/metabase: ensure GetObjectRetention endpoint respects governance mode
  • 7eb7880 satellite/metabase: adjust Move objects to respect legal hold config
  • 4555b3a satellite/metainfo: adjust FinishMoveObject endpoint for legal hold
  • f47b28b satellite/accounting/live: change as of system interval for tests
  • 4c287e1 satellite/metainfo: reduce compression memory usage
  • 175fb2f satellite/metabase: fix choose adapter test
  • 79d153e satellite/console: delete project endpoint
  • 1fa8082 satellite/metainfo: add test for GetObjectRetention with delete marker
  • fabcb65 satellite/console: add object lock UI flag
  • 9e85d28 web/satellite: update object lock config flags

Storagenode

  • 70ada4c storagenode/pieces: flat-file piece expiration store
  • 3ca9625 storagenode/piecestore: Add reason download cancellation
  • 5175397 storagenode/monitor: decrease readability/writability check log level
  • 1fab07a storagenode/blobstore: don’t error for Delete if file doesn’t exist
  • e4e3735 storagenode/collector: remove confusing log entry
  • 07a76f0 storagenode/orders: fix logger of order sender
  • 05258ae storagenode/nodestats: expose reputation/suspension/disqualification information to prometheus
  • 43f145b storagenode/pieces: log the count and duration with the used space file walker results
  • ea54374 storagenode/piecestore: fix error handling of noise EOF messages
  • cda77f2 storagenode/peer: fix configuration of the new flat-file expiration store
  • 3792e05 storagenode/blobstore: close badgercache in unit test
  • b4ef29d cmd/storagenode-benchmark: fix error handling
  • b11dba3 Revert “storagenode/blobstore: don’t error for Delete if file doesn’t exist”
  • 31eea19 storagenode/blobstore: make use of size hint information if present
  • 278c324 cmd/storagenode-benchmark: use the configured hash algorithm

Test

  • e459077 private/testplanet: allow reconfiguring API key version for uplink
  • 9c6296c private/testplanet: catch context canceled from spanner
  • 23c3f6f shared/dbutil/dbtest: rename pgtest package
  • 740d4e3 shared/dbutil/dbtest: add pick without skip
6 Likes

Will storagenode/pieces: read+delete piece expirations from both stores · storj/storj@ba68d91 · GitHub make it into 1.113?
It doesn't seem to be included in the 1.113.0 RC, and given that the flat-file expiration store is being released here, I feel it would be good to also include this fix/improvement.

3 Likes

Good one @pasatmalo! I agree. Without it we will keep holding on to expired pieces that are registered in the piece expiration database.

1 Like

I want to address the upcoming piece-expiration flat-files update.
They say it will create one file per satellite per hour, if there are pieces that expire in that hour.
Could this be abused by someone storing one piece per hour for a huge number of years, just to mess up our nodes? I haven't done the math yet…
@edo I got the email about the new release candidate; I didn't know when this post would be made. I should have waited a bit, but when I wait, I tend to forget. :sweat_smile:

3 Likes

This looks like a good update for SNOs: less confusing logging, the flat-file expiry enhancement, the used-space filewalker tracking its own duration, Prometheus upgrades, and more!

2 Likes

We’re already storing 3-4 million files per TB… so I’m not worried about at most 24 extra expiry files per day :wink:

Yeah, 8766 files per year is not a big concern.
The space used would be negligible; only the number of files would count for anything.
Even over 1000 years, that's almost 9 million more files, which I don't believe would matter.
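For reference, the arithmetic behind those figures, taking the worst case of one new expiration file every hour for a single satellite:

24 files/day × 365.25 days/year ≈ 8766 files/year
8766 files/year × 1000 years ≈ 8.8 million files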

No worries at all! In fact, by jumping in early, you gave me a nice little heads-up—so, thanks for that! :wink:

1 Like

Is there a maximum to the amount of time that people are able to enter for the expiry? Or is it literally infinite?

But all 9 million files in a single folder? Could that cause issues?

In my piece expiration database I have many entries that expire at 9999-12-31 23:59:59+00:00. I believe that's roughly 70 million hours from now.
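As a rough sanity check on that number, counting from 2024:

9999 − 2024 = 7975 years
7975 years × 8766 hours/year ≈ 69.9 million hours

so roughly 70 million is about right.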

# where to store flat piece expiration files, relative to the data directory
# pieces.flat-expiration-store-path: piece_expirations

Any reason why I can’t just set a directory anywhere and it specifically needs to be relative to the data directory?

1 Like

You naughty boy, you just want to move everything and make a mess everywhere, don't you? :older_woman:t5:
Just keep your hands off and let the team take care of it. They know best. :sunglasses:

2 Likes

No, life has taught me that if you want to do something right, you must do it yourself.

6 Likes

How’s flying your own airliner coming along? :smirk:

Perfectly fine. There were some instances where logging wasn't working properly, causing billions of lines to be logged per week. I identified the issue and gave strict instructions for it to be fixed, because it falls under "I don't know what to actually log, so let's debug-log everything, including a core dump at every piece upload". The team said they fixed it, and the planes were flying properly once they were no longer loaded down with all those billions of lines.

There was also the instance of thousands of files being created on the plane's hard drives, which will cause fragmentation (since they are append-only files) and make the navigation systems lag. We are currently in the process of getting that fixed (we are in the "denial" phase: they are defensive about the problem, i.e. "nope, nothing to see here, move along"). I would have said a simple "God damn it" and just changed the storage path variable to not include the data path, but someone said this would cause the universe to implode. I'm in direct talks with God to clarify this. I'll eventually get it fixed once we move past the "denial" phase.

Hahahaha! That made me laugh :slight_smile:
Ask Him if He can fix entropy. That one is a killer :slight_smile:

Also, you need to change your tag to “Prophet” :smile:

1 Like

Out of curiosity, I tested the new version on two of my nodes. Based on my experience in Docker (I did not set the flat-expiration-store-path), it creates a new "piece_expirations" folder wherever your database files are kept. So if you have a custom path set for your DBs, it will use that for the flat piece expiration files as well. At least that's how it behaved on my nodes…

2 Likes

Thanks for that. I would assume then that the "data directory" the comment refers to is actually the database directory.

@Mitsos

On Windows nodes, if the database path is set in config.yaml, the piece_expirations folder is created within the specified folder. This way you can keep it on the same fast storage as the other databases.
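For anyone wanting to set that up, here is a minimal config.yaml sketch of the arrangement described above. It assumes storage2.database-dir is the option used to point the databases at faster storage (the example path is illustrative, not taken from this thread), and it relies on the behavior reported above, where the flat expiration store is created relative to the database directory:

# illustrative example: keep the databases on fast storage
storage2.database-dir: D:/storagenode-dbs
# flat piece expiration files are then created next to the databases
pieces.flat-expiration-store-path: piece_expirations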

1 Like