Release preparation v1.84

The new release candidate is already deployed to the QA Satellite



  • 1525324 satellite/uploadselection: avoid String conversation of location during node selection
  • e3d2f09 web/satellite: add support link to upgrade account STORJ token flow
  • ddf1f1c satellite/{nodeselection,overlay}: NodeFilters for dynamic placement implementations
  • a4d68b9 satellite/metabase: server-side copy copies metadata
  • ece0cc5 web/satellite: fix bottom spacing for all pages
  • ced8657 web/satellite: removed unused images
  • 8b4387a satellite/satellitedb: add tag information to nodes selected for upload/downloads
  • 70cdca5 satellite: move satellite/nodeselection/uploadselection => satellite/nodeselection
  • 5fc6eaa satellite/{console, web}: display accurate legacy free tier information in upgrade modal
  • 074457f web/satellite: add folder sharing
  • 97a89c3 satellite: switch to use nodefilters instead of old placement.AllowedCountry
  • 1d62dc6 satellite/repair/repairer: fix NumHealthyInExcludedCountries calculation
  • 73d65fc cmd/satellite/billing: don’t fail the overall process if an individual invoice fails
  • 4e876fb web/satellite: update upload modal
  • 4108aa7 satellite/console,web/satellite: Fix project limit checking
  • a9d979e web/satellite: update default multipart upload part size
  • 1f92e7a satellite: move GC sender to Core peer
  • 9370bc4 satellite/{nodeselection,overlay}: bump common and fix some potential issues
  • 7e03ccf satellite/console: optional separate web app server
  • 465941b satellite/{nodeselection,overlay}: use location.Set
  • 062ca28 web/satellite: add sharing option to dropdown in buckets page
  • 99128ab satellite/metabase: reuse Pieces while looping segments
  • e8fcdc1 satellite/metainfo: set user_agent in bucket_metainfos on bucket recreation
  • 4ee647a satellite: add request id to requests
  • 5234727 satellite/repair/repairer: fix flaky TestSegmentRepairPlacement
  • 9576190 web/satellite: update Vuetify proof of concept
  • 5272fd8 satellite/metainfo: do full bucket validation only on create
  • 47a4d49 satellite/repair: enable declumping by default
  • 0a8115b satellite/{console,payments}: fix handling for autofreeze flow
  • c96c83e satellite/payments/stripe/service: add manual payment with token command
  • df9a6e9 web/satellite: lint Vuetify files
  • 0303920 satellite/metainfo: remove unused method
  • 2ee0195 satellite/payments: extend billing chore functionality to upgrade user
  • 583ad54 satellite/{payments, console}: added functionality to get wallet’s transactions (including pending)
  • b1e7d70 satellite/payments/billing: fix test
  • 23631dc satellite/accounting: fix TestProjectSegmentLimit*
  • 7cc873a satellite/payments: prevent removing other users’ cards
  • 5317135 satellite/payments: fix config value for auto upgrade user tier flow
  • afae5b5 web/satellite/vuetify-poc: allow navigation drawer to be toggled


  • 4cb8518 storagenode/pieces: enable lazyfilewalker by default


  • 05f3074 docs/testplan: add project cowbell testplan (#6001)
  • 0f4371e scripts/tests/{backwardcompatibility,integrations}: add test scripts


  • a85c080 docs/blueprints: certified nodes
  • 5a1c3f7 storage/reputation: logging changes to node scores (#5877)
  • abe1463 payments/stripe/invoices: add token payment to overdue invoice payment
  • 31bb6d5 cmd/tools: add tool to migrate segment copies metadata
  • 9a871bf go.mod: bump

This will not re-enable the filewalker if one has it disabled, right?
(Disabled via the config option
“storage2.piece-scan-on-startup: false”.
Updating to 1.84 will not undo this setting? Just checking.)

No, it won’t. But the lazy filewalker also impacts garbage collection, so you’ll still see a benefit there.

It seems the change just sets the default value of the config parameter; it can still be overridden in the config.
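For operators who want to opt out, a sketch of the relevant `config.yaml` entries. The `storage2.piece-scan-on-startup` key is quoted above; the `pieces.enable-lazy-filewalker` key is my assumption based on the commit name, so verify the exact key with `storagenode setup --help` for your version:

```yaml
# Skip the used-space scan on startup entirely (the setting quoted above)
storage2.piece-scan-on-startup: false

# Opt out of the new default and run the filewalker in-process at normal
# priority (key name is an assumption; check your version's help output)
pieces.enable-lazy-filewalker: false
```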

The point being: I prefer an “aggressive” filewalker on node start, with high CPU usage for a short time. I don’t want that activity spread out, and I don’t see a reason for it. So I’m going to disable it on my nodes, as I don’t see any benefit. (Also on principle: “if it’s not broken, don’t fix it”.)

Of course, folks with slow storage will prefer the lazy filewalker, no question about it.


If your system could already handle it, there won’t be any noticeable difference. It’s not going slower, just running at lower priority. So if the I/O bandwidth is there, it’d still be used. I personally also like that it’s a separate process, because that makes it easier to see whether it’s running or not.
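The “lower priority, not lower speed” behavior is the same idea as running a job under `nice`/`ionice`: on an otherwise idle machine it finishes just as fast, but it yields whenever other processes want the CPU or disk. A minimal illustration of the principle (not the actual storagenode implementation):

```shell
# Run a "scan" at the lowest CPU priority. On an idle system it still
# completes immediately; it only loses contention when the box is busy.
nice -n 19 sh -c 'echo "scan finished"'
# prints: scan finished
```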


If there’s something else running on your computer besides Storj, then you won’t like that aggressive disk I/O and CPU peak.

Even with only Storj it can help you win races by prioritizing upload and download activity. There’s a reason they’re making it the default.


There is plenty running. But the filewalker does not touch data, only metadata, and the metadata is located on an SSD with massive bandwidth. On node start I observe it using 100% of a single CPU core and a few percent of the SSD bandwidth for a few minutes, and then it quiets down. Neither of these has any effect on other processes on the server; there is more than one core and plenty of SSD bandwidth.

Ah! This is good to know! Then indeed it should have no impact.

Mmmm… how? According to the other comment, the difference would be that nodes with slow disks won’t be clogged for 10 minutes on startup. Not servicing requests instantly for 10 minutes is not such a big deal. Or are you talking about nodes where the filewalker used to take much longer (hours)? Then I agree.

Either way, I’m convinced; I’ll keep the defaults 🙂 I’ll just apply the principle differently: not broken, don’t change defaults 😉


Well, not everyone here uses an SSD and/or a multi-core CPU. One of my nodes runs on a poor old HDD with an IDE connection and an old 2-core CPU, lol. As for the option, the default should suit most node operators. On this forum I usually hear people complain about high disk I/O at node boot time, so defaulting to the lazy filewalker is a good choice.