Release preparation v1.78

The new release version is deployed on the QA Satellite and available for testing.



  • a21afed satellite/payments/stripe: avoid full table scan while listing
  • 16ccffb satellite/metainfo: test to confirm that retried pieces can’t be submitted with originals
  • 34b2a36 satellite/payments/stripe: apply egress discount to project charges
  • eeeac5f web/satellite: disable removal of SVG viewBox
  • eecb055 satellite/buckets: move Bucket definition
  • 7dc2a5e web/satellite: use access grants pinia module instead of old vuex module
  • 49c5e3d web/satellite: update Vue error page design
  • 4ee22e0 satellite/admin: add tests to admin auth
  • 34e6c50 web/satellite,satellite/console/consoleweb: update static error pages
  • d62dd0b web/satellite: use frontend config in Vue components
  • b3b619e satellite/admin: use system-given port in OAuth test to fix flakiness
  • e2abbc3 web/satellite: use frontend config in store modules
  • 6e86685 web/satellite: remove unused references to meta config values
  • 45d5a93 satellite/console/consoleweb: remove templating for index.html
  • 2193392 satellite/console/consoleweb: suppress index.html loading errors
  • bebfb91 web/satellite: don’t hardcode length limits in pwd strength component
  • 771d226 satellite/analytics: separate hubspot form for personal vs business
  • 2297040 web/satellite: add storage needs field to sign up
  • d402ed4 satellite/payments/stripe/service.go: fix typo in credit note memo
  • 114eda6 satellite/metainfo: remove sleep from upload limit test
  • 1373bdb web/satellite: use new buckets pinia module instead of old vuex modules
  • daf5264 satellite/console/consoleweb: create a fallback error.html
  • 27d9d9f web/satellite: use pinia projects module instead of old vuex module
  • 3f8eb58 web/satellite: add uploading large file notifications
  • 4ae05af web/satellite: use notifications pinia module instead of old vuex module
  • 53bd6bf web/satellite: fix redundant onboarding tour redirection
  • 9ac5183 satellite/consoleweb: improve freeze-status endpoint
  • 9fbad53 satellite/gracefulexit: extend GE tests using multiple hash algo
  • 8632e28 web/satellite: show freeze warning banner
  • 54beea8 satellite/metabase: define a local ErrObjectNotFound
  • 4193197 satellite/*: changes to stop using storj.ListDirection
  • 3679e29 satellite: Remove remaining references to “partner ID”
  • 2405bc8 satellite/metabase: stop using the common error type
  • 8d31e13 web/satellite: use pinia object browser module instead of old vuex module
  • 19e9ca9 web/satellite: migrate last 2 components to use composition api and clean up dependencies
  • fbfe5aa satellite/metrics: remove code related to segments loop
  • 50afaa4 web/satellite: update naming of Team page
  • f40a0cb satellite/*: use typed lrucache and ReadCache
  • 1bc26e7 web/satellite: fix invalid references to first onboarding step
  • 6a55682 satellite/accounting/nodetally: remove segments loop parts
  • 6ac5bf0 satellite/gracefulexit: remove segments loop parts
  • 1aa24b9 satellite/audit: remove segments loop parts
  • 8056132 web/satellite: remove unnecessary session inactivity timer setup
  • 15b370f web/satellite: added new session expired modal
  • 2b6b1f7 web/satellite: separate out frontend config from pinia app store
  • 816c3d3 web/satellite: new upgrade account flow
  • 260b71e satellite/{console,accountfreeze}: test freeze effects
  • 4d99897 satellite/admin: rework update user limits functionality
  • efcae85 satellite/main,stripe/{client,service}: stripe balance invoice item cmd
  • b4ac006 web/satellite: sync timeout modal with user session timeout
  • 774ac50 web/satellite: migrate vue filters
  • defb9ea satellite/satellitedb: add table for project invitations


  • f076238 storagenode: run used-space filewalker as a low IO subprocess


  • f52ea27 testsuite/ui: bump gateway-mt and enable vet
  • 4058b01 docs/testplan: add Uplink testplan (#5677)


  • 54ef1c8 cmd/uplink: use new upload code path
  • d53a56c cmd/uplink/initial_setup.go: fix logic on analytics prompt



Just a short note here. Many Linux distributions use the mq-deadline scheduler for HDDs by default. I’ve just found out that this scheduler only gained support for I/O classes in Linux 5.14 (August 2021), so this patch won’t immediately improve anything for users of some long-term supported distributions that run older kernels (like Debian Bullseye; thankfully, Bookworm is just around the corner!), or for users of NASes that don’t get kernel updates. It may be worth it for node operators to explicitly check whether their I/O scheduler supports the Idle class and, if not, either update the kernel or switch to BFQ, which does support I/O classes.
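For operators who want to check this, here is a quick sketch (device names like `sda` and the glob below are examples; adjust for your disks):

```shell
# Show the active I/O scheduler for each disk; the active one is
# printed in brackets, e.g. "mq-deadline [bfq] none".
for f in /sys/block/sd*/queue/scheduler; do
    [ -e "$f" ] || continue
    printf '%s: %s\n' "$f" "$(sed 's/.*\[\(.*\)\].*/\1/' "$f")"
done

# mq-deadline only honors I/O priority classes on kernels 5.14+:
uname -r

# Switching a disk to BFQ (run as root; takes effect immediately
# but does not persist across reboots):
# echo bfq > /sys/block/sda/queue/scheduler
```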


My ReadyNAS is now discontinued and no further OS development is coming.
It is pretty closed down so no kernel upgrade will be possible.
It is based on Debian Jessie.

Is this update likely to break something?

ioprio_set(2) - Linux manual page.

No; just the feature introduced in this commit won’t be effective.

That man page only documents the availability of the API call; it doesn’t mean the call is effective in all possible circumstances. This is the commit that adds support to the mq-deadline scheduler.
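For reference, the Idle class in question can be exercised from the shell with util-linux’s `ionice` (assuming it is installed; the storage path below is hypothetical):

```shell
# Run a command in the Idle I/O class (class 3). Whether the kernel
# actually deprioritizes its I/O depends on the scheduler: BFQ has
# supported I/O classes for a long time, mq-deadline only since 5.14.
ionice -c 3 echo "running at idle I/O priority"

# A more realistic use: scan a node's data directory at idle priority
# (hypothetical path):
# ionice -c 3 du -sh /srv/storagenode/storage

# Show the I/O class of an already-running process, e.g. PID 1234:
# ionice -p 1234
```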


For those trying to enable it, the setting is




As mentioned in the commit. But reading @Toyoo’s responses, it likely won’t help for me on Synology.


It’s still possible that Synology would backport patches like that to their older kernels.

Fair enough. I enabled it on my test node. It runs, for what it’s worth. But that node is so small, it’s hard to say whether it has any effect.

Wait a minute… there are 2 filewalkers? :unamused:
What is one for and what is the other for?
Should we put both as “lazy”?
Man, my run command keeps getting bigger and bigger… :man_facepalming:t2:

I’m on Synology too, but maybe it helps, maybe not, I just want to understand the options.

Even four of them, if you count exhaustively.

  • Estimation of disk space used (at the node’s startup, can be disabled).
  • Garbage collection through bloom filters (runs each time a satellite sends a bloom filter; optimized a few releases ago to avoid most of the random I/O).
  • Trash collection (this one is usually small, only goes through the trash/ directory).
  • Graceful exit (only when invoked, obviously).

So those 2 settings refer to…?

If you mean the ones in my post: there’s only one setting; the other is just wrong.



Out of curiosity, can you check which scheduler is used on Synology? Something like cat /sys/block/sda/queue/scheduler.

Already checked when you first posted. It mentioned something with deadline. Not at my computer ATM, but I can check the exact thing later if you want.


This version doesn’t solve the problem with:
C:\Program Files\Storj1\Storage Node>storagenode.exe exit-status --identity-dir “C:\Identity1\storagenode” --log.output stderr --server.private-address
2023-05-06T10:14:46.634+0300 INFO Anonymized tracing enabled {“Process”: “storagenode”}
2023-05-06T10:14:46.642+0300 FATAL Failed to load identity. {“Process”: “storagenode”, “error”: “file or directory not found: open \identity.cert: The system cannot find the file specified.”, “errorVerbose”: “file or directory not found: open \identity.cert: The system cannot find the file specified.\n\\n\tmain.cmdGracefulExitStatus:186\n\tmain.newGracefulExitStatusCmd.func1:59\n\\n\\n\*Command).execute:852\n\*Command).ExecuteC:960\n\*Command).Execute:897\n\\n\tmain.main:29\n\truntime.main:250”}

Filewalker hell… :imp:
I hope all of them run at low priority, because it sounds like none of them really needs to run at the same priority as customer uploads and downloads.

It would be great if this one also went through the temp dir to get rid of leftover partial files that for some reason never got deleted.

There’s too much trouble keeping the filewalker on. I turned it off on all my nodes and I don’t see any differences in reported space; the DBs keep the numbers flawlessly. If you leave 10% free space as recommended, you shouldn’t have any problems. The filewalker hammers the HDD, reducing its lifetime, and loses races, so you lose money from the node going down, from replacing the HDD, and from lost races. I don’t have high expectations for the lazy mode either: when ingress and egress peak, there’s no room for anything else taking HDD time, low priority or high. Why keep on an optional process that hurts the node’s performance and costs you money?

Had a thought: maybe it would be a good idea to mention this in some Recommended section of the System requirements here?