Changelog v1.24.4

For Storage Nodes

Undistributed Payout
On the storage node payout dashboard, we have added information about the distributed and undistributed payout for the current month and any previous month.

zkSync Transaction ID
Depending on which payout method you have chosen, the payout history on the storage node payout dashboard will link to the zkSync block explorer or Etherscan.

Multinode Overview
Last release we introduced the new multinode dashboard. Remember, the installation is still manual: [Tech Preview] Storage Node Multinode Dashboard
The multinode dashboard can now give a basic overview of used space, free space, bandwidth, and the storage node version. The payout information on the multinode dashboard is currently not correct. For now please use the payout information from the old storage node dashboard instead.


What about returning the audit counts to the API?


So are we skipping v1.23.4? What are the implications for ingress to nodes from a satellite with respect to the minimum supported version? I'm only asking because I track GitHub releases, and if release numbers are going to start being skipped, I'll have to track releases another way so I can manually upgrade.

So, from a satellite's point of view:

is "three versions old" counted from v1.24.4 as actual releases, or as an integer count of version numbers? Sorry for my English; this is not meant to be negative, I am just wondering.


The only thing that matters is the minimal version on here.

Ah, thank you. So indeed it is an integer count; that is going to catch a lot of SNOs out… I will update to poll that URL in future.
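Polling that URL could look something like the sketch below. The JSON shape and field names here are assumptions for illustration, not the version server's documented contract, so the parsing is done against an inline sample response rather than a live request:

```python
import json

# Hypothetical sample of what the version server might return.
# The field names ("processes", "storagenode", "minimum") are assumptions.
SAMPLE_RESPONSE = """
{
  "processes": {
    "storagenode": {
      "minimum": {"version": "1.22.2"},
      "suggested": {"version": "1.24.4"}
    }
  }
}
"""

def minimum_storagenode_version(payload: str) -> str:
    """Extract the minimum allowed storagenode version from a response body."""
    data = json.loads(payload)
    return data["processes"]["storagenode"]["minimum"]["version"]

print(minimum_storagenode_version(SAMPLE_RESPONSE))  # → 1.22.2
```

In a real poller you would fetch the body over HTTPS first and compare the result against your node's running version.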

I haven’t checked. I don’t see it in the commit history but it might have been hidden in another commit.

Yes there was a v1.23 release. We used it to iterate quickly towards migrating the old satellites to the new multipart upload code base. That was executed on SLC and allowed us to discover and fix a few bugs. Over the next few weeks we are going to migrate all satellites.

In general this can always happen. When it comes to deployments, my biggest concern is that we screw up the satellite migration or the rolling upgrade. There is room for a bunch of errors that our unit tests wouldn't cover. We do have a rolling upgrade test in place. That test is sensitive to version numbers. In order to get accurate test results we sometimes have to push a new release. With a point release we might bypass the rolling upgrade test, and that can have some very bad consequences.


Do you happen to know what the contents/format of the receipt field would be for zkSync payouts? I'm going to have to make some adjustments for this in the earnings calculator, but I haven't switched to zkSync myself yet, so I would need an example.

Edit: Never mind, found it. Prefix will simply be zksync instead of eth. Simple enough. Looks like I’ll also be removing some warnings as there is a database migration included to fix missing distributed amounts for stefan-benten. Nice! That’ll clean up some historic misreporting.

I believe in the storage node database you would find something like this: zksync:0x260abd6959105bd8ad078d0d62563016d545e9a7c753cfed50178fec7246420c
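If the receipt really is a `<prefix>:<txhash>` string as in the example above, splitting it for an earnings calculator is straightforward. This is an illustrative sketch only; the function name is mine, and the format assumption comes from the sample receipt in this thread:

```python
def parse_receipt(receipt: str) -> tuple[str, str]:
    """Split a payout receipt into (method, transaction hash).

    Assumes the format '<prefix>:<txhash>', where the prefix is
    'eth' or 'zksync' as discussed above.
    """
    method, _, tx_hash = receipt.partition(":")
    return method, tx_hash

method, tx = parse_receipt(
    "zksync:0x260abd6959105bd8ad078d0d62563016d545e9a7c753cfed50178fec7246420c"
)
print(method)  # → zksync
```

The prefix then tells you whether to build a zkSync explorer link or an Etherscan link for the transaction hash.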

The dashboard will link:


Edited my previous message just too late. Thanks though!

Thank you for the detail; I now understand and it makes sense.

I would not want to be the one applying satellite code updates; it is very scary, and you do it well. It will be much better when CockroachDB is deployed and the satellites are global.

I believe we have that already. Maybe not on all satellites, because they get migrated one by one. It doesn't change our current deployment.

So let's assume we have the "single point of failure" satellite with a Postgres DB in the background. The deployment process for such a setup was:

  1. Upgrade DB to the new version. Old API and core instances must be able to run with the new DB.
  2. Upgrade the API instances one by one. Hopefully the customer will not notice that the API instances are getting replaced with the new version. There should always be at least 1 API endpoint online.
  3. Last but not least upgrade core and any other process that is left.

Now the only difference in a multiregion satellite is the location. The deployment remains the same.

Well, actually, according to previous announcements it's 3, and pretty strict…
though it does sort of become 4, because it's 3 updates they count, and outside of that is DQ.

So, since when?
And why isn't there an easy place to get all this kind of critical information?
It's kind of ridiculous that we can't even agree on what version we can be on before we get DQed.

This should be pretty basic, easy-to-check stuff…


Thanks again! Update is live!


You can see it on the dashboard when you hover the mouse over the version number.

That makes no sense… that version number has always been much lower than what this announcement told us was the lowest version before we would get DQed.
Even though the post was made only 4 months ago… how are we supposed to know what to do when the information given out in such a short timeframe is conflicting?

So should we just disregard what that post says?
And why would it even be written if it was wrong at the time it was written? Was it just supposed to inform us that version-based DQ was turned on, with the version limit being how it will work at some point…


I do not see any conflict - the lowest version of storagenode that can join the network is 1.13.0, but the minimum version to run a node is 1.22.2.

Today I learned that the 1.13.0 in the version mouseover is the minimum allowed to join the network. I would still like to see the minimum to receive ingress (currently 1.22.2) included in the mouseover.
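The "integer count" interpretation discussed above can be sketched as a simple minor-version comparison. This is only an illustration of that interpretation; the function name and the fixed window of 3 are my assumptions, and the authoritative cutoff is whatever the version server reports:

```python
def within_minor_window(node_version: str, latest_version: str, window: int = 3) -> bool:
    """Return True if the node's minor version is within `window` minor
    releases of the latest version.

    Illustrative sketch of the 'integer count of minor versions'
    interpretation; assumes simple 'major.minor.patch' strings.
    """
    _, node_minor, _ = (int(p) for p in node_version.split("."))
    _, latest_minor, _ = (int(p) for p in latest_version.split("."))
    return latest_minor - node_minor <= window

print(within_minor_window("1.22.2", "1.24.4"))  # → True
print(within_minor_window("1.13.0", "1.24.4"))  # → False
```

Under this reading, a node on v1.21.2 would still be inside the window relative to v1.24.4, which matches the report later in the thread of such a node still receiving traffic.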


But a node with version 1.13.0 that joins the network gets DQed, according to that statement… though maybe that is what you mean by "joining the network".

Or are you saying that joining the network doesn't mean a node gets data, while running a node means the node can get data?

But then again… why was the post giving out incorrect information at the time it was created, since a node can apparently be more than 3 minor updates behind without getting DQed?

If joining the network means you get DQed below this limit… maybe it should have a better name, since not respecting that limit will destroy the node.


I am still on 1.21.2 and have uploads and downloads so far.
I think it takes effect when the Docker 1.24.4 version is released.