Changelog v1.26.3

For Storage Nodes

zkSync on Dashboard
In the previous release, we added a link to zkSync in the payout history. Now we have also added a link on the main dashboard right below the payout address. You can use that to verify that you have opted into zkSync.

Downtime Suspension
As a reminder, downtime suspension has been enabled: Enabling Downtime Suspension, March 22, 2021

For Customers

Free Tier
New customers now get 3 projects with 50 GB each as a free tier, for a total of 150 GB of free usage every month. The messages in the satellite UI are not all updated yet; we will correct that in a following release. We also plan to give the same free tier to all existing users in a following release.


@littleskunk Could you please tell me more about this feature?

I can give a short overview of what we have found out so far.

QUIC is not as fast as we thought it would be. We added a feature flag to the uplink binary in order to double-check our test results in production. I would not recommend that any customer use it yet.

We are also looking into some similar UDP-based protocols to get a better understanding of which one works best for us. So, short version: more research needed.


Thanks a lot for your explanation! I always like your clear explanations :slight_smile:


I tried to update my node on Qnap, but the watchtower doesn’t work and the manual update reinstalls 1.25.2

I think you are new. Storj implements rolling updates, so as long as you have watchtower it will update automatically. GUI nodes are updated first, then docker nodes.


Thanks for the immediate response. Yes, I’m new.


Hey, I manually upgrade my nodes and the update doesn’t seem to work, as the file within isn’t an .exe, and even when I overwrite it, Windows sees it as not signed and doesn’t allow it to execute

Oh nice. That happens if you open a PR and just forget to merge it: jenkins/build: fix file extension for msi package by littleskunk · Pull Request #4063 · storj/storj · GitHub

I corrected my mistake. Please try again.


Now that we are getting penalized for being on version v1.24.x
since 20:00 UTC last night, maybe releasing v1.26.x for docker would be appropriate, so that those of us doing manual updates don’t have to update to v1.25 only to update to v1.26 within another two weeks.

i think it would be a bad idea to stop us from being able to update on a monthly schedule without being penalized for it… i get that it’s to speed up development, but it also doubles the work of updating… if it has to be done every two weeks minimum…

anyways, if somebody would please press the button to release the docker version of 1.26.x, that would be swell; it seems stupid for me to update my nodes to 1.25 and then tomorrow update them to 1.26.
and it’s been 16 days since the last update, so it should be about time.


I also do manual updates and it takes me 5-10 minutes tops. Am I doing something wrong? How can it be too much to ask to do that every 2 weeks?


but what if you were out of town, or had some other issue that prevented you from updating…
getting punished with no ingress for skipping a single update seems, imo, a bit harsh…
sure updating doesn’t take long, but i also have more than just a single node…
and right now… we are like… let’s see

this post was made with the release of v1.26.3 for windows on the 7th of april,
and the release for docker is usually a week after, meaning right now.
with my 10 nodes, 3 of which are already updated, i am basically forced to update them if i want ingress for today.
which is, according to your estimates, 50-100 minutes of work… tho i’m pretty sure i can do it in less.
and then tomorrow the update to 1.26.3 goes live for docker, and i can do the whole thing again.

sure i can just wait, but the fact of the matter is, no matter how you twist and turn it, to maintain my nodes, even if i had them auto-update, i would still have to check that the individual nodes were running correctly after the updates.

aside from that, we were told that we would be allowed to be two updates behind…
keep in mind that if you go outside the allowed range, the node supposedly gets DQ,
and the DQ range moves before the update moves, not the other way around.

people who may have wanted to update could get DQ.
dunno if there is an actual warning for that… there really should be… aside from ingress stopping.

yesterday the minimum allowed version was 1.22.0 and current version was v1.25.0
today it’s 1.24.0 and current version is v1.25.0

so if i had been running on the tail end with v1.22 and wanted to update to v1.26,
i would now get DQ 1 day before the newest version is released… instead of after…

i think that is a problem… it should be the other way around… a few days after the update, people could then get DQ if they are out of range.

i could also ask: if v1.24.0 is the minimum, then why am i being punished when running v1.24.3?
it’s a mistake… obviously


Right, the actual punishment should happen a short time after the minimal version moves to a new value, to give SNOs time to react and to avoid these issues. Say, a week after everyone is able to update?
Also, caveats about the old version should be clarified in the tooltip, for everyone’s convenience (no uploads after version…; no qualification after version …).


While I completely agree that updating required versions should happen some time after a new version is released for ALL platforms… It’s very much possible for a storagenode to have a bug that manifests if you skip an update, so from a safety point of view, skipping versions is potentially unsafe.


Do you have proof of this or is it conjecture?

my node on 1.24.3 stopped getting ingress after the dashboard changed to 1.24.0 as minimum required version… so i updated to 1.25.2 on that node and ingress resumed.

my remaining nodes on 1.24.3 are still not getting ingress until i update them…

to me that basically proves it… i wouldn’t say it’s 100% certain, but maybe like 98%, which is kinda good enough for me…

not sure how i could verify it better…
the graph is also pretty clear… the memory spike was after i rebooted the node; it’s a big node, so it draws a bit of memory at times.

node running v1.24 (and then updated)

nodes running v1.25 since, well, the timer reset when it was updated.

node still running v1.24, tho technically not fully vetted; i have already updated those nodes…
but it’s basically dead… funny tho that not all repair is affected…
but repair most certainly also decreases.

not sure how much better one could really verify it…

This has to do with how the update® itself works: the node is supposed to update itself internally in a safe manner (which is visible when updating the database, for example, which is versioned). If you are allowed to be more than 1 version behind, then updating by 2 versions is supposed to work; otherwise you either need to brick all the nodes older than 1 version, or you need a mechanism to make the nodes update 1 version at a time.

Note however that the registered trademark sign is not a registered trademark sign but is a registered trademark sign because software here interprets what I’ve written there as a registered trademark sign.

Disclaimer: I have no objection to Storj registering it as an actual registered trademark.


If your update process is sane, which I assume is the case with storagenode, then there’s really no reason to update 1 version at a time. You just carry all the migration code needed to update from the earliest supported version and only execute the parts required to reach the latest version.
But if there’s a bug somewhere, for example if the update depends on data generated at runtime by one of the previous versions and fails if that data is not present, then multiple migrations in a row may result in an error and possibly a broken db.
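The “carry all migrations, execute only the pending ones” idea can be sketched roughly like this. This is a toy illustration with hypothetical names and an in-memory stand-in for the database, not the actual storagenode migration code; the real versioned schema migration is more involved:

```go
package main

import "fmt"

// migration upgrades the schema by exactly one version.
type migration struct {
	toVersion int
	apply     func(db map[string]string) error
}

// All migrations ever shipped are carried in order; the runner below
// executes only the ones newer than the version recorded in the db.
var migrations = []migration{
	{1, func(db map[string]string) error { db["schema"] = "v1"; return nil }},
	{2, func(db map[string]string) error { db["payouts"] = "table"; return nil }},
	{3, func(db map[string]string) error { db["schema"] = "v3"; return nil }},
}

// migrate runs every pending step sequentially, so a node that skipped
// a release still passes through each intermediate schema version.
func migrate(db map[string]string, current int) (int, error) {
	for _, m := range migrations {
		if m.toVersion <= current {
			continue // already applied by an earlier release
		}
		if err := m.apply(db); err != nil {
			return current, fmt.Errorf("migration to v%d failed: %w", m.toVersion, err)
		}
		current = m.toVersion
	}
	return current, nil
}

func main() {
	db := map[string]string{}
	// Node was on schema v1 and jumped two releases at once.
	v, err := migrate(db, 1)
	fmt.Println(v, err, db["payouts"]) // prints "3 <nil> table"
}
```

The failure mode described above corresponds to one `apply` step implicitly relying on runtime state that only an intermediate release would have created; the sequential runner cannot protect against that.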

If the migrations are sequential internally, as we’ve been assuming, then for all intents and purposes it will update an equivalent of one version at a time internally. Naturally, there’s always a chance of an error that crashes the node, opens a wormhole, or deletes your kittens. But in any case, it should be able to perform at least a 2 or 3 version jump without issues; any more than that and I would rather update manually, one version at a time, if the node wasn’t disqualified by some miracle.

Hopefully the wormhole leads somewhere nice…


new docker image is available
first node updated via watchtower