V1.107.3 as "minimum" on https://version.storj.io

When recreating the docker container, an old buggy v1.107.3 is installed. Please fix it.

2 Likes

It does not matter what gets installed. The internal updater will update the storagenode to the newer version in due time, within 15 minutes of the cursor updating to include your node.
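For context, a minimal sketch of the kind of check loop the updater runs (this is not the actual storagenode-updater code; fetchSuggestedVersion is a hypothetical stand-in for querying https://version.storj.io, and the 15-minute interval is taken from the sentence above):

```go
// Minimal sketch of a periodic update check. Not the real storagenode-updater;
// fetchSuggestedVersion is a placeholder for querying the version server,
// whose real response also carries a minimum version and a rollout cursor.
package main

import (
	"log"
	"time"
)

// fetchSuggestedVersion stands in for an HTTP call to https://version.storj.io.
func fetchSuggestedVersion() string { return "v1.108.3" }

func main() {
	current := "v1.107.3"
	for range time.Tick(15 * time.Minute) { // matches the interval mentioned above
		suggested := fetchSuggestedVersion()
		if suggested != current {
			log.Printf("updating %s -> %s", current, suggested)
			// download the new binary, restart the storagenode process, etc.
			current = suggested
		}
	}
}
```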

1 Like

This post refers to the fact that nodes previously upgraded to 1.108 (as the previous rollout concluded) will get rolled back to 1.107 when recreated, because the cursor was reset for a new rollout to 1.109 while the minimum version was still kept at 1.107.

1 Like

Why is this a problem? The whole range of versions is compatible with each other.

Recreating the node is a rather niche use case to worry about, especially since the outcome is inconsequential.

AGAIN!?! Why do they mess up the minimum version change so often?

3 Likes

There has been a DB version bump since 1.108, so 1.107 is not compatible. The node won't function.
Problem enough, don’t you think?!

1 Like

If the changes are not compatible, the minimum version should be bumped.

It’s not a bug in the container.

If the post refers to what the version URL reports — then yes, I agree, it needs to be updated.

It does matter, because when recreating the container you sometimes get a lower version than the node was already running.

The problem is when the new version includes a database migration, because the new db schema is normally incompatible with previous versions.
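To illustrate, here is a minimal sketch (not the actual storagenode code; the schema_version table and version numbers are made up) of how a downgraded binary would ideally notice a newer-than-supported schema and refuse to start instead of corrupting data:

```go
// Hypothetical sketch: a binary checks the local SQLite schema version and
// refuses to run if the database was already migrated by a newer release.
// Table name and version numbers are illustrative only.
package main

import (
	"database/sql"
	"fmt"
	"log"

	_ "github.com/mattn/go-sqlite3" // assumed SQLite driver
)

const supportedSchemaVersion = 54 // highest schema this binary knows about (hypothetical)

func checkSchema(db *sql.DB) error {
	var current int
	// schema_version is a placeholder name, not the real storagenode table.
	if err := db.QueryRow(`SELECT version FROM schema_version`).Scan(&current); err != nil {
		return fmt.Errorf("reading schema version: %w", err)
	}
	if current > supportedSchemaVersion {
		return fmt.Errorf("db schema v%d is newer than supported v%d: refusing to run (was this node downgraded?)",
			current, supportedSchemaVersion)
	}
	return nil
}

func main() {
	db, err := sql.Open("sqlite3", "bandwidth.db")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()
	if err := checkSchema(db); err != nil {
		log.Fatal(err)
	}
}
```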

One of my nodes just went from 1.105.4 => 1.108.3 => 1.107.3

My $0.02: when the cursor hits all fffs, bump the minimum version before starting the next update cycle.

Reasoning: all fffs means that all nodes should be updated, by the very definition of “the minimum version”.
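For illustration, a rough sketch of how such a rollout cursor can gate updates (this is not the actual storagenode-updater code; the HMAC scheme and 32-byte cursor format are assumptions). Once the cursor is all f's, every node's hash falls under it, so by that point every node is expected to be on the new version:

```go
// Rough sketch (not the real updater): a node hashes its ID against the
// rollout seed and updates only if the hash falls at or below the cursor.
// With the cursor at all 0xff, every node qualifies, which is the argument
// for bumping the minimum version at that moment.
package main

import (
	"bytes"
	"crypto/hmac"
	"crypto/sha256"
	"fmt"
)

// eligible reports whether a node with the given ID falls under the rollout
// cursor. Seed and cursor are assumed to be 32-byte values from the version server.
func eligible(nodeID, seed, cursor []byte) bool {
	mac := hmac.New(sha256.New, seed)
	mac.Write(nodeID)
	return bytes.Compare(mac.Sum(nil), cursor) <= 0
}

func main() {
	seed := bytes.Repeat([]byte{0x42}, 32)
	allNodes := bytes.Repeat([]byte{0xff}, 32) // cursor at all f's: everyone updates
	fmt.Println(eligible([]byte("example-node-id"), seed, allNodes)) // true
}
```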

1 Like

Honest question now that I think about the subject: let's say I want to change storage from 10 to 12 TB (hypothetical). I've always been under the assumption that when editing the start parameters in Docker, you have to shred the old container first (delete it). Is that true, or am I missing something? (don’t want to hijack the thread completely :stuck_out_tongue: )

2 Likes

Wasn’t 107 pulled from deployment anyways?

If we skipped it due to problems: how can it be a minimum version now? Minimum should be 108 with 109 being staged?

(Edit: If the upgrade logic starts downgrading my 108 nodes again - and kicks off new multi-day used-space filewalkers because of it - I won’t be a happy camper :camping: )

3 Likes

This is what the advertised minimum version is supposed to address. When an incompatible change is rolled out, the minimum version shall be bumped.

I don’t think it’s possible to add another fail-safe mechanism - such as storing the last known minimum version in the node data to prevent such a downgrade - and still allow a range of versions: the source of truth is still the advertised minimum version.

This is in no way ideal: the problem seems to be that the minimum version value is being misused for both downgrade gating and satellite compatibility gating.

For example, if the satellite rolls out some change that makes existing nodes incompatible — everyone must update right away, gradual rollout goes out the window, and the minimum version is set to the current one. Hope this never happens.

If, on the other hand, one of the node releases introduces a non-backward-compatible change — it is still possible to keep using the old node, but if you upgrade — you can’t downgrade past that change.

For example, the satellite may support node versions 60-80, but the db schema changed in, say, version 70. The minimum version shall stay at 60; nodes on 60-69 can continue running and update to any version between 60 and 80, but nodes on 70-80 are now confined to that range, in spite of the minimum version being way below.
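A tiny sketch of what that dual gating could look like (not something storj implements today, as far as I know; the numbers follow the 60-80 example above):

```go
// Sketch of dual gating: the effective floor for a node is the higher of the
// satellite-advertised minimum and the version that last migrated the local
// db schema. Purely illustrative.
package main

import "fmt"

func effectiveMinimum(advertisedMinimum, schemaIntroducedIn int) int {
	if schemaIntroducedIn > advertisedMinimum {
		return schemaIntroducedIn
	}
	return advertisedMinimum
}

func main() {
	advertised := 60      // satellite still accepts nodes down to v60
	schemaChangedAt := 70 // this node's db was migrated by v70
	fmt.Printf("allowed range for this node: %d-80\n", effectiveMinimum(advertised, schemaChangedAt))
	// Output: allowed range for this node: 70-80
}
```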

This requires nontrivial tracking, both on the server and the node side, and is likely not done because the benefits are minuscule and affect an insignificant number of operators, while introducing extra complexity.

The current approach, conflating client and server changes in a single minimum version marker, albeit confusing, is simple and works well enough.

Re-creating the storagenode container should not happen frequently enough to be worth worrying about.

I do, however, have a simple workaround to offer: the storagenode container needs to persist the binaries it downloads on an externally mounted volume, just like any other data. They should not perish when the container is destroyed — because they are, in essence, just another kind of data.

It can be done either by storj upstream or by the node operator, by adding the appropriate mount points.

This will prevent downgrade from happening and resolve this issue.

1.109.2 breaks the bandwidth graph again; it seems the data is cached again.

Looks like v1.108.3 is now the minimum. Now that the collector properly updates the deleted pieces after a run of the used-space filewalker, we should see far fewer disk usage discrepancy reports on the forum.

Interesting this is not the case for me.

Thank you Storjlings! :kissing_heart:

1 Like

Not sure if I should create a new thread on this, but don’t update to v1.109.2: there is a bug that affects reporting of storage during garbage collection: 1.109.2 apparently not updating trash stats · Issue #7077 · storj/storj · GitHub

Someone else already created a 1.109 release preparation thread, perhaps post it there? Release preparation v1.109 - #3 by Roxor

2 Likes