same
looks like this explains why it's going all crazy right now…
they are migrating the satellite cluster and the accounting system is being replaced… so no wonder it's acting up. you can read about it in the new changelog
Yeah, saw that. Not happy about being part of an unclean migration, but I understand it can happen. Just not sure why there wasn't more testing to develop a fix to mitigate this - I'd hope there's a CI/CD setup specifically for making sure things don't break too badly.
well it's to fix the issue we were pondering… or that's how i understand it…
and it's not critical infrastructure for the nodes, meaning it really can't go bad, from our perspective…
and when it's implemented, maybe the "disk space used this month" graph will finally stop being all weird, because the new system won't require the same level of computation that the old one did…
ofc it's critical for tardigrade that there is no double spend… but even if there was for a short time… it doesn't matter too much… ofc i suppose in theory the worst case would be tardigrade denying uploads if the system truly failed…
and the reason it's affecting the "space used…" graph might just be because the cluster / satellites are doing a lot of processing right now to convert to the new system or whatever…
i dunno how it works… but it doesn't worry me one bit… i can't imagine this affecting SNOs much
i'm more worried about when to update… i skipped the last update because a good deal of people were having issues with the new orders.db or whatever… in 1.16.1
so hopefully i won't get into trouble when going from 1.15.3 to 1.17.1
and hopefully more SNOs won't suffer when updating from 1.16.1 to 1.17.1
I must have dodged the bullet on that one - 1.15.3 to 1.16.1 went fine and I don't imagine major issues going to 1.17.4 - that being said, stranger things have happened.
from my understanding it was only an issue that hit very old nodes maybe… something with some of their orders being from an old version… so in theory i should be okay to just update…
but i saw more than a few affected on the forum… ofc that's how it goes when updating software infrastructure or whatever one calls it… core processes, functions, thingamajigs…
most likely it won't be a huge issue, since we didn't see a new patched release, something i sort of expected… found that a bit surprising, but maybe there was no fix for it, and if 98% of the nodes were already updated… then the damage had already happened…
1.17.4 … i think i need to read up on version numbers again… why is it .4 and not .1
i mean we went from 1.15.1 to 1.15.3 (because of a mistake), then we went back to a .1 with 1.16.1, and now we go to 1.17.4
either i can't count or whoever is in control of these version numbers can't…
Here are the "missing" releases:
Not every version is released outside Storj's own development.
Overall, version numbers are pretty arbitrary. Chrome is on version 87, for example. It's whatever works for the devs.
i just don't really understand why publicly released versions wouldn't be sequential. sure, it's more or less completely irrelevant, and ofc there may be some advantage to using the same versions for development as for public releases…
i was just wondering if there was something i missed about how to read the version numbers. i can ignore the last number… if it helps the devs
just have to remember to write 1.17.x instead
does that mean devs have x factor?
This is about why we roll out different patch versions to SNs on each minor version.
Let me give some context about SN rollout.
SN rollouts take time because we release them in ~24-hour steps that make the new version applicable to an increasing % of storage nodes. Currently, the steps are: 5%, 10%, 20%, 40%, 80%, 100%. Weekends and official company holidays are excluded from this cadence, which means that we don't promise to advance to the next % step during those days, although we sometimes do; when a rollout isn't finished and spans any of those days, it continues the following business day.
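This is not the actual storagenode-updater code, just a minimal sketch (with a made-up node ID and release seed) of how a percentage-gated rollout like the 5% → 100% steps above can be decided deterministically per node, so each node always lands in the same bucket:

```go
// rollout_sketch.go - illustrative only, not Storj's real updater logic.
// Each node hashes its ID with a per-release seed and only updates once
// the advertised rollout percentage passes its bucket.
package main

import (
	"crypto/sha256"
	"encoding/binary"
	"fmt"
)

// eligible reports whether a node falls inside the current rollout step.
// nodeID and seed are hypothetical inputs; rolloutPercent is one of the
// steps mentioned above (5, 10, 20, 40, 80, 100).
func eligible(nodeID, seed string, rolloutPercent uint64) bool {
	sum := sha256.Sum256([]byte(seed + nodeID))
	bucket := binary.BigEndian.Uint64(sum[:8]) % 100 // stable bucket in [0, 100)
	return bucket < rolloutPercent
}

func main() {
	for _, step := range []uint64{5, 10, 20, 40, 80, 100} {
		fmt.Printf("step %3d%%: node updates = %v\n", step, eligible("node-abc", "v1.17.4", step))
	}
}
```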
The storage nodes which follow the cursor are the ones that use the storage node updater which, currently, are the ones installed through our Windows and Linux installers.
The ones using the Docker images and Watchtower are updated when we publish the new Docker images and Watchtower runs its check. The Docker images are published right after we deploy the 100% step.
But why do we only roll out some versions?
Because sometimes we need/want to release some Satellite updates without pulling in all the SN changes that we want to ship in the next release, and other times there aren't any new changes for the storage node, so we wait until we have some before creating a new release.
Considering that some Satellite updates are released within a pretty short period of time, sometimes just hours apart, if we started a new SN release rollout for each version, SN releases might never catch up with the latest version because of the ~24-hour percentage rollout described above.
I hope this comment clarifies why SN releases aren't consecutive.
My stats this month show this curious drop in storage usage.
How can that happen?
I haven't had downtime. And if it were due to somebody deleting and re-uploading, I would have seen the effect on bandwidth (and bandwidth usage hasn't skyrocketed).
That's what happened to everyone
it doesn't graph disk space used, it's graphing satellite storage calculation speeds…
you can basically superimpose rmon's graph on top of mine and see the exact same thing.
nearly… the only thing you can take away from this graph is the average it gives; it could be represented as a single line of the average over the days and it would be more useful and accurate… lol
makes me sad and angry every time i look at it… it's just mass broadcast confusion.
just look at the total disk space used, and generally that's about what you've had for a month… and what you will get paid for…
ofc the less data stored, the less accurate it will be, because it can change rather quickly, like in dada181's case… because it's a new node.
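to make the "just take the average" point concrete, here's a tiny sketch with made-up daily values and a purely hypothetical per-TB-month rate (not actual Storj payout figures), showing how the spiky graph boils down to one number:

```go
// avg_sketch.go - rough illustration of averaging the daily "disk space used"
// values instead of reading anything into the spikes. All numbers are made up.
package main

import "fmt"

func main() {
	// hypothetical daily "disk space used" readings in TB for part of a month
	daily := []float64{5.1, 5.2, 0.3, 9.8, 5.0, 5.1, 5.2} // spiky, like the graph

	var sum float64
	for _, v := range daily {
		sum += v
	}
	avgTB := sum / float64(len(daily)) // the single line that actually matters

	ratePerTBMonth := 1.5 // hypothetical $/TB-month, for illustration only
	fmt.Printf("average stored: %.2f TB, rough storage payout: $%.2f\n",
		avgTB, avgTB*ratePerTBMonth)
}
```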