Why is the 1.109.2 release so large compared to v1.108.3? It is 70% larger… even 1.108.3 was already a noticeable jump.
I didn’t even realize we don’t have the usual 1.109 release notes post, and it’s already being rolled out? Hmmm…
Maybe they packed in a Valdi client to borrow some of our GPU time
(yes, mods, that was a joke )
There appears to be a bug that affects reporting of storage during garbage collection: 1.109.2 apparently not updating trash stats · Issue #7077 · storj/storj · GitHub
I recently updated a node to v1.109 and this error started appearing in the log:
2024-07-31T09:21:38+02:00 ERROR piecestore upload internal error {"error": "manager closed: unexpected EOF", "errorVerbose": "manager closed: unexpected EOF\n\tgithub.com/jtolio/noiseconn.(*Conn).readMsg:225\n\tgithub.com/jtolio/noiseconn.(*Conn).Read:171\n\tstorj.io/drpc/drpcwire.(*Reader).read:68\n\tstorj.io/drpc/drpcwire.(*Reader).ReadPacketUsing:113\n\tstorj.io/drpc/drpcmanager.(*Manager).manageReader:230"}
Do you see this too? It seems like a lost race, but it’s the first time I’ve seen it, and it’s logged as an “upload internal error”.
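In case anyone wants to check how often it shows up on their own node, something like this should work (assuming a Docker node with the container named storagenode; adjust for your setup):

docker logs storagenode 2>&1 | grep "upload internal error" | grep -c "unexpected EOF"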
It’s already in the 1.108 version.
I am seeing the same error message as you do.
However it seems different to this one: Node keeps exiting randomly (upload internal error)
So maybe the wording for some errors has been modified.
You are correct, these types of errors appear in 1.108 too; only the places where the upload is canceled differ, hence the different messages.
What’s happening in the linked thread appears to be a FATAL unrecoverable error somewhere earlier. I haven’t found any panics on my nodes so far.
This is a new log entry, but the code has been handling this error type for a long time; 1.108 just started logging it.
The answer is simple: all the QA engineers have been on vacation at the same time, so there was nobody to post it in the forum. We could have prevented that, but it is kind of low on the priority list. I am very happy that everything else in the release process has worked out just fine.
I thought those topics were generated by automation.
Reports are generated automatically on GitHub, but we do not have any bot integrations so far (except posting a link from the thread to the GitHub issue/commit/PR where it’s mentioned).
1.109 seems to be getting deployed quicker than the last couple updates. Whatever manual gates there are in the process… we seem to be flying through them. That’s a good sign!
My first Windows node was waiting for more than 2 weeks for 108 (from 105). It used to be the same with earlier updates, but it received 109 on the first day. I was a bit surprised when I saw it.
Why is there no thread for v1.110? It is available and I have already installed it on some nodes.
Apparently the guys who post the release threads are all on a beach somewhere drinking rum out of coconuts.
Because it wasn’t selected as a release to roll out. Only releases they decide to deploy get deployed automatically; some get skipped.