Changelog v1.50.4

you were right…

i must not have waited long enough…
my nodes don’t run with --restart unless-stopped, so i ran the docker run command and it terminated because it couldn’t talk to the updater, according to the storagenode logs.

so shortly after, maybe 30 sec or a minute, i ran the docker start storagenode command and it again ran for a brief moment and then stopped again, so i went back to latest.

today when i tried to start the node using the “docker test image”, it started up without a hitch… i simply must not have waited long enough.
good catch…

so all okay from my side… though it might be smart if the storagenode didn’t stop just because the updater isn’t done downloading yet… but that seems like a minor oversight that really doesn’t change much and will eventually be fixed…

tried to update another node in a different container, and it didn’t behave like the first one…
but it did pull the image again ofc, so maybe the image was changed… or my zfs did some caching magic which made the updater update instantly…

It looks to me like both the storagenode and storagenode-updater get killed at the same time at some point and then the container stops running because nothing is running inside it anymore. I’m not sure if that’s by design, but if so that means it really relies on the --restart unless-stopped option to successfully complete updates. And containers without it may just stop instead.
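For what it’s worth, if a container is already running without a restart policy, it can be added in place without recreating the container (docker update and docker inspect are standard Docker CLI commands; the container name storagenode is an assumption):

```shell
# Add a restart policy to an existing container without recreating it.
# "storagenode" is the assumed container name; adjust to yours.
docker update --restart unless-stopped storagenode

# Verify the policy took effect:
docker inspect -f '{{ .HostConfig.RestartPolicy.Name }}' storagenode
```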

i’ve had --restart unless-stopped off for a while…
had some (now resolved) issues which nearly took down my system, and i’m not sure how easy that would have been to deal with without the option to manually start stuff.
ofc i could just control that through my proxmox containers instead of docker… so meh.

mine started now and no sign of the issue…
will leave it running; mine only shut down because the storagenode sent a termination signal…
but yeah, i dunno if that would be sent when the storagenode binary exits… i suppose it’s possible, i guess they already updated the version…

wonder if they would bump the version if they changed the storagenode binaries inside the image… i mean they could in theory have fixed it already without us knowing…

if the entire update cycle happens within the binary or whatever

anyways… my setup seems to run the new version fine now…
not sure if something changed or not… or if it’s just zfs being smart

Such a regular update has already happened on the testnet nodes. And I can confirm that the behavior is the same and the container restarted.

I did not touch it; it updated on its own 17 hours ago. So --restart unless-stopped is no longer optional.


i guess i better add the --restart unless-stopped option back to my docker run commands
when i update, after 5f2777af9-v1.50.4-go1.17.5 becomes latest.
just to be on the safe side for now; i don’t really have any alerts running for if it goes down lol
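A minimal sketch of what that looks like; everything except the --restart unless-stopped flag and the image name (ports, environment values, mount paths) is a placeholder that needs adjusting to the node’s actual setup:

```shell
# Sketch of a storagenode run command with the restart policy back in.
# WALLET/EMAIL/ADDRESS values and the mount paths below are placeholders.
docker run -d --restart unless-stopped --stop-timeout 300 \
  --name storagenode \
  -p 28967:28967/tcp -p 28967:28967/udp -p 14002:14002 \
  -e WALLET="0x..." \
  -e EMAIL="operator@example.com" \
  -e ADDRESS="node.example.com:28967" \
  -e STORAGE="2TB" \
  --mount type=bind,source=/mnt/storj/identity,destination=/app/identity \
  --mount type=bind,source=/mnt/storj/data,destination=/app/config \
  storjlabs/storagenode:latest
```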

It would be nice if some of those INFO lines regarding the updater, which I am guessing are only visible with log level set to debug, would make it to the normal log level. I didn’t see anything in the normal logs explaining that the updater was running. Or perhaps some more appropriate log messages like “update found”, “starting update”, etc.

Nah, that’s not it. The log that is redirected to a file only contains the log of the storagenode process. The rest, which you are seeing here, is the docker container log. This used to be empty if you redirected logs to a file, but now contains the logs from the entrypoint script in the container, the supervisor, and the storagenode-updater process.

I’m pulling it from the synology interface here, but you can also see this log by using docker logs --tail 30 storagenode or whatever your container name is.


Uh, right, makes sense. Will need to bring back my docker-to-journald logging then; the updater logs will fit there nicely.
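For reference, routing a container’s log to journald is a one-flag change per container, or a daemon-wide default; both are standard Docker logging-driver options, and the container name here is an assumption:

```shell
# Per-container: pick the journald log driver at run time
# ("..." stands for the rest of your usual run options).
docker run -d --log-driver journald --name storagenode ... storjlabs/storagenode:latest

# Then read it back through journald:
journalctl CONTAINER_NAME=storagenode -f

# Daemon-wide default instead: set this in /etc/docker/daemon.json
# and restart the docker daemon:
#   { "log-driver": "journald" }
```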

If I’m not missing something, the issue that I found on the Raspberry Pi with the libseccomp2 package affects all the SNOs that run Raspberry Pi OS; I had to manually update the package to let the updater work.
How do you deal with that without inviting all the affected SNOs to do the same?
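For anyone hitting this on Raspberry Pi OS (buster), the workaround that was circulating at the time was pulling a newer libseccomp2 from buster-backports. This is a sketch of that approach, not an official Storj instruction; the key ID and repo line come from the commonly shared recipe and should be checked against current Debian documentation before use:

```shell
# Sketch: install a newer libseccomp2 from buster-backports on Raspberry Pi OS.
# Key ID and repo line are from the widely shared workaround; verify before use.
sudo apt-key adv --keyserver keyserver.ubuntu.com --recv-keys 04EE7237B7D453EC
echo "deb http://deb.debian.org/debian buster-backports main" | \
  sudo tee /etc/apt/sources.list.d/buster-backports.list
sudo apt update
sudo apt install -y -t buster-backports libseccomp2
```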


I think these nodes can be skipped until there is an image replacement for arm32 nodes.

Thanks to your report, we will be able to give support to node operators with arm32 nodes on how to update them so they continue working normally.
I do not have an exact plan on my hands, but perhaps the next release could have a fix for the arm32 issue.

OK so far for my Intel-based nodes.

No image available for my Raspberry running in 64-bit mode?

pi@pi-hole:~/storj/nodes $ uname -a
Linux pi-hole.discworld.intern 5.10.103-v8+ #1529 SMP PREEMPT Tue Mar 8 12:26:46 GMT 2022 aarch64 GNU/Linux
pi@pi-hole:~/storj/nodes $ uname -m
pi@pi-hole:~/storj/nodes $ docker pull storjlabs/storagenode:latest
latest: Pulling from storjlabs/storagenode
no matching manifest for linux/arm/v7 in the manifest list entries

The linux/arm/v7 image is 32-bit.

Please try:

docker stop -t 300 storagenode
docker rm storagenode
docker image rm storjlabs/storagenode:latest
docker pull storjlabs/storagenode:latest

If it’s pulled successfully, run the node as usual.

pi@pi-hole:~/storj/nodes $ docker image rm storjlabs/storagenode:latest
Untagged: storjlabs/storagenode:latest
Untagged: storjlabs/storagenode@sha256:4453c04c31a4d1f9d2cde1def7c060167839e40ad47c3b85773cf0feea447d97
Deleted: sha256:7f53bb91e460d0726bcb2e2c25fede7c16f4fb3dd6e4dc23040a2706b9194a98
Deleted: sha256:d106212781c1a8ea1925ded7a8033f593c46734ef4f7c0c7317ab258f1119ed4
Deleted: sha256:4e2e8e6576078609f9ce196ba787996593d91a7e90f7b1f0deb481caf578759f
Deleted: sha256:af360dea2572e3bd0768b6b946622ccf282e09a57af2af6e4b095ff3287397d6
Deleted: sha256:297ecc4591d13a2d961ce82409a0189fb6be2c3c5a34980669c989a369fbf4ce
Deleted: sha256:8ab62cb608752f090ed792ffe4ee6b6894dc90036d9d38d11e96afb0bb4741b1
Deleted: sha256:baeb230cf1e96ef50b85157080ff9b4f25e884d8971640a8cffa357ce5132611
pi@pi-hole:~/storj/nodes $ docker pull storjlabs/storagenode:latest
latest: Pulling from storjlabs/storagenode
no matching manifest for linux/arm/v7 in the manifest list entries

Now i don’t have an image to run. :zipper_mouth_face:


Isn’t aarch64 arm64/v8? I wonder why it’s trying to get a v7 image. While it’s not running, does this image work for you?


Or otherwise perhaps:

Maybe that can at least get your node up and running again until they fix the latest image.


With arm64v8 i got this warning:

WARNING: The requested image's platform (linux/arm64/v8) does not match the detected host platform (linux/arm/v7) and no specific platform was requested

and the nodes were in state ‘restarting’

arm32v6 worked and the nodes are back online.

Thank you


Ahh, good to see the v6 image works. That gives you some breathing room until v7 is fixed for the latest image. This one will probably work for a long time now anyway, since the updates will just happen inside the container. So there isn’t a real rush to go back to latest.

uname -a shows the 64 Bit kernel

pi@pi-hole:~/storj/nodes $ uname -a
Linux pi-hole.discworld.intern 5.10.103-v8+ #1529 SMP PREEMPT Tue Mar 8 12:26:46 GMT 2022 aarch64 GNU/Linux

The switch to 64-bit is made with an entry in /boot/config.txt: arm_64bit=1
But this only activates the kernel; the rest of the system is still 32-bit.

Perhaps this leads to the detection as a 32-bit system?

A 64-bit flavour of Raspberry Pi OS is available now; perhaps I need to create a new install with it.
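That theory is easy to check, since the kernel architecture and the userland word size can be queried separately (these are standard POSIX/Debian tools, nothing Storj-specific):

```shell
# Kernel architecture: reports aarch64 here even with a 32-bit userland.
uname -m
# Userland word size: prints 32 on a 32-bit userland, 64 on a 64-bit one.
getconf LONG_BIT
# On Debian-based systems, the package architecture (e.g. armhf vs arm64):
command -v dpkg >/dev/null && dpkg --print-architecture
```

A mixed result (aarch64 kernel, 32-bit userland) would explain Docker detecting the host as linux/arm/v7.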

The storage node rollout is finished. The docker images have been pushed as latest except arm32. The arm32 nodes will stay on the old version for a little bit longer. We are working on a fix for arm32:

Thank you everyone. With your help we have been able to identify this issue without crashing too many production nodes. Last but not least, don’t forget to switch back to storjlabs/storagenode:latest