yeah 1.16.1 with watchtower working
My apologies, I was wrong, the source path is the same. I was confused because the panic doesn’t come from the
make function; I thought that it did when I investigated the case in the other post, and I didn’t compare both log files line by line.
On the other hand, I’ve found that this could happen even on a 64-bit architecture, but the cause is different, although the crash comes from the same part.
While on a 32-bit architecture it could be due to a negative length being passed to the
make function, that cannot happen on 64 bits; what can happen on both architectures is that if the length is a very big number the
make function panics with
panic: runtime error: makeslice: len out of range
This panic is different from using a number which isn’t that big but exceeds the maximum memory available on the system, which is
fatal error: runtime: out of memory
I can say this now because I’ve found https://github.com/golang/go/issues/38673 and I tried a minimal main file that I executed to reproduce both of the different panics mentioned.
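A minimal sketch along the lines of the repro in that issue (the `tryMake` helper name is mine, not from the storagenode code): a negative length or an absurdly large length both trigger the recoverable `makeslice: len out of range` panic, whereas a length that fits the allocator but exceeds available memory dies with the unrecoverable `fatal error: runtime: out of memory`.

```go
package main

import "fmt"

// tryMake attempts make([]byte, n) and converts a makeslice panic
// into an error. "makeslice: len out of range" is a regular runtime
// panic and can be recovered; "fatal error: runtime: out of memory"
// cannot, so it is not demonstrated here.
func tryMake(n int) (err error) {
	defer func() {
		if r := recover(); r != nil {
			err = fmt.Errorf("%v", r)
		}
	}()
	_ = make([]byte, n)
	return nil
}

func main() {
	fmt.Println(tryMake(10))      // a sane length: no error
	fmt.Println(tryMake(-1))      // negative length at run time: panics
	fmt.Println(tryMake(1 << 62)) // far beyond any possible allocation on 64 bits: panics
}
```

Note that the negative-length case only reaches `make` as a runtime value; a negative constant would already be rejected at compile time.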
The good thing is that
v1.17.4 had a commit to fix the problem that could only happen on the 32-bit architecture, and later there was another commit that limits the number to a reasonable size (https://github.com/storj/storj/commit/41d86c098576922afaf8002b060ba83f2b5fd802); that commit also landed in
v1.17.4, so it should fix this issue too.
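The idea behind that second commit can be sketched as a bounds check on a length read from disk before it ever reaches `make`. This is an illustrative example, not the actual storagenode code: `readEntry`, the 1 MiB cap, and the length-prefixed layout are all assumptions made up for the sketch.

```go
package main

import (
	"encoding/binary"
	"fmt"
)

// maxEntrySize is a hypothetical cap on a single entry, mirroring the
// idea of rejecting unreasonably large lengths read from a possibly
// corrupted orders file before calling make.
const maxEntrySize = 1 << 20 // 1 MiB, illustrative value

// readEntry decodes one length-prefixed entry from buf, refusing
// lengths that only a corrupted file could contain.
func readEntry(buf []byte) ([]byte, error) {
	if len(buf) < 4 {
		return nil, fmt.Errorf("short header")
	}
	n := binary.BigEndian.Uint32(buf[:4])
	if n > maxEntrySize {
		return nil, fmt.Errorf("entry length %d exceeds limit %d (file is likely corrupted)", n, maxEntrySize)
	}
	if len(buf)-4 < int(n) {
		return nil, fmt.Errorf("truncated entry")
	}
	data := make([]byte, n) // safe: n is bounded by maxEntrySize
	copy(data, buf[4:4+int(n)])
	return data, nil
}

func main() {
	good := []byte{0, 0, 0, 2, 'h', 'i'}
	b, _ := readEntry(good)
	fmt.Printf("%s\n", b)

	bad := []byte{0xff, 0xff, 0xff, 0xff} // garbage length from a corrupted file
	_, err := readEntry(bad)
	fmt.Println(err)
}
```

With a check like this, a corrupted length produces an ordinary error that the node can log and skip, instead of crashing the whole process.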
At least for now, moving the files seems to have solved the problem (the node has now gone 2 h 36 min without restarts), but I can’t be sure till some days have passed.
You are safe as long as no other corrupted order file is newly created; if that happens, you have to proceed as before until your node gets updated to
v1.17.4. Note we are rolling out
v1.17.4 and Docker images should be published in less than a week from now.
The weird thing about this is that I have 2 nodes running on the same Raspberry, and only 1 of them has had this issue.
It isn’t weird at all. Basically one of your nodes has created a corrupted order file while the other hasn’t.
I have 2 nodes running and I’ve been lucky that they have never had this problem.