If you can SSH into the box just give it a go.
I didn’t really have to install anything on my ReadyNAS (which is running on Debian Jessie)
They’re just single binaries, just like the identity tool you would use to create and sign an identity.
Thank you for your replies, but they don’t answer my question.
When you compile a binary, it will usually link against the existing libraries in your OS. You can change that so every library it uses is included in the one binary, making it monolithic (statically linked) and a lot bigger.
In the SN dir you can type
file storagenode
and the file program will tell you whether it is statically or dynamically linked. The ldd
command will list any shared-library dependencies.
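To make this repeatable on any box, the two commands can be wrapped in a tiny helper (a sketch; `check_linkage` is a hypothetical name, and the storagenode path is just an example):

```shell
# check_linkage: report whether a binary is statically or dynamically
# linked. ldd prints "not a dynamic executable" for static binaries,
# otherwise it lists the shared libraries the binary depends on.
check_linkage() {
    if ldd "$1" 2>&1 | grep -q "not a dynamic executable"; then
        echo "static"
    else
        echo "dynamic"
    fi
}

# Example (adjust the path to your install):
# check_linkage /opt/storagenode/bin/storagenode
```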
Sure. Results of the file
command.
opt/storagenode/bin/storagenode: ELF 32-bit LSB executable, ARM, EABI5 version 1 (SYSV), statically linked, Go BuildID=BpNGZHbVAace4Y0roiw5/955rEsZQL596yf_F6nUi/dt53R-rXM6lBcY6jCXL2/V9Wr5B7yu7L9wA3rFS4E, not stripped
Results of the ldd
command:
not a dynamic executable
Thank you! I just got the same from ldd. The file
command doesn’t exist on this NAS, but I can run the program. So it’s monolithic.
Now to see whether it will run in whatever is left of 512 MB RAM!
I would like to report two non-critical issues:
- On the first start of the storagenode-updater, we have this in the log:
- The storagenode-updater uses the same debug-endpoint option (
debug.addr: 0.0.0.0:7777
) from the config file and overlaps the storagenode debug endpoint
For the second issue, I just added a workaround to the storagenode-updater service exec string: --debug.addr "127.0.0.1:0"
[Unit]
Description = Storage Node Updater service
After = syslog.target network.target
[Service]
Type = simple
User = storj-storagenode
Group = storj-storagenode
ExecStart = /opt/storagenode/bin/storagenode-updater run --config-dir "/etc/storagenode/config" --binary-location "/opt/storagenode/bin/storagenode" --service-name "storagenode" --debug.addr "127.0.0.1:0"
Restart = on-failure
NotifyAccess = main
[Install]
Alias = storagenode-updater
WantedBy = multi-user.target
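An alternative, if you would rather not edit the packaged unit itself, is a systemd drop-in override carrying just the changed ExecStart (a sketch mirroring the unit above; run systemctl daemon-reload after creating the file):

```ini
# /etc/systemd/system/storagenode-updater.service.d/override.conf
[Service]
# An empty ExecStart clears the original before the replacement is set.
ExecStart =
ExecStart = /opt/storagenode/bin/storagenode-updater run --config-dir "/etc/storagenode/config" --binary-location "/opt/storagenode/bin/storagenode" --service-name "storagenode" --debug.addr "127.0.0.1:0"
```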
Also, I would like to recommend adding standard logging to /var/log/storagenode
and /var/log/storagenode-updater
It works on Debian/Ubuntu systems. (I am using Debian 10.)
mkdir -p /var/log/storagenode/
mkdir -p /var/log/storagenode-updater/
/etc/rsyslog.d/storagenode-updater.conf
if ( $programname == "storagenode-updater" ) then {
action(type="omfile" file="/var/log/storagenode-updater/storagenode-updater.log")
stop
}
/etc/rsyslog.d/storagenode.conf
if ( $programname == "storagenode" ) then {
action(type="omfile" file="/var/log/storagenode/storagenode.log")
stop
}
service rsyslog restart
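After restarting rsyslog, the routing can be sanity-checked by sending a tagged test message and looking for it in the new file (a sketch; `verify_routing` is a hypothetical helper, and its overridable arguments exist only to make it easy to exercise outside a live setup):

```shell
# verify_routing: send a syslog message tagged "storagenode" and confirm
# it shows up in the dedicated log file. Arguments (both optional):
#   $1 - log file to check, $2 - command the message is piped into.
verify_routing() {
    logfile="${1:-/var/log/storagenode/storagenode.log}"
    logcmd="${2:-logger -t storagenode}"
    msg="storagenode rsyslog routing test $$"
    printf '%s\n' "$msg" | $logcmd >/dev/null
    sleep 1   # give rsyslog a moment to write the file
    if grep -q "$msg" "$logfile"; then
        echo "routing OK"
    else
        echo "test message not found in $logfile" >&2
        return 1
    fi
}

# Against a live rsyslog setup:
# verify_routing
```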
And log rotation too:
/etc/logrotate.d/storagenode
/var/log/storagenode/storagenode.log
{
rotate 60
daily
missingok
dateext
copytruncate
notifempty
compress
}
/etc/logrotate.d/storagenode-updater
/var/log/storagenode-updater/storagenode-updater.log
{
rotate 60
daily
missingok
dateext
copytruncate
notifempty
compress
}
service logrotate restart
PS. I will keep monitoring how it works (Linux storagenode & updater) and report the results here.
Hey @littleskunk Happy New Year!
Any idea when there will be an installer for the Linux ARM Storage Node please? (even if it’s not prod ready)
I’m gonna be honest, you won’t need an installer.
If I managed to get one running (and I’m not a very clever guy), then you can easily do it too
Instructions are here
I think we should wait at least 2-3 full update cycles and then report what’s going on on our pioneer storage nodes and updaters. It will be helpful for the upcoming release with the installation script (a standard Linux installer).
As far as I’m aware it’s exactly the same software running on docker, sans the docker container… I can’t imagine there is much to report.
As you can see, I have already reported a couple of things. The software is the same, but the way it is run is completely new, and I think it should be well tested before going live to production; this is the reason we should wait for more full update cycles. If nothing happens during these cycles, we can simply report: everything is working fine
“Everything is working fine”
Wow! But only one cycle is finished.
Let’s wait at least the next two cycles. I see the second cycle is about to start right now.
For what it’s worth, I’ve been running the native binary and updater on one of my nodes since 1.15.3 and it’s successfully updated now 1.15.3 -> 1.16.1 -> 1.17.4 -> 1.18.1.
@littleskunk I would like to report two non-critical issues that appeared during the auto-update process:
The updater does not recognize some parameters from the config file (red arrows on the screenshot).
However, the upgrade process went smoothly
@littleskunk I have a proposition for the storage node updater service. Is it possible to add an email notification when the updater finds a new version and updates the storagenode service (and itself too)?
Just sending a piece of the log would be more than enough:
like the watchtower notification:
Found new storjlabs/storagenode:latest image (sha256:03ced45de7a2ac4bc66b892c8611f60c26020004c9dd64746a167795f0b129e7)
Stopping /storagenode (c1f23a7c9982be3d32b61497a91a5684f5998e6270b63df6ee532a8f84f6cc86) with SIGTERM
Creating /storagenode
Removing image sha256:bb347b79869d3c3e975ed61b6b535e0f076f8ecc5e85a7d9938367144aa3960c
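Until something like that is built in, a small cron script can approximate it by watching the reported version and mailing when it changes (a sketch; `notify_on_update`, the state-file path, and the mail delivery are all assumptions, not part of the updater):

```shell
# notify_on_update: print a notice (which cron or `mail` can deliver)
# whenever the current version differs from the last recorded one.
#   $1 - state file remembering the last seen version
#   $2 - version string reported right now
notify_on_update() {
    state="$1"
    cur="$2"
    last=$(cat "$state" 2>/dev/null)
    if [ -n "$cur" ] && [ "$cur" != "$last" ]; then
        echo "storagenode version changed: ${last:-none} -> $cur"
        printf '%s\n' "$cur" > "$state"
    fi
}

# From cron, e.g. every 15 minutes (pipe the output to `mail -s ...`):
# notify_on_update /var/lib/storagenode/last-version \
#     "$(/opt/storagenode/bin/storagenode version | head -n 1)"
```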
@littleskunk the second full cycle is successfully done:
There are small non-critical errors, but the updates and the service are working fine
Those non-critical errors are just systemd artifacts from when it does the restart. The storagenode-updater sends a SIGINT and relies on systemd to restart it. See the Restart = on-failure
line in the systemd service file. I get the same thing on my end.