Storagenode updater with more manual control (brain / SNO storm)

I use manual updates… and there are quite a few others of us…

I'm not averse to automatic updates as such, but to ensure better uptime I prefer to be around when my software is being updated, so I can confirm that everything is running correctly before moving on.
This also lets me update a few days later than most others, so I can check the forum and get a better idea of whether an update is working as it's supposed to.

Initially, without having used watchtower / the storagenode updater at all, my idea is to add a button or something akin to that which would trigger an update within a short time frame (mainly to ensure too many storagenodes don't shut down at the same time, so there is some sort of coordination).

A scheduled option was also suggested… though the first real question seems to be how we communicate with watchtower / the storagenode updater.
It would be very practical in some cases if it was on the dashboard / SNOboard,
but since not everyone uses that, and some post theirs online, that may not be a good solution, even though it would work fine for me.

I'm not aware of what features are in watchtower, but something like the ability to send emails should be fairly straightforward. That would be a well-adapted, cross-platform solution that isn't tied to the local machine, and it could also double as a way to control the updates: watchtower asks for permission over email about whether, or when, it may update the storagenode.

I dunno… it seemed a bit rushed to create a feature vote on this yet, but I and others think it sounds very useful, so this thread is to get some more perspective on what could work, how it would be used, and whether it would be useful at all…

So let's discuss… and when we have a proper feature idea we can branch it off into a feature request vote.

Emailing / emailed commands to watchtower could work, and "should" be pretty easy to implement, though keep in mind I have next to no clue about how watchtower actually works… so I'm very much guessing here…

This sounds like a good job for a Python script! You would find some way to check the repo for Docker updates (requests + bs4); if there is an update, it emails you (ezgmail), and if you respond yes, it runs the update with subprocess. You could do it in other languages too, and with cron it would be easy to set it to run every hour or so.
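Something like this minimal sketch, assuming you watch https://version.storj.io for changes instead of scraping Docker Hub, and that credentials.json / token.json for EZGmail already exist (the file names, addresses, and image tag below are placeholders):

    # Rough sketch only, run periodically from cron.
    import subprocess
    import requests
    import ezgmail  # assumes credentials.json / token.json are already set up

    VERSION_URL = "https://version.storj.io"     # page to watch for changes
    STATE_FILE = "last_seen_version_page.txt"    # hypothetical local state file
    OPERATOR = "me@example.com"                  # placeholder address

    def read_last_seen():
        try:
            with open(STATE_FILE) as f:
                return f.read()
        except FileNotFoundError:
            return ""

    page = requests.get(VERSION_URL, timeout=10).text

    # 1) notify when the version page changes
    if page != read_last_seen():
        ezgmail.send(OPERATOR, "storagenode update available",
                     "version.storj.io changed; reply with 'yes' to update.")
        with open(STATE_FILE, "w") as f:
            f.write(page)

    # 2) on a later run, look for a "yes" reply and pull the new image
    for thread in ezgmail.unread():
        if "yes" in thread.messages[0].body.lower():
            subprocess.run(["docker", "pull", "storjlabs/storagenode:latest"],
                           check=True)
            # stopping and recreating the storagenode container would go here
            break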

It’s not simple to send email anymore.

Most corporately run "free" email services are a pain to work with now. I seem to remember Gmail now requires an application authorization and an associated key.

Any application that is distributed for installation would have to include that application password for Gmail access.

I run my own email servers, so it’s very easy for me… I simply add a new user.

It's much easier to set up push notifications. However, a really neat solution is adapting IPFS pubsub for notifications.

https://pypi.org/project/EZGmail/

Once you have the credentials.json file, the first time you run import ezgmail it will bring up a window asking you to log in to your Gmail account and allow “Quickstart” to access it. A token.json file will be generated which your script can use to access your account.
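Once those files exist, the first successful run boils down to something like this (addresses are placeholders; ezgmail.EMAIL_ADDRESS should show which account the token is bound to):

    import ezgmail

    ezgmail.init()  # reads credentials.json / token.json from the working directory
    ezgmail.send("you@example.com", "EZGmail test", "Token setup works.")
    print(ezgmail.EMAIL_ADDRESS)  # the Gmail account the token belongs to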

I haven't used Gmail in more than a decade. My experience with emailed notifications is that the login credentials are available in plain text at some point in the email process. Would you happen to know whether the credentials.json and token.json files are completely stripped of human-usable login information?

@anon27637763
I have used the library before and it's really easy to implement. You just go to a few Google websites and download a file. The token file contains an application-unique key. If I remember correctly, you also need to follow an authorization link the first time you use the library on a machine, to verify the credentials for that machine. Even if someone got your keys, they would also need to verify them with your Google account on their own machine, so they would already need your password before that, which makes taking control this way useless. It's still vulnerable to a remote takeover of your machine, though. I have a separate account just for automation, as an extra layer of caution.

I believe there is also the option of three-legged OAuth, but that would require a separate library.

Edit:
The keys are in a separate file, and they are similar to a set of API keys. They are transmitted over SSL/TLS to Google, so once they leave Python, only Google or a quantum computer can get at them.

A must-have for me is definitely a definable schedule, e.g. in a configuration file, so that I can make sure I'm available during the update and can monitor the node startup.
E.g. update every Saturday or Sunday between 8 am and 8 pm.

Another option would be to make it a simple update script that has to be run by a cron job. Then I'd have full control over when it tries to update. This, however, might not be in Storj Labs' interest, because too many nodes would update at the same time, since people tend to pick round times like 8:00. So the updater script would need some kind of random delay in it, turning an update at 8:00 into an update at a random time between 8:00 and 8:05, for example.
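A hedged sketch of that random-delay idea, as a small wrapper a cron entry could call (the file name, delay window, and image tag are made up):

    # update_with_jitter.py -- hypothetical wrapper for a cron entry like
    # "0 8 * * 6" (every Saturday at 08:00)
    import random
    import subprocess
    import time

    # spread the real work over the next 0-5 minutes so not every node
    # that picked "8:00" hits the registry at the same second
    time.sleep(random.randint(0, 300))

    subprocess.run(["docker", "pull", "storjlabs/storagenode:latest"], check=True)
    # stop / rm / run of the storagenode container would follow here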


I wasn't really thinking of anything that advanced… I just wanted watchtower to host an email server for this… but I dunno if that's a good or easy idea. I think even my server can send emails; I'm not sure exactly how, but I think its email service is built into the IPMI or something like that, so that the watchdog can send emails even if the server OS is down.

I haven't really used it for anything yet, or checked whether it still works…
Anyway, watchtower could simply host something like that, its own email service, and then maybe the nodes could ping other storagenodes, so they essentially become an uptime robot for each other.
And if there is a problem, they send an email…

That would be easy to catch somewhere else and then convert into SMS or whatever one wanted…
Yeah, I dunno… but yes, I think each watchtower should be its own email server, or a node in a collective email server, to control the nodes.

Why would you want to set it on a schedule, though? Doesn't that just make you forget at some point? That's why I want a trigger for it… I'm fine with it asking me to update… but that's basically just to verify that I'm around, so I can fix issues.

If I ran it on a schedule, I know that before long I wouldn't be around for it, and then I might as well be running auto-update. But maybe that's just me not being very organized.

The easiest way to support email notifications is having the user configure an SMTP server. Running an email server on the user's side is hardly possible: dynamic IP addresses are blocked by many providers, you'd also need a certificate since many providers block email servers with wrong or missing certificates, and some ISPs block the email ports, etc. The only reasonable way is specifying an SMTP configuration.
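As an illustration, a notification through a user-supplied SMTP configuration is only a few lines of Python with the standard library (host, port, and credentials below are placeholders the operator would put in a config file):

    import smtplib
    from email.message import EmailMessage

    # values the operator would supply in a config file (all placeholders)
    SMTP_HOST = "smtp.example.com"
    SMTP_PORT = 587
    SMTP_USER = "notifier@example.com"
    SMTP_PASS = "app-password"

    msg = EmailMessage()
    msg["From"] = SMTP_USER
    msg["To"] = "operator@example.com"
    msg["Subject"] = "storagenode update pending"
    msg.set_content("A new storagenode version is available.")

    with smtplib.SMTP(SMTP_HOST, SMTP_PORT) as server:
        server.starttls()              # upgrade to TLS before authenticating
        server.login(SMTP_USER, SMTP_PASS)
        server.send_message(msg)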

If you forget then that’s your problem :stuck_out_tongue_winking_eye:
For Storj Labs it's important that the node updates at some point, so a scheduled update would simply go ahead at the scheduled time without asking.
If you need a confirmation, you have more control, but the updater becomes a lot more complex. I was just proposing the easiest possible solution.
(However, even with a confirmation the updater would only wait e.g. 7 days and then forcefully update. So if you forget for 7 days, it would make no difference :rofl: But it's harder to forget for 7 days while receiving an update notification every day.)

Well, until recently the oldest acceptable version was 1.9.5 or whatever,
and that's nearly 3 months old…
Maybe watchtower should update to an older version, so it's like one step behind. That would ensure passively run nodes get the most stable updates, because those have been verified by all the active people…

I think that would be a must: running an older, stable version on watchtower-updated nodes, for stability.

But yeah, I agree it should auto-update eventually… just maybe a bit slower than one would think. Then those hunting for new features can push the update early and test whether the new releases are good, and because they are more active, problems are less likely to go unnoticed.

At some point there should be a "stable" branch and a "beta" branch you can choose to update to.

Regarding email notifications, I think the simplest solution would be to have a newsletter with updates. Actually two newsletters: one for Windows, another for Docker. Opt-in, whoever wants to be notified will be notified. It used to be a popular practice for open-source projects to have “announce”-style low-traffic newsletters with notifications about new versions, especially security updates.

Regarding manually triggering the update itself: I'm already using a shell script for that, so it takes me a few seconds, tops.


I use my own home-made updater for Windows; it updates all my nodes on Windows automatically as a service, and I'm happy with that.


Port 25 is blocked by most US ISPs. I'm uncertain about European ISPs.

There's nothing stopping anyone with a public IP address from setting up an email server. The problem is getting email delivered on the other side. Dynamic IP address blocks are usually added to blacklists by ISPs. So if a home user on a consumer connection sets up an email server, all outgoing mail will be immediately rejected by corporate servers.

I use postfix, so this line in main.cf will reject almost all home-run email servers on dynamic IPs:

smtpd_client_restrictions = reject_rbl_client cbl.abuseat.org


About TLS for email services…

The TLS requirement for server-to-server communication is opportunistic and unauthenticated… meaning a self-signed cert works just fine and will not trigger a warning on the recipient side.

If one is running a POP3 or IMAP client against the mail server, that service will require its own TLS cert… which could be the same cert, but doesn't need to be.

IMAP_Client → TLS (B) → postfix/exim → TLS (A) → Internet → gmail_incoming

The TLS (A) cert can be self-signed. The TLS (B) cert should not be self-signed.

I use Let's Encrypt for all certs… which makes everything easy. Life was much harder when Domain Validation (DV) certs were harder to acquire.



However, running an email server could be as simple as:

apt-get install postfix

The default configuration is not terrible, but not very good. A default config will probably get broken into within a few months.

The most important thing to remember on all new domains is to put a "reject" DMARC policy in DNS, and only change that if and when you set up your mail services. A domain's reputation is judged much more on how easily it can be abused than on actual abuse originating from the domain. The DMARC reject is important even if you never intend to send email.

DMARC DNS record information:

https://dmarc.org/overview/
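For reference, a "reject" policy is a single TXT record on the domain; a minimal example (example.com is a placeholder):

    _dmarc.example.com.  IN  TXT  "v=DMARC1; p=reject;"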

Yes, that's what I meant.

This isn't necessarily true anymore. Especially in Germany and the EU, more and more corporations require stronger verification: the host address has to resolve correctly and the certificate of the mail server has to match the hostname. No more self-signed certificates.

There are several options you could look at.

  1. Use https://www.followthatpage.com/ and enter the URL https://version.storj.io/. You will then get an email when there's a new version available (since the page content will change), and you can update the container a few days later. You will have to remember to do so, of course.
  2. Watchtower has a read-only option: it will write to the log file when it wants to update a container. Look for WATCHTOWER_MONITOR_ONLY on https://containrrr.dev/watchtower/arguments/ and you can then monitor the log file. You can also send notifications to services other than mail, see https://containrrr.dev/watchtower/notifications/
  3. Watchtower can be run on a schedule with --schedule "0 0 4 * * *"; this will run every day at 4 am. Change it accordingly if you want to update on the weekend, for example (see the arguments page above). A combined example of options 2 and 3 follows below.
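Untested, but based on the linked arguments page, options 2 and 3 could be combined roughly like this (the Saturday 4 am schedule is just an example):

    docker run -d --name watchtower \
      -v /var/run/docker.sock:/var/run/docker.sock \
      -e WATCHTOWER_MONITOR_ONLY=true \
      containrrr/watchtower --schedule "0 0 4 * * 6"

Watchtower would then only log (and optionally notify) when an update is available, and you run the actual update yourself whenever it suits you.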

Before you go too deep into this:

  • the storagenode will stop functioning normally below the minimal version (it will not be paid, will stop receiving any traffic, and then will just stop working)
  • there is a Linux updater almost done
  • all versions will use an autoupdater to synchronize with https://version.storj.io (including docker)

So, I would recommend using automatic updates to prevent a loss.

This is not supposed to replace the auto-updaters Storj gives us… it's to make them actually useful for those of us who don't trust things to be updated while we aren't around to deal with issues that can arise…

I'm all for auto-updates, as long as I can control them at least to a limited degree… like with Windows Update before it started getting a mind of its own…

There are many of us who will not accept software that just updates whenever it wants to… it's not a matter of if it will fail, it's a matter of when.

Not that manual updating is foolproof, which is why combining the two makes something better:
allow people some manual control, so they can keep an eye on the system when it updates and verify that everything is good before and after an update…
And if people still forget, eventually the auto-updater takes over and just updates,
maybe using a slightly older "stable" release that has been tested by the more active users
and is thus much less likely to contain issues that cause problems during an auto-update.

@ general thread
If anyone else has ideas they think should be included in the write-up for a feature vote, throw them on this thread…

I'll do a write-up soon and then let the Storj team deal with how to make it all work together… maybe they could set up a mail server for us to communicate with all the storagenode updaters / watchtower…

Isn't watchtower the one people use for Docker? Why would we need another updater if there already is one?


That can't happen, because it opens the door to spam. It would need to be completely controlled by Storj Labs.

Watchtower has some issues; mainly, it has root access to your server, which is a security risk many don't want to take. It also doesn't really allow you to update on your own schedule, unless you use the original watchtower and only run the container when you want to update.
Watchtower, IMHO, is just an ugly workaround.