A total newbie here. I deployed a Storj node on TrueNAS SCALE and made a single mistake: I didn't raise the default 500 GB storage limit. It turns out there was no way to redeploy (I tried changing the app's settings, but it wouldn't commit without --force). The only option left on the table was to wipe the app, keeping the volumes intact. That hit the same issue: the app couldn't be deployed because it complained there was already data there (yeah, about 25 GB worth).
Then I wiped out both volumes and redeployed the node, using the same hostname.
The problem is that my new deployment is alive and healthy (node ID 12MFET17LEg6Aw1AmnTWHD7xH8hdsgf3nCDEj895uNtpqHsKooL), yet I keep receiving emails regarding the old node every night (node ID 1ornTjsXgeC9W8N87dKmxYj7Wt4myRFHF2kMceVhaSvPR4WSEu).
Since the old node's identity was lost, I guess I have no way to decommission it.
Is there a way to stop these emails from coming, other than unsubscribing from them entirely?
So far you will receive a node-offline email, then a node-suspended email, then a disqualification email. You can just delete the ones for the old node. In total there should be three emails, since the suspension/DQ emails for the multiple satellites are grouped into the same email thread.
Thanks! Will this disqualification affect the current node, since it's running on a token registered to the same email address as the old one and using the same hostname?
Or you could edit the config file, or add a command-line parameter specifying the size; every parameter can be passed as an argument to the container, and arguments take precedence over the config file.
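For example, the setting behind the allocation is `storage.allocated-disk-space` in the storagenode's config.yaml, and the same setting can be passed as a container argument. A minimal sketch, assuming a 24 TB allocation (the value and restart step are placeholders for your setup):

```
# Option A: edit config.yaml in the node's config dataset, then restart the app
storage.allocated-disk-space: 24.00 TB

# Option B: pass the same setting as an extra argument to the container;
# arguments take precedence over config.yaml
--storage.allocated-disk-space=24TB
```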
Best option — set that to a petabyte and control size using dataset quotas.
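On TrueNAS that quota is just a ZFS property on the dataset backing the node's storage. A rough sketch, assuming a hypothetical pool/dataset name:

```
# cap the dataset that holds the node's data at 5 TiB (dataset name is a placeholder)
zfs set quota=5T tank/apps/storj-storage

# verify the quota took effect
zfs get quota tank/apps/storj-storage
```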
This is very weird - I can easily update the allocated size in the Edit section of the app (TrueNAS-SCALE-23.10.2/Dragonfish-24.04.2.3 and app Storj-2.0.1; ElectricEel-24.10.0 and Storj app v1.1.10).
You could also have reused the same identity and storage when reinstalling the app (you need to use the same app name unless you provided your own datasets for config and identity, in which case you just need to specify their paths), so you wouldn't have had to completely wipe that node.
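In plain Docker terms (the TrueNAS app exposes the equivalent as host-path/dataset fields), reusing a node just means pointing the new container at the old identity and storage directories instead of fresh ones. A sketch along the lines of the published docker setup, with placeholder paths and values:

```
docker run -d --restart unless-stopped --name storagenode \
  -p 28967:28967/tcp -p 28967:28967/udp -p 14002:14002 \
  -e WALLET="0x..." \
  -e EMAIL="you@example.com" \
  -e ADDRESS="your.ddns.example:28967" \
  -e STORAGE="24TB" \
  --mount type=bind,source=/mnt/tank/storj/identity/storagenode,destination=/app/identity \
  --mount type=bind,source=/mnt/tank/storj/storage,destination=/app/config \
  storjlabs/storagenode:latest
```

As long as the identity files and the storage directory are the originals, the node keeps its old node ID and reputation; only wiping them forces you to start over with a new identity.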